September welcomes the start of another academic year and the media has been busy as usual covering the latest in Science, Technology, Engineering and Maths (STEM) news. As the skills gap continues to widen, Politics Home reports that primary school teachers are struggling to engage students with STEM subjects. Increasingly, young people are having to take responsibility for their own development in these areas, dedicating their free time to learning about the latest technologies.

Over the summer we shared with you the list of IET open days taking place across the UK. We hope you and your families got the chance to attend (if you did, please do share your experience with us on Twitter, we’d love to hear from you). To follow our own advice, we also decided to delve a bit deeper into tech over the six-week holiday and attended the critically acclaimed ‘AI: More than Human’ exhibition at the Barbican.

Here’s our Junior Account Manager Rose’s account of the show, broken into two parts. This is part one:

Sometimes you can see it, other times you can’t: Artificial Intelligence has a habit of sneaking up on us when we least expect it. Whether it’s the use of facial recognition in London’s King’s Cross or the Cambridge Analytica scandal, there are many who are wary of this fast-developing technology, and understandably so.

However, are our fears more to do with how the technology is used, rather than the technology itself? If it’s the former, we need to ask some difficult questions about ethics. Do we trust Homo sapiens to implement technology for the greater good of mankind, the planet and the other species that live here? ‘AI: More than Human’ at the Barbican prompted many such questions. It explored how civilisations across the centuries have worked, albeit sometimes unknowingly, towards today’s rapidly developing world of advanced technologies. But just as any good exhibition should, it also offered some very interesting answers as to how and why the AI revolution has happened, and what the future may look like if we continue in the same vein.

How long have we wanted to create robots?

The exhibition opened with ‘The dream of AI’ and showed how humans have always been curious about the artificial creation of living entities, whether through magic, science, religion or illusion. From the belief in sacred spirits living within inanimate objects in Shintoism through to the Gothic literature of the nineteenth century, the early roots of AI have manifested themselves in different ways across various cultures as far back as 400 BCE.

 

Take, for example, the religious tradition of the Golem in Judaism. According to the Talmud, a Jewish holy text, this mythical figure originated as dust or clay ‘kneaded into a shapeless husk’ and was brought to life through complex, ritualistic chants described in Hebrew texts. The above image, taken from the artist Lynne Avadenka’s book ‘Breathing Mud’, explores the relationship between sacred letters and the life they give to the Golem, and by extension to the world. It reminds me of early mathematical diagrams and the code so often used today to program otherwise inanimate objects such as robots.

Apparently, Jewish mystics in southern Germany attempted to create a Golem in the Middle Ages, believing the process would bring them closer to God. Is humankind’s fascination with creating artificial life a spiritual exercise after all?

The Uncanny Valley

Later in this section of the exhibition, the Gothic tradition of the nineteenth century was cited as significant. Gothic novels such as Mary Shelley’s ‘Frankenstein’ (1818) and Bram Stoker’s ‘Dracula’ (1897) blur the line between the living and the dead and evoke an emotional response of terror – yet people continue to enjoy these novels and the many films and television series that have stemmed from them.

Is it the element of the uncanny within these stories that appeals to us? Sigmund Freud’s essay ‘The Uncanny’ (1919) defines the ‘uncanny’ as ‘belonging to all that is terrible – to all that arouses dread and creeping horror’, but, according to English professor Jen Boyle’s interpretation of the text, it also explains that the uncanny arises when ‘something unfamiliar gets added to what is familiar’.

Perhaps this is why we are so perturbed by Count Dracula, essentially a human being with a deathlike twist, or by Frankenstein the great inventor, who made a monster in a scientific experiment using electricity and human body parts.

These creatures remind us of ourselves – they’re part human, part monster. But instead of supporting the positive self-image we like to preserve, they highlight the darker side of our psyches, exposing the capacity of human beings to become twisted and give in to their innermost desires.

‘AI: More than Human’ goes even further in its exploration of the uncanny and its relationship to AI. The Uncanny Valley, a hypothesised relationship between how closely an object resembles a human being and the emotional response it provokes, was demonstrated in a graph (see below). It shows that as a robot’s appearance becomes more human, people respond with increasing empathy, until it looks almost, but not quite, human (as with some social humanoid robots), at which point the response quickly turns to strong disgust.

The Uncanny Valley Graph

Equally, if AI takes on too many human qualities, such as empathy, creativity and leadership, many of us become perturbed, something continually reflected in today’s news headlines.

Mind machines

The exhibition continued with a close look at the technological developments of the 19th and 20th centuries, when the belief that rational thought could be systematised and turned into formulaic rules became more prevalent. Ada Lovelace, often considered the world’s first computer programmer, wrote of a ‘calculus of the nervous system’ in a letter as early as 1844. As a young girl she was a particularly keen mathematician and was taken by her mother to see a demonstration model of the Difference Engine, the first computing machine designed by Charles Babbage. Ten years later she worked with Babbage on the Analytical Engine, a general-purpose computer designed with a store of 1,000 numbers of 40 decimal digits each. Its programming language was very similar to that used nearly a century later by Alan Turing in his specification of the Bombe.

During WWII, the Bombe was used by Turing to decode messages sent by the Germans. It played a pivotal role in enabling the Allies to defeat the Nazis, and it also led to the development of many other computers, such as ENIAC (1946) and UNIVAC (1951).

One of the most significant developments in the history of AI happened in 1956 at the Dartmouth Conference, a two-month event organised by computer scientist John McCarthy. Everybody who was anybody in the world of computing attended to work on the problem of how machines could use language, form concepts and improve over time. It may not have met everybody’s expectations, but it was there that the term ‘artificial intelligence’ was coined. The UK followed with the ‘Mechanisation of Thought Processes’ conference in 1958.


It would be only a matter of 30 years or so before the Golden Era of computer technology began (think Windows 95!) and the Massachusetts Institute of Technology (MIT) built its first robots. Attila was also the first robot I saw at ‘AI: More than Human’, and for me it marked the great leap humans have made from stationary thinking machines to animate digital creatures.

This was when the exhibition took a turn into the world of AI as we’ve come to know it today. In part two, I’ll explain how ‘AI: More than Human’ showed the many possible benefits of AI, such as its potential to eradicate illnesses and produce whole new food groups. It also examined AI’s darker side – the inherent prejudices it can hold and its capacity ultimately to govern society.

Until next time. 🤖