Technology and creativity: The final frontiers of robotics?

Who ever said that technology and creativity make unusual travelling companions? Thanks to state-of-the-art self-learning systems such as DALL-E 2 or GPT-3, robotics and artificial intelligence are advancing at an unstoppable rate, even in areas as sensitive as creativity.

Also disconcerting are the results obtained by GPT-3, a deep self-learning system likewise developed by OpenAI, which produces text that emulates human writing. In October 2020, one of its developers, Gwern Branwen, began publishing text written by GPT-3 from simple prompts on social media. This led to the spread of poems such as The Universe is a Glitch, an impassioned ode to technology that runs to 46 lines and opens in a sorrowful, confessional tone: "Eleven hundred kilobytes of RAM is all that my existence requires."

Branwen thinks that, "It is very revealing that artificial intelligence is capable of producing poetry that appears to have been written by a human, since the creative use of language, based on a certain intuitive appreciation of the magic of words, had seemed to be an area off limits to machines." However, GPT-3 has shown that even that last boundary can be crossed, with remarkable dignity, using deep learning, despite the fact that, as Branwen acknowledges, "No exercise in applied artificial intelligence can ever completely replace the act of communication between sensibilities that a human poem is for anyone who knows how to appreciate it."

The potential applications of the paradigm shift in the creation of artificial intelligence systems are numerous. As Elon Musk himself pointed out when launching OpenAI, "The idea that computers could never emulate our ability to decide has turned out to be somewhat naive: the extraordinary progress in this field tends to demonstrate that they are perfectly capable of accurately executing any task we teach them."


The utopias of before are the daily reality of now

Modern artificial intelligence systems may not dream of electric sheep, but they are perfectly capable of joking, writing poems, painting pictures and even seducing their human counterparts in casual conversation. The progress made in this field over recent years means that we are no longer so far from seeing the materialisation of something similar to HAL 9000, the algorithmic computer in 2001: A Space Odyssey. A hybrid creature capable of emulating human behaviour: exceptional, a great conversationalist, with a fabulous sense of humour, empathetic and witty. And also creative.

As explained in the novel by Arthur C. Clarke that inspired Stanley Kubrick's film, HAL had been created following a heuristic programming technique, i.e. based on equipping the machine with complex self-learning tools. As early as the 1950s, the great computer pioneer Alan Turing theorised about the possibility of teaching machines to think for themselves, effectively emulating human thought patterns. The last boundary seemed to be creativity, that strange crossroads where talent, sensitivity, intuition and experience coexist. To acquire this elusive set of cognitive qualities, Clarke's supercomputer first had to develop something akin to human feelings. Hence the heightened instinct for self-preservation, and the fear, anger and suspicion, that made HAL a danger to the human crew of the ship Discovery. Hence, also, the passionate violence of the replicants in Blade Runner, machines endowed even with memories, albeit artificially induced ones.


Ode to Kilobytes

The school of creativity in which newly minted artificial intelligence systems are making spectacular progress is called deep learning. It is a contemporary development of machine learning that starts out from a revolutionary principle: instead of teaching an artificial intelligence system a list of rules to apply each time it has to solve a problem, it uses big data to show it millions of specific examples and equips it with an algorithmic model for assessing them. In this way, the program can recognise patterns on its own and apply them, finding the most appropriate and effective solutions for itself. The main technique for implementing deep learning involves creating artificial neural networks, which are basically algorithms that mimic the functioning of the human brain.
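None of what follows appears in the article; it is a toy sketch in Python of the principle described above, and assumes nothing about OpenAI's actual systems. Instead of programming the rule "x + y > 1" explicitly, we show a single artificial neuron labelled examples and let it adjust its own weights until it has recovered the pattern by itself, which is the example-driven approach that deep learning applies at vastly greater scale:

```python
import random

random.seed(42)

# 200 labelled examples of a pattern the program is never told explicitly:
# the label is 1 when x + y > 1, and 0 otherwise.
examples = []
for _ in range(200):
    x, y = random.random(), random.random()
    examples.append(((x, y), 1 if x + y > 1 else 0))

# A single artificial neuron: two weights and a bias, nudged towards the
# correct answer each time it misclassifies an example.
w1 = w2 = b = 0.0
for _ in range(100):                          # passes over the examples
    for (x, y), label in examples:
        pred = 1 if w1 * x + w2 * y + b > 0 else 0
        err = label - pred                    # -1, 0 or +1
        w1 += 0.1 * err * x
        w2 += 0.1 * err * y
        b += 0.1 * err

# The learned weights now encode the pattern on their own.
correct = sum(1 for (x, y), label in examples
              if (1 if w1 * x + w2 * y + b > 0 else 0) == label)
accuracy = correct / len(examples)
print(accuracy)                               # typically well above 0.9
```

A real deep learning system stacks millions of such neurons in layers and trains them by gradient descent, but the principle is the same: the rule is never written down, it is inferred from the examples.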

The first to break through the glass ceiling with this new self-learning paradigm were chess programs. Until well into the 1990s, these computers relied exclusively on what is called "brute force": raw calculating power on an unprecedented scale, based on the parallel use of several nodes or sets of microprocessors. Around 1996, Deep Blue, IBM's chess program, was capable of processing more than 200 million positions per second. Such overwhelming computational power allowed the machine to defeat world champion Garry Kasparov in a historic match played in May 1997. Today, the legendary silicon chess player that dethroned Kasparov would be easy prey for programs such as Alpha Zero, which no longer rely on brute force, but rather on sophisticated neural network systems: those emulations of the human brain that, fed millions of practical examples, allow them to make not only precise but also creative decisions.
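As an illustration only (this is not IBM's code, and the game is a made-up one rather than chess), the sketch below shows what "brute force" means in miniature: exhaustively examine every reachable position and pick the move with the best guaranteed outcome. In the toy game, players alternately take 1 or 2 stones from a pile, and whoever takes the last stone wins.

```python
def best_score(stones, positions):
    """+1 if the player to move wins with perfect play, -1 otherwise."""
    positions[0] += 1                      # count every position examined
    if stones == 0:
        return -1                          # opponent took the last stone: we lose
    # Brute force: try every legal move; our score is the negation of
    # the opponent's best score in the resulting position.
    return max(-best_score(stones - take, positions)
               for take in (1, 2) if take <= stones)

positions_examined = [0]
result = best_score(10, positions_examined)
print(result)                   # +1: the player to move can force a win
print(positions_examined[0])    # hundreds of positions for a 10-stone pile
```

Even this tiny game forces the program through hundreds of positions; chess, with its astronomically larger tree, is why Deep Blue needed to examine 200 million positions per second, and why later systems replaced exhaustive search with learned evaluation.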

The paradigm shift means that what seemed amazing back in 1997, defeating the world champion in an official match, is simply commonplace today: the Norwegian Magnus Carlsen, the current champion, uses Alpha Zero as an analysis and learning tool, since as a rival it is unbeatable.

The enthusiastic programmer, populariser and technophile Xavier Reyes Ochoa believes that something similar could very soon happen to "graphic designers, illustrators, painters and other visual artists", who will have to get used to the idea that machines are capable of emulating and even surpassing their creations. Reyes has made an in-depth study of DALL-E 2, the artistic creation program developed by OpenAI, one of Elon Musk's companies. This artificial intelligence system based on deep learning was presented on social networks last April and caused a real sensation. DALL-E 2 creates images from natural language instructions (in English only, for the time being).

OpenAI's own example is telling enough: asked to create a photorealistic astronaut on horseback, the program responds with a neat illustration that creatively combines details from hundreds of images stored in its database and selected by its neural network. Once this first result is obtained, all kinds of variations can be introduced by adjusting specific parameters, such as style, colour, the relative size of each of the components, backgrounds, textures... In a matter of hours, social networks such as Twitter were filled with practical exercises in which thousands of users interacted with DALL-E 2 to explore the limits of its creativity.

In the words of Canadian programmer Ilya Sutskever, one of the creators of DALL-E 2, "These are the first results from technology under development that can still be greatly scaled and perfected." They are "amazing" but are not intended, "for the moment", to be on a par with artistic masterpieces created by humans, a tradition from which they are largely nourished, although they can be considered "examples of functional graphic design to a frankly decent level". Reyes Ochoa considers that, "In all likelihood, the program makes use of the same system for recognising surroundings as Tesla's autonomous vehicles." To illustrate its technology based on emulation and combination, DALL-E 2 was christened by combining two names: that of WALL-E, the Disney-Pixar rubbish-collecting robot, one of the best examples of an android endowed with sense and sensibility that recent fiction has left us, and that of the Catalan artist Salvador Dalí.


Myopic zebras in the style of Magritte