Bots and AI Art Engage Differently with the Digital Commons

“AI: More than Human,” an exhibition that appeared at London’s Barbican Art Gallery this past summer and can now be seen at the Forum in Groningen, the Netherlands, mirrors the muddled zeitgeist of artificial intelligence. It seeks to bring together the various elements of art, research, and commerce, displaying interactive installations as well as projects applying AI in fields as diverse as agriculture and neuroscience. Rather than untangle these distinct areas, Barbican curator Anna Holsgrove has chosen to intermix them under sections titled the Dream of AI, Mind Machines, Data Worlds, and Endless Evolution.

I saw the show in the company of computational artist Memo Akten, who has been at the forefront of many micro-movements, learning new tools to study how they expand human creativity. At the Barbican, Akten presented the latest iteration of Learning to See (2017), an interactive installation in which machine-learning software analyzes a live feed from a camera pointed at a table covered with everyday objects. The software interprets this visual input based on data sets, sourced online, that contain tens of thousands of images—ocean views, fires, flowers, and star fields. Viewers are invited to rearrange the objects, and the software reinterprets these configurations depending on whether it is set to see flowers or stars, then projects its findings on a wall. A piece of fabric, sunglasses, and headphones can look like a blooming garden or a stretch of the cosmos.


In a recent interview, Akten noted that when people talk about AI they are usually referring to “big data-driven methods or systems.”1 So-called AI art is created by making aesthetic choices when sourcing and selecting data sets. In another iteration of his ongoing exploration of computer vision, Akten created the painterly images of Learning to Dream (2017) by using photos from a large repository of digitized art collections, amassed and archived for the Google Art Project. Perhaps unsurprisingly, generative artworks like Learning to Dream that engage with the history of painting have captivated more traditional realms of the art world such as auction houses, commercial galleries, and institutions like the Barbican.

“AI art” is a term that both refers to these new software tools and makes a burdensome promise about the emergence of nonhuman creativity. The flourishing AI art movement resonates with both the digital technology sector and the art market, attracting levels of patronage and investment that are unprecedented for computer-generated work. Earlier phases of software art, while overlooked by the market, often gained widespread popularity with online audiences and peer groups formed around open-source software. Artists who make work with, and for, the “digital commons”—the aggregate of resources shared online—have generally resisted notions of proprietorship and commodification as antithetical to the values of their communities. Can AI art’s reliance on the digital commons for vast data resources be squared with its framing as a way of producing original and sellable work? Are collective, collaborative practices being erased by the hype for AI art?


teamLab’s interactive digital installation What a Loving and Beautiful World, 2011, in the exhibition “AI: More Than Human,” 2019, at the Barbican Art Gallery, London.

A useful foil to AI art is another software-based art form known as bots. The bot-making community creates personified automata, while ignoring the goal of simulating human intelligence. The scene that is active today can be traced back to 2007, not long after Twitter was founded, and around the time the first Apple iPhone was released. This Web 2.0 era saw the creation of platforms for user-generated content—a development facilitated by increased interconnectivity and interactivity between web services. New application programming interfaces (APIs) enabled web-based software to communicate with a platform’s servers. Flickr was one of the first web companies to offer an API, and was soon followed by Facebook, Twitter, and others. By 2010, Twitter had more than seventy thousand registered third-party software applications.2 This interoperability gave rise to social media bots, most serving commercial purposes. The popular bot @horse_ebooks, for example, began its life as a Russian spam bot designed to promote digital publications. Its clumsy programming produced entertainingly garbled utterances, generated from ebook content. With a kitsch image of a majestic horse as its profile picture, the bot gathered a cult following that reached 200,000 followers at its peak, igniting an interest in whimsical and nonsensical presences that made Twitter more fun. It was later discovered that the artist and Buzzfeed strategist Jacob Bakkila had bought @horse_ebooks from its Russian developer and composed tweets in the voice of a spam bot.3 Other bot artists did not ventriloquize their creations, instead letting them run fully automated from servers or home computers.
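
The mechanics are simple enough to sketch. The Python below assembles the kind of request a bot sends to such an API; the endpoint, credential scheme, and payload shape here are illustrative assumptions rather than any particular platform’s documented interface.

```python
import json

API_ROOT = "https://api.example-platform.com"  # hypothetical endpoint

def build_status_update(text, token):
    """Assemble the HTTP request a third-party bot would send to a
    platform's API to post a status update on its own schedule."""
    return {
        "method": "POST",
        "url": f"{API_ROOT}/statuses/update.json",
        "headers": {
            "Authorization": f"Bearer {token}",  # per-app credential
            "Content-Type": "application/json",
        },
        "body": json.dumps({"status": text}),
    }
```

A bot is then just this call wrapped in a scheduler; a cron job on a home computer is enough.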

These artists crafted weird characters and the code to operate them, borrowing ideas from disparate fields such as computational poetry, interaction design, and procedural video games. One of the first to do so was Allison Parrish, who created @everyword (2007), a bot that tweeted every word in the English language at the rate of one word per hour until it completed the task on June 7, 2014. Ranjit Bhatnagar’s @pentametron was an endless poem composed by consecutive retweets that happened to be written in iambic pentameter and rhyme with each other. Everest Pipkin’s @tiny_star_field maps cosmic vistas in ASCII characters. Kate Compton’s @losttesla gives voice to a self-driving car. Darius Kazemi’s @twoheadlines mashes up news stories, while Nora Reed’s @infinite_scream lets out a constant stream of anguish, tweeting “AAAHHH” with varying numbers of A’s and H’s. Bot artists embrace the fabrication of characters because this is integral to the way bots are introduced to audiences on social media—through carefully chosen names, avatars, and descriptions.
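
The craft often lies more in the character than in the code. As a rough illustration (an assumption about the general approach, not Reed’s actual implementation), an @infinite_scream-style utterance takes only a few lines of Python:

```python
import random

def scream(rng=None):
    """Compose one anguished tweet: a run of A's followed by a run
    of H's, each of varying length, in the manner of @infinite_scream."""
    rng = rng or random.Random()
    return "A" * rng.randint(2, 12) + "H" * rng.randint(2, 12)
```

The persona does the rest of the work: the name, avatar, and bio frame these strings as an ongoing performance.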

Bots also appear on platforms other than Twitter. Julien Deswaef’s Word Wars (2015–19) was a YouTube account that presented the daily news in the style of the Star Wars opening crawl. Kazemi’s Random Shopper (2012), as the name suggests, surprised its maker with a monthly selection of books, music, and movies bought on Amazon. Bot art can be seen as aligned with art historical precedents like Dada and assemblage art that piece together references from mass culture, often with humor. But while readymades treat objects as signifiers, bots treat signifiers as objects.


View of Memo Akten’s installation Learning to See, 2017, in “AI: More Than Human” at the Barbican.

AI art and bots both rely on preexisting material. But the ways in which they use it are quite different. Curator and theorist Nicolas Bourriaud alludes to computer processes when he writes: “Artists today program forms more than they compose them: rather than transfigure a raw element (blank canvas, clay, etc.), they remix available forms and make use of data.”4 His statement helps unlock a major distinction between bot art and AI art. AI artists treat media as raw data, as a crude element to undergo transfiguration. They feed AI images so that the software can make something new. But bot artists remix source material, as Bourriaud says, putting readily available media in other contexts. Many of Kazemi’s bots take archival images and circulate them on social media. His @oldschoolflyers (2015–) shares announcements for hip-hop shows from the 1970s and ’80s, and @FBIbot (2015–) distributes documents released by the Federal Bureau of Investigation in response to Freedom of Information Act requests. Parrish’s @the_ephemerides (2015–19) paired archival images taken by space probes with computer-generated poems about exploration and voyages. Derek Arnold’s @FFD8FFDB (2014–17) sourced stills from networked security cameras, filtered them in grainy purple and green, and added cryptic messages such as “U X < Q U < E < E E J A I T A.” Arnold’s output was not only redistributed to the public via social media, but was freely shared by the artist under a “no copyright reserved” creative commons license known as CC0.


It could therefore be said that bot artists take a “regenerative” approach to the commons, using the term introduced by economist Kate Raworth to describe contemporary business practices that do not deplete the natural resources society depends on but rather ensure resources can be recycled, recuperated, and restored.5 Raworth also describes depletive practices—the extraction of natural resources to be industrially processed into nonrecyclable derivatives—as “degenerative.” Although the term mainly describes processes that irreversibly transform organic matter, it could also be usefully applied to new economic interests that consider big data as a crude material with untapped value. In magazines like Forbes and Fortune, big data has been described as the “new oil.”6 Google, Facebook, Microsoft, IBM, and other tech companies are using their access to massive amounts of information to drive new advances in AI, and conversely, their data-driven AI machinery is being commercially offered to other companies sitting on huge troves of data. This lucrative pursuit of new data has been labeled the “extraction imperative” by psychologist and technology critic Shoshana Zuboff.7 In a troubling twist on the metaphor, Microsoft has deals with Chevron, BP, Equinor, and ExxonMobil to analyze data from previous oil exploration in order to boost production.8


While bot artists tend to work independently, most AI artists have a relationship with the tech sector, supported by the cultural and research programs of companies like Google and Nvidia, a designer of graphics-processing units for gaming. These relations are partly founded on access to algorithms, data sets, and computers that can quickly perform a vast number of operations. In addition, some of the proponents of AI art began their artistic work while employed by these companies. Google engineer Alexander Mordvintsev created the DeepDream program, which generates psychedelic images through computer vision software that identifies patterns in pictures, and then iteratively transforms these patterns so that their tics become ever more pronounced. Robbie Barrat pioneered the use of generative adversarial networks (GANs) to make artwork, by selectively training his software on thousands of landscape paintings and portraits—sometimes in combination—that he scraped from the website WikiArt. In an interview, Barrat partly credits his early success to his access to the supercomputers at Nvidia, where he worked as a programmer. The company, he says, provided him with “absolutely insane GPU cluster supercomputers” that he ran for two weeks to develop his work.9 Artist and engineer Mike Tyka cofounded the Artists and Machine Intelligence program at Google, which has supported many key artists exploring AI art, including Refik Anadol and Kyle MacDonald. This program runs in concert with Google’s Arts & Culture Experiments, which supports a wide range of projects engaging with cultural archives, such as Mario Klingemann and Simon Doury’s X Degrees of Separation (2016), which asks users to select two artworks and provides a series of other works that suggest a continuum between them.
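
Mordvintsev’s feedback loop can be sketched in miniature. The toy below is not DeepDream itself, which ascends the gradient of a convolutional network’s layer activations; it assumes a single fixed “pattern detector” in its place, but shows the same mechanism of iteratively transforming an image so a faint match grows ever more pronounced:

```python
import numpy as np

def amplify(image, pattern, steps=20, rate=0.05):
    """Toy DeepDream-style loop: the 'activation' is the image's
    correlation with a fixed pattern; each step nudges the image
    along the gradient of that score, exaggerating the pattern."""
    img = image.copy()
    for _ in range(steps):
        grad = pattern  # d/d(img) of sum(img * pattern)
        img += rate * grad / (np.abs(grad).max() + 1e-8)
        img = np.clip(img, 0.0, 1.0)  # keep values in display range
    return img
```

In the real program the detector is a trained network layer, so whatever features it weakly recognizes in the input (eyes, spirals, dog faces) are the ones the loop exaggerates.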

These corporate associations are not necessarily nefarious. Perhaps Google and other large tech firms, rather than supporting AI art in a conspiratorial ploy to distract the public from some of the more contentious and degenerative aspects of commercial AI, are simply responding to artists who emerge from, or directly rely on, their research programs. This is not unprecedented in computer art, as many of the field’s pioneers had privileged access to supercomputers through their employment. Google, at least, is trying to convey a clear message: with some creativity, the tools of AI can be applied everywhere.

Robbie Barrat: Figure Studies, 2019, digital image made with neural networks.

As AI art practice continues to evolve, it is encouraging to see the resilience of the digital commons and the online communities that maintain it. Generous groups are developing new tools such as ml5.js to make machine learning and other AI methods accessible to a broad audience of artists, creative coders, and students. Initiatives like these can potentially make AI art socially beneficial, especially when combined with ideas and techniques from bot-making, video games, computational poetry, augmented reality, and other practices. This past July the School of Machines, Making & Make-Believe in Berlin hosted a Bots and Machine Learning workshop, where I taught alongside programmer Yining Shi. It resulted in hybrid projects such as Daria Sazanovich’s AR Body Filters, which uses ml5.js to make an in-browser app that detects the viewer’s face or body and then augments it with rendered tears and piercings.10

Data-driven methods will likely continue to flourish in these inclusive communities, without being hyped as impressive advancements in technology. In 1971 pioneering software artist Frieder Nake declared: “The big machinery, still surrounded by mystic clouds, is used to frighten artists and to convince the public that its products are good and beautiful.” Nake’s conclusion is as polemical now as it was then: “Computers ought not to be used for the creation of another art fashion.”11

Computers should of course be used for art, as they have been used in the decades since Nake’s remark. However, the framing of these practices needs improvement. Maybe we should forgo seeing art as individually created “original” works so that we might find new ways to understand and celebrate computational art. It may be best seen as a phenomenon that emerges from the digital commons, with its complex relational interweaving of practitioners, communities, technologies, media, and data.


1 Renée Zachariou, “Machine Learning Art: An Interview with Memo Akten,” Artnome, Dec. 16, 2018.
2 “Twitter registers 1,500 per cent growth in users,” New Statesman, Mar. 4, 2010.
3 See Susan Orlean, “Man and Machine,” New Yorker, Feb. 10, 2014, pp. 33–39.
4 Nicolas Bourriaud, Postproduction: Culture as Screenplay: How Art Reprograms the World, trans. Jeanine Herman, New York, Lukas & Sternberg, 2002, p. 17.
5 See Kate Raworth, Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist, White River Junction, Vt., Chelsea Green Publishing, 2017.
6 Examples of headlines using this metaphor include “Is Data the New Oil?” in Forbes, “Data Is the New Oil of the Digital Economy” in Wired, and “Data Is the New Oil” in Fortune.
7 See Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, London, Profile Books, 2019.
8 Zero Cool, “Oil is the New Data,” Logic, Issue 9.
9 Robbie Barrat quoted in Jason Bailey, “AI Art Just Got Awesome,” Artnome, Apr. 5, 2018.
10 Daria Sazanovich’s work can be interacted with in-browser at
11 Frieder Nake, “There Should Be No Computer Art,” Bulletin of the Computer Arts Society, 1971, pp. 18–19.


This article appears under the title “Bots vs. AI” in the January 2020 issue, pp. 58–63.

By Matthew Plummer-Fernandez
