Last October, at a prints and multiples sale at Christie’s, New York, Edmond de Belamy, from La Famille de Belamy (2018) became the first algorithm-generated artwork to be sold at a major auction house. The work was produced with a Generative Adversarial Network (GAN), a machine-learning technique that pits two competing systems against each other: one generates candidate images while the other judges them against a body of existing images supplied as training data, until the generator’s output approximates the visual features of that training set. The result hazily depicts a fictional man in the manner of eighteenth-century European portraiture. Christie’s lot listing gave the medium as “generative Adversarial Network print, on canvas” and attributed the work to a trio of French artificial intelligence researchers working under the name Obvious Art. The GAN that produced the end result, however, was fine-tuned by Robbie Barrat, a nineteen-year-old tinkerer from West Virginia. He had advanced the GAN’s artistic capabilities out of curiosity and made the technology open source earlier that year, leaving it available for Obvious Art to use. But when Edmond de Belamy sold for an estimate-smashing $432,500, Barrat’s name was nowhere to be found.
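The adversarial setup described above can be sketched in a few dozen lines. The following is a minimal illustration, not Barrat’s or Obvious Art’s actual code: a linear “generator” learns to imitate a one-dimensional toy data distribution while a logistic “discriminator” learns to tell its samples from real ones. All variable names, the toy target distribution, and the learning rates are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_real(n):
    # Toy "data distribution" the generator should learn to imitate.
    return rng.normal(4.0, 1.25, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps noise z to samples, G(z) = wg*z + bg.
wg, bg = 1.0, 0.0
# Discriminator: D(x) = sigmoid(wd*x + bd), its estimate that x is "real".
wd, bd = 0.1, 0.0

lr, steps, batch = 0.05, 2000, 64
for _ in range(steps):
    # --- discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    x_real = sample_real(batch)
    z = rng.normal(size=batch)
    x_fake = wg * z + bg
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    # hand-derived gradients of the binary cross-entropy loss
    wd -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    bd -= lr * np.mean(-(1 - d_real) + d_fake)
    # --- generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=batch)
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    grad_out = -(1 - d_fake) * wd  # chain rule through D into G
    wg -= lr * np.mean(grad_out * z)
    bg -= lr * np.mean(grad_out)

z = rng.normal(size=1000)
fake = wg * z + bg
print(f"generated mean={fake.mean():.2f}, std={fake.std():.2f}")
```

Image-producing GANs replace the two linear functions with deep convolutional networks and the 1-D samples with pixel arrays, but the adversarial training loop is the same in outline.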
On June 25 Christie’s hosted an Art + Tech Summit on the “AI Revolution,” aiming to build on the success of last year’s sale and to confront the questions of authorship and originality that loomed around it. The auction house assembled twenty-five thought leaders from tech companies and art institutions for a full day of panels and talks addressing questions facing the field. How are the tools grouped under the umbrella of artificial intelligence—predictive analytics, data collection, and neural networks—affecting the art market? How do they shape the production and consumption of art?
The development of GANs can be traced back to Google engineers, who have since authored many of the AI breakthroughs that appear, to some, to approach creativity. Google’s Michael Tyka spoke about using GANs to produce images, arguing that AI, while sophisticated in its ability to produce visual elements via algorithm, is still just a tool. Tyka invoked Jackson Pollock’s process of letting paint drip onto his canvas: GANs are similarly “systems that you don’t fully control.” But he also claimed that machine-generated art is “creative” insofar as it generates something novel of value. This loose definition might suffice on the question of novelty; who would say that the psychedelic imagery produced by Google’s Deep Dream program isn’t novel? “Value,” however, is harder to pin down.
Tyka is affiliated with Artists and Machine Intelligence (AMI), a program at Google that brings together artists and engineers to explore machine-learning projects. AMI treats humans as an essential part of the creative process, even when it involves the most complex forms of automation. The notion of authorship has legal implications, too. How do copyright laws govern a GAN-made work? Ed Klaris, an intellectual property attorney, weighed in, naming people who might be considered authors, from the developers of the original software algorithm to the artists responsible for the images used in the training data. Few legal precedents currently exist, but the issue surely matters to Christie’s. Auction houses need to determine authorship in order to justify pricing.