Trevor Paglen’s efforts to document the institutions and apparatuses of our surveillance society have taken him from the deserts of Nevada (where he photographed secret government aircraft bases) to the bottom of the ocean (where he trained his camera on internet cables tapped by the National Security Agency). Offering glimpses into a new form of dark geography, his latest work mines the images generated by artificial intelligences designed to monitor our everyday lives.
Recent advances in technology have enabled AI not only to pick a face out of a crowd but to read the subtlest physiognomic expressions with unsettling accuracy. In Machine Readable Hito (2017), artist-author Hito Steyerl, whose own work has investigated the intersections of politics, vision, and technology, is portrayed in a grid of 360 small photographic portraits. Beneath each image is the output of AI readings of her age, her gender, and the range of emotions detectable on her face. Moving from photo to photo, one constantly measures one’s own impressions against those of the AI, cognizant of being the more advanced beholder, but also of the likelihood that this won’t be the case for much longer.
Adjacent to this work was another portrait, this one a large-scale depiction of Martinique-born philosopher and radical Frantz Fanon (1925-1961). The image is what’s known as a “face print,” which is produced by averaging numerous photographs of a person’s face so that its defining features can be distinguished from those of others. In his book Black Skin, White Masks (1952), Fanon discusses the agony of being captured within a racist ideological matrix that alienates him from his own physical being. Paglen’s portrait, which reduces its subject to a visual schema, suggests an analogous form of capture. The work is a reminder that surveillance technologies are used to track certain bodies in disproportion to others.
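The averaging behind a “face print” can be sketched in a few lines. This is a toy illustration, not Paglen’s actual process: the portraits here are tiny synthetic grayscale grids, and all dimensions, names, and noise levels are invented for the example. The point is simply that pixel-by-pixel averaging lets stable features persist while incidental variation washes out.

```python
import random

random.seed(1)

WIDTH, HEIGHT = 4, 4  # illustrative image size, not a real portrait resolution

def synthetic_portrait(base, jitter=0.1):
    """One 'photograph': the base face plus per-pixel noise, clamped to [0, 1]."""
    return [[min(1.0, max(0.0, px + random.uniform(-jitter, jitter)))
             for px in row] for row in base]

# A stand-in "face": a smooth gradient of grayscale values.
base_face = [[(x + y) / (WIDTH + HEIGHT - 2) for x in range(WIDTH)]
             for y in range(HEIGHT)]

# Many noisy photographs of the "same" face.
portraits = [synthetic_portrait(base_face) for _ in range(50)]

# The "face print": the pixel-by-pixel average across all portraits.
face_print = [[sum(p[y][x] for p in portraits) / len(portraits)
               for x in range(WIDTH)] for y in range(HEIGHT)]
```

With enough photographs, each averaged pixel lands close to the underlying face value, which is why the composite foregrounds a person’s defining features.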
At the center of the gallery, a large projection screen flickered with images used to train AI to recognize persons, objects, and gestures. Appearing at intervals throughout the video are gridded, pixelated visualizations of the various images broken down into constituent parts. Viewers here are “seeing what the AI is seeing,” according to Paglen’s artist’s notes. One of the more unsettling aspects of the work, ironically titled Behold These Glorious Times!, is the coldness with which AI experiences our reality. Far from “glorious,” this is a grayscale world devoid of affect, in which gestures of intimacy are reduced to mere data.
The most surprising works in the show were the dark and painterly “Adversarially Evolved Hallucinations” in the rear gallery. These works were produced using two forms of AI: what Paglen calls a “Discriminator,” which is taught to discern the items depicted in a particular “training set” of images, and a “Generator,” programmed to produce ambiguous, semiabstract pictures. Each image on view is the result of numerous exchanges in which the Generator fed the Discriminator images until the Discriminator mistakenly identified one as showing something it had been trained to recognize, such as a comet or a rainbow, both of which belong to the training set “omens and portents.” For one of the works, Vampire (Corpus: Monsters of Capitalism), 2017, the training set was limited to “monsters that have historically been used as allegories for capitalism.” The image is of a haunting, masklike form.
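The exchange described above can be sketched as a loop. This is a deliberately simplified stand-in for Paglen’s system (which in practice uses neural networks): here the “Generator” emits random pixel vectors and the “Discriminator” applies an invented scoring rule, with all names, labels, and thresholds chosen purely for illustration.

```python
import random

random.seed(0)

LABELS = ["comet", "rainbow"]  # stand-ins for the "omens and portents" corpus

def generator():
    """The 'Generator': produce an ambiguous image, here 8 pixel intensities."""
    return [random.random() for _ in range(8)]

def discriminator(image):
    """The 'Discriminator': return (label, confidence) for the best match.

    An invented scoring rule: overall brightness -> 'rainbow', darkness -> 'comet'.
    """
    brightness = sum(image) / len(image)
    if brightness > 0.5:
        return "rainbow", brightness
    return "comet", 1.0 - brightness

def evolve_hallucination(threshold=0.6, max_rounds=10_000):
    """Feed images to the Discriminator until it confidently 'sees' a corpus item."""
    for round_no in range(1, max_rounds + 1):
        image = generator()
        label, confidence = discriminator(image)
        if confidence >= threshold:  # the misidentification the works exploit
            return image, label, round_no
    raise RuntimeError("no hallucination evolved within the round limit")

image, label, rounds = evolve_hallucination()
print(f"after {rounds} round(s) the Discriminator saw a {label!r}")
```

The design mirrors the review’s description: the image kept on view is not what the Generator “meant” but the first picture the Discriminator misread as a member of its training set.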
In “Right in Our Face,” a prescient 2011 essay on class-driven racism and the collaboration of global elites and the professional class in the deterioration of the boundary between democracy and fascism, Steyerl follows Giorgio Agamben in describing “the contemporary” as a figure who gazes unceasingly on the darkness of their time. Paglen continues to be such a figure. His works give form to a contemporary darkness that often goes unseen, though it’s right in our face.