Trevor Paglen recently held a video chat with Sun-ha Hong, assistant professor of communications at Simon Fraser University in Canada, to discuss their respective concerns about high-tech surveillance. Paglen’s work includes “Limit Telephotography” (2007–2012), a series of images depicting highly sensitive infrastructure, like National Security Agency buildings and the deep-sea fiber-optic cables that transmit high-speed data, which are often hidden in remote locations and inaccessible to the public. Hong’s research covers topics such as self-tracking devices and what “knowledge” means in the age of big data. Paglen’s current exhibition “From ‘Apple’ to ‘Anomaly,’” at the Barbican Art Gallery, London, examines a set of images used to teach artificial intelligence systems how to see and categorize, prompting reflection on the aesthetics of machine vision. Below, the two talk about the shift of surveillance from state monitoring to corporate data-mining.
TREVOR PAGLEN Let’s start with an easy question—what do we think about the word “surveillance”?
SUN-HA HONG That word encompasses so many things now! Surveillance used to be about prisons and the police—its French origins literally mean “looking down from above.” It was about state power. Now, predictive technologies are sold out of Silicon Valley, producing a confluence of public and private, commercial and noncommercial uses. I’m curious what you think about this, Trevor, since you’ve photographed sites of state surveillance like the NSA headquarters.
PAGLEN I agree that historically, when we talk about surveillance, we’d associate it with the state. But now, it’s difficult to distinguish between the state and capitalism, at least in the U.S. We’re living in a world of mass data collection, and that data is used for all sorts of things: it sometimes ends up serving state powers, but often serves capitalism.
That’s shifted my thinking. I worked as a cinematographer on Citizenfour [2014], Laura Poitras’s documentary about Edward Snowden and the NSA. But I started realizing there’s this way bigger power: Google! Most people are much more affected by Google or Facebook in their everyday lives than by the NSA.
HONG Many people responded to the information about the NSA that Edward Snowden gave us by thinking “I haven’t done anything wrong, so I have no reason to be afraid.” This may be true in a narrow sense, but it also reflects a particular imagination of what surveillance is: it’s something the government does, it’s Big Brother, it’s what happens to you if you’re a criminal. That picture is often not accurate.
I’ve been researching self-tracking technologies: millions of people are using Fitbit, a device that allows you to know yourself better by tracking your steps, heart rate, and quality of sleep, so that you can optimize yourself and make yourself happier—who doesn’t want that? But Fitbit started partnering with insurance companies like John Hancock. If you want to be insured by them, you’ve got to have a Fitbit and share your data. Currently, it’s illegal for them to use that data directly to recalculate your premium, but that’s the horizon of potential use. Fitbit data is also being used as evidence in court cases.
PAGLEN The binary that’s often set up between surveillance and privacy is really unproductive. All kinds of spaces have been opened up to capital and to policing in ways that weren’t really possible thirty years ago, when it wasn’t efficient to get access to data at such a huge scale. One consequence is obviously a loss of privacy, but that binary articulates the problem at the scale of the individual: privacy involves a kind of bourgeois concept of the subject. Instead of thinking about privacy, I think about anonymity as a public resource.
HONG Only about fifteen years ago, we lived in a world where the internet was primarily an anonymous space. It seems like society never really made a conscious decision to step away from that. We never had a big debate about whether anonymity is bad. But we’ve transitioned toward a social media platform economy where a massive amount of our online life is no longer anonymous. We used to worry that anonymity made the internet toxic, that it helped people get away with being a misogynist, a Nazi, or a plain old jerk. Turns out, that’s not the case: getting rid of anonymity hasn’t necessarily fixed the problem.
Instead, as you’ve pointed out in your work, we have a much more machine-readable ecosystem. [AI pioneer] Joseph Weizenbaum pointed out that we try to make computers more like humans to serve humans better, but often we end up making people more like computers instead—or at least more friendly or readable to computers. We’re encouraged to behave in this machine-readable way not only online, but also in public space.
One of Snowden’s points that really struck me was that he wanted the American people to have an informed debate and choose whether this was the future they wanted. The difficulty has been, can we have a debate about these things? Can we ever be properly informed about these things? Or do we just buy products and go along with the flow?
PAGLEN I’ve been thinking about the rhetoric of exactly that: the machine-readability of our everyday lives. What are the structures of meaning that technical systems generate? This has two aspects. One is formal: literally, what does a facial recognition camera see when it’s looking at you? The second is where the semiotics of readability intersect with politics. Nobody cares whether or not computer vision systems can distinguish between an apple and an orange; what they care about is making money.
Some of my work, like that in “From ‘Apple’ to ‘Anomaly,’” asks what vision algorithms see and how they abstract images. It’s an installation of about 30,000 images taken from a widely used dataset of training images called ImageNet. Labeling images is a slippery slope: there are 20,000 categories in ImageNet, 2,000 of which are of people. There’s crazy shit in there! There are “jezebel” and “criminal” categories, which are assigned based solely on how people look; there are plenty of racist and misogynistic tags.
If you just want to train a neural network to distinguish between apples and oranges, you feed it a giant collection of example images. Creating a taxonomy and defining the set in a way that’s intelligible to the system is often political. Apples and oranges aren’t particularly controversial, though reducing images to tags is already horrifying enough to someone like an artist: I’m thinking of René Magritte’s Ceci n’est pas une pomme (This Is Not an Apple) [1964]. Gender is even more loaded. Companies are creating gender detection algorithms. Microsoft, among others, has decided that gender is binary—man and woman. This is a serious decision that has huge political implications, just like the Trump administration’s attempt to erase nonbinary people.
HONG We don’t really go through due process in determining the political implications of these technological decisions. Recently, a survey run by MIT researchers asked people: if a self-driving car had to run over something, can you rank who it should run over? The car needs to know: do you hit the old man, the criminal, the female executive, the homeless person, the stroller? It will likely be robots deciding who lives and who dies, but behind that is human judgment: the researchers compared answers across demographics like country, education, and gender. How can you possibly have a good process for determining these things?
The decision-makers—the coders, the computer scientists who are working for corporations—don’t always conceptualize these choices as political or ethical. Sometimes, they’ve just got to do their jobs. Of course, there have been egregious consequences: [UCLA scholar] Safiya Umoja Noble writes about the famous instance when Google Photos started labeling black people as gorillas. The standard excuse is that the algorithm doesn’t even know what racism is. That’s technically true, but that’s part of the problem: the algorithm should know what racism is, and it should be trained to respect human meanings and human history.
For your series “Limit Telephotography,” you photographed state surveillance facilities like Fort Meade and the Utah Data Center. Some of these sites look utterly boring, like 1970s shopping malls.
PAGLEN Data ethicists Kate Crawford and Vladan Joler made a work called Anatomy of an AI System (2018) that outlines the environmental footprint of neural network training. This sort of expanded sense of what a technical system actually is has been so important for me. Mountains have to be torn down to get the rare earth minerals that go into the circuit board!
HONG This comes back to what we said initially about surveillance: it’s much more than just Big Brother watching you. By the same token, the ethical problems go beyond asking Silicon Valley coders to read more ethics. I would like them to! But that’s not going to solve all the problems: there are both geological and geopolitical implications.
I regularly assign your essay “Invisible Images (Your Pictures Are Looking at You)” [published in the New Inquiry, 2016] to my students. You argue that we live in a world now where a majority of images are created by machines for machines: we don’t encounter them, because they’re not for us. There’s a kind of indifference to human perception and meaning-making. We’ve already talked about how algorithms are indifferent to human meaning, but this goes a step further to say that what humans think about these images is losing relevance. That’s not where the money is, that’s not where the efficiency is. Reading this, I thought, well, images are now catching up with what we’ve already done with text. Google Smart Reply is encoding our emails so that we don’t have to write them anymore—let’s face it, that makes all of us happier! But at the same time, it means we’re no longer communicating in this human way. Instead, we’re being conditioned to communicate more like machines, using shortcuts to say, “hello, that’s really interesting, goodbye.”
The other elephant in the room is deepfakes: audiovisual media that use AI to fabricate footage of someone saying or doing something they never did. What’s on your mind regarding deepfakes?
PAGLEN I’m not as worried about them as everybody else seems to be. We’ve been able to do that kind of thing since the beginning of photography, and it’s just not at the top of my list of concerns. A more interesting question to me is, what are the aesthetics of AI and neural networks? What ideological work do the superhuman aesthetics of AI do? Do they ask us to have faith in technical systems because those systems can perform feats that we’re not able to?
HONG Absolutely. It’s a question of aesthetics, and of affect. After the Snowden affair, people and organizations like the ACLU asked, can we creep people out about surveillance? Because that’s how you get people to care and think about it. Silicon Valley’s motive is to make surveillance cool: the latest gadget.
Right now I’m researching “ubiquitous computing,” an early 1990s buzzword. It was the idea that we would have computers everywhere serving our needs. Computers would no longer frustrate us the way early ’90s computers frustrated everybody. They would be cool; they would make us superhuman. This vision of the future is often mainly aesthetic rather than structural: there are still the same gender relations, the same workplace relations. You drink the same coffee and have the same job, but machines help you do it faster and in a cool way. The master text of this is “The Jetsons,” a futuristic cartoon with hovercars in which everything else is exactly the same as in “The Flintstones”: the dad is still reading a newspaper, whether it’s an engraved stone or a hologram. The mom still cooks for the family, just with futuristic tools. There’s a lot of stasis in our imagination of the future. I’m always wondering, how can art and how can scholarship help us break through that stasis?
PAGLEN When things like specific concepts of gender are encoded into technical systems, they become fixed and work to define the future of those relations. So your question about breaking through that stasis, about how we might create forms of self-determination in a society where these systems are so ubiquitous, is really urgent.
HONG Yes, these systems ultimately are about saying “this is what counts as truth.” So you need to disbelieve your own ideas and your own experiences and latch onto the system instead, which is what makes that term “self-determination” so key.
—Moderated by Emily Watlington
This article appears under the title “Machine-Readable Images” in the October 2019 issue, pp. 22–24.