Resistance as Redistributing Agency

Tega Brain

Over the past several years I’ve made art to examine how data-driven computational techniques like machine learning are shaping and reshaping ecological thought. Ecological thought is how we perceive and reckon with our environmental entanglements—as philosopher Timothy Morton puts it, “all the ways we imagine how we live together”—which, in turn, is also influenced by our technologies and what we imagine they can do. There is a tight coupling between ecological and technological thinking, something that is apparent in the history of ecology, a field that emerged in dialogue with the cybernetics and systems research of the mid-twentieth century. Today, the environmental sciences are profoundly shaped by computational approaches and practices like sensing, modelling and prediction; scientific findings are shared, interpreted and discussed through AI-driven media platforms and information infrastructures, like search engines and large language models. In these ways, public environmental imaginaries are now deeply influenced by algorithmically driven media platforms and machine learning systems. Layers of hardware and software increasingly influence how existential issues like climate breakdown are understood, imagined and acted upon; they are therefore worthy of deep, multidisciplinary inquiry.

I make art to examine and provoke a critical consideration of the weird entwining of technological and ecological thinking. I use an approach to practice that brings together art and engineering, in what I call eccentric engineering. Eccentric engineering uses art as a method to probe how infrastructures reconfigure human and environmental capacities and envisage alternatives to the human-centred and extractive systems of the present.

Eccentric engineering uses art as a method to probe how infrastructures reconfigure human and environmental capacities and envisage alternatives to the human-centred and extractive systems of the present.

A glimpse of this can be seen in Deep Swamp, an installation made up of a triptych of semi-inundated environments that gather together wetland life forms and artificially intelligent software agents. Three AI agents, Harrison, Hans and Nicholas, use an off-the-shelf machine vision system to evaluate and manage small wetland environments that also inhabit the gallery. Each agent can change the conditions of its environment in an attempt to optimise it for a particular goal, which is set by what each system considers a wetland to be. This understanding is produced via software that is built with a machine learning tool called TensorFlow, which can be trained on custom datasets of images. The images I collected as training data are labelled and classified as different types of environments—for example, rainforests, savannas, deserts, wetlands, landscape paintings and crowds—and were collected from online image platforms like Flickr, the Google Art Project, and museums with open APIs allowing access to their collections.

Each agent then attempts to optimise its wetland environment so it fits in its designated class of this training data. Harrison aims to create an environment that looks like the photos in the “wetlands” class; Hans aims for his wetland to fit into a class of images of Western landscape paintings; Nicholas’s goal is for its wetland to be identified as fitting into a class of images containing people and crowds, seeking the presence of an audience. Every few minutes, each agent takes a photo of its environment, assesses how closely it matches the assigned class, and then adjusts the conditions in its wetland accordingly.
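The cycle each agent runs can be sketched roughly as follows. This is a schematic reconstruction, not the installation’s actual code: the classifier here is a stub standing in for the trained TensorFlow model, and every function name and parameter is hypothetical.

```python
# Schematic sketch of a Deep Swamp agent's perceive-classify-adjust loop.
# The installation photographs its tank and scores the image with a trained
# model; here a stub classifier scores a toy "environment" dict instead.

def classify(photo, target_class):
    # Stub for the image classifier: returns a confidence in [0, 1]
    # that the photo belongs to the agent's assigned class.
    return photo.get(target_class, 0.0)

def take_photo(environment):
    # Stand-in for capturing an image of the tank.
    return environment

def adjust(environment, target_class, step=0.05):
    # Nudge the tank's conditions (light, water level, nutrients)
    # in the direction of the assigned class.
    environment[target_class] = min(1.0, environment.get(target_class, 0.0) + step)
    return environment

def control_loop(environment, target_class, threshold=0.9, max_steps=100):
    """Every few minutes: photograph, score against the class, adjust."""
    for _ in range(max_steps):
        confidence = classify(take_photo(environment), target_class)
        if confidence >= threshold:
            break
        environment = adjust(environment, target_class)
    return environment
```

Harrison’s goal would correspond, in this toy framing, to something like `control_loop(tank, "wetlands")`; the point of the sketch is only the shape of the loop: perceive, score, actuate, repeat.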

Exhibition view of Deep Swamp, Ti con Zero, Palazzo delle Esposizioni, Rome, 2021. Photo by M3 Studio.

Deep Swamp’s imperfect machine perception stands as a reminder that despite AI’s promise—that massive datasets will provide us with a clearer way of seeing the world—all perspectives on our ecology are partial and incomplete, even those produced from huge swathes of data. As Donna Haraway might put it, the all-seeing god’s eye view of the world will always be a persistent fantasy.

Deep Swamp also makes the argument that there is no such thing as neutral environmental engineering or management, and that these practices are subject to a thick entanglement of social and cultural forces. Take, for example, something as commonplace as the climate goal of staying under 2 degrees of warming—which persists despite many nations, like those of Africa and the Pacific Islands, observing that they will face certain disaster even with this level of warming.

There is no such thing as neutral environmental engineering or management, and these practices are subject to a thick entanglement of social and cultural forces.

The social and cultural forces driving environmental management are also increasingly influenced by digital networks and algorithmic or AI-driven media platforms, which is the focus of Synthetic Messenger, a collaborative work I made with Sam Lavigne in 2021. Synthetic Messenger is an online intervention that questions how the business model of online media platforms influences how the climate crisis is understood. Initially launched online during the pandemic, the work consists of a group of software bots that crawl the internet looking for news on the climate crisis. When they find an article on this topic, the bots then click on all the ads that run alongside the article in an attempt to inflate both the visibility and value of the story. With each fraudulent click, a tiny payment is triggered between advertiser and news outlet, increasing the revenue earned by the articles as well as their circulation around the internet.

Rather than deploying an AI system as I did in Deep Swamp, Synthetic Messenger instead carries out a guerrilla-style intervention within existing data-driven systems to reveal their logics and probe how these systems might be leveraged for agendas like climate action. As the bots simulate human engagement with climate news coverage, they amplify its visibility and increase its monetary value. The work thereby renders the climate crisis as a crisis of communications characterised by a post-truth politics, where narratives about what is happening to our climate have fractured and opinions on what to do about it are deeply polarised. And in provoking questions about whether media manipulations could or should count as a kind of climate engineering, the work reinforces the argument made by geographer Holly Jean Buck: that reform of our media platforms is urgently required so that it's possible to cultivate new, shared imaginaries for climate change response.

Screenshot of the Synthetic Messenger Zoom call recorded during the STRP Festival, June 2021.

Both Deep Swamp and Synthetic Messenger experiment with AI as a specific lens through which the world is increasingly viewed. In Deep Swamp, each software agent crudely understands its environment from a statistical model built with neural nets trained on custom datasets of images taken from online repositories like Flickr and Google Art. In Synthetic Messenger, AI-driven modes of digital advertising are manipulated to prompt thinking about the role of these systems in shaping public narratives about the climate crisis.

However, both works go beyond just showing the qualities and characteristics of AI-driven perspectives by also acting through them. They do not simply attend to AI as a lens, but also examine how AI reconfigures agency, or the capacity to act in different ways, and they do so by putting these technologies to work in public. The automated agents in each work (the environmental managers of Deep Swamp and the bots of Synthetic Messenger) make decisions from their partial machinic perspectives and reshape the world with mixed results. The effects of the agents of Deep Swamp are visible to the audience: in some instances they have killed off their wetlands, but in others they have created the conditions for lush, vibrant new growth. By contrast, the effects of Synthetic Messenger are not so clear. Although the bots act through what has become a key vector of online agency—the act of clicking—and their clicks do amplify this news content, the workings of digital advertising systems remain relatively opaque. Despite our testing of the system to ensure the bots avoid detection, it’s impossible to clearly grasp the effect of each click.

The opacity of digital advertising is something that author Tim Hwang has written about in his book Subprime Attention Crisis, where he likens the investment and confidence in un-auditable advertising systems to the subprime mortgage crisis of 2008—which was also catalysed by investment in risky and completely opaque assets. Hwang points out that these digital ad platforms are completely inscrutable and un-auditable, yet they hold up the enormous valuations of companies like Google and Facebook. In this way, ad metrics are attributed great power despite being unreliable and relatively unregulated. Although Synthetic Messenger makes a spectacle out of the bots’ activities, it is limited in what it can reveal about the opaque platforms it is engaged with. We know that its actions would likely have an effect, but it is impossible to quantify or see this outside of a testing environment.

Screenshot of the Synthetic Messenger dashboard.

Media manipulation strategies have, until now, tended to be monopolised by fossil fuel proponents with the goal of cultivating denial, or encouraging predatory delay and inaction. There are many examples of this, including narrative strategies like BP’s carbon footprint calculator, which frames climate response as an individual rather than an industry responsibility, as well as manipulations of platform politics, such as the use of bots and trolls to amplify misinformation and doubt around the climate crisis. However, Synthetic Messenger shows that these techniques are also open to co-option for other agendas. The work shows one way to leverage the agency of these systems. My goal is not to pursue guerrilla actions as an end in themselves, but rather to use them as a way of building the argument that the media is an essential ingredient of climate reform.

The Covid-19 pandemic showed that these platforms were willing to use their powers to promote health information and vaccine uptake, so why aren’t they using the same tactics to address the climate crisis?

In fact, these efforts need not be guerrilla at all. Social media platforms like Facebook and Twitter claim to be able to enact significant agency over user behaviour, having built what Shoshana Zuboff calls “a new global architecture of behavioural modification”. The Covid-19 pandemic showed that these platforms were willing to use their powers to promote health information and vaccine uptake, so why aren’t they using the same tactics to address the climate emergency?

Both Deep Swamp and Synthetic Messenger play with automated data-driven systems by deploying the technologies they seek to critique, but do they do enough to reconfigure their logic or actually distribute agency in ways that challenge the status quo? Deep Swamp’s wetlands suffer from the highly constrained perspective and resulting ineptitude of a fairly crude AI system, and neither the plants nor the audience are provided a way to intervene or push back. Synthetic Messenger subverts automated media platforms to highlight the influence and agency of a digital advertising business model. However, when doing any kind of work on media platforms, the house always wins. Even when trying to subvert these technologies for digital and climate literacies, this artwork (like all bots) strengthens the advertising platforms it operates on by inflating engagement metrics and ad revenue. Artists and academics are all caught in this trap. Platforms give us reach, connection and power, but our participation is always on their terms. Our success, in the form of public engagement with our work—even when it is critical of these very systems—also means success for the platform in the form of revenue, expansion and continued capitalist extraction.

What other strategies are available to artists seeking to push back against the asymmetrical power relations of existing data-driven systems and platforms? The work of artist Lauren Lee McCarthy and experimental designer Alexandra Daisy Ginsberg suggests a generative approach. In her work LAUREN (2017), McCarthy puts herself in the place of an AI system in the context of a smart home. As she monitors the home through cameras and makes decisions about how it should be operated, she suggests another way these automated systems could be designed, where human rather than machinic forms of intelligence are centred. Ginsberg employs a similar strategy in her work The Wilding of Mars (2019)—where she proposes that Mars be colonised not by humans but by plants, in a long-term evolutionary experiment that is not subject to human steering. Ginsberg imagines a form of space research that centres plant life and opens a space for examining non-human-centred design where, importantly, the non-humans involved are other life forms rather than computational agents. In my own work Solar Protocol (2021-2022), I similarly experiment with using the environment to automate the operation of a network of solar-powered servers in place of an AI agent. Made in collaboration with Alex Nathanson, Benedetta Piantella and volunteers around the world, the Solar Protocol network collectively hosts a website, which is then served from whichever server is in the most sun at the time.

In Solar Protocol, each server hosts a copy of the web platform and shares solar data with the network. Web traffic is sent to the server that is receiving the most sunshine and generating the most energy at the time.
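This routing rule reduces to taking a maximum over the solar telemetry the servers share with one another. A minimal sketch, assuming each server reports a current generation figure (the field name `solar_watts` and the server names are invented for illustration, not drawn from the project’s actual API):

```python
# Simplified sketch of Solar Protocol's routing logic: direct web
# traffic to whichever server in the network is generating the most
# solar energy at the moment of the request.

def pick_server(servers):
    """Return the server currently reporting the highest generation."""
    return max(servers, key=lambda s: s["solar_watts"])

# A toy network of three volunteer-run servers and their reported telemetry.
network = [
    {"name": "sydney",   "solar_watts": 12.0},
    {"name": "new-york", "solar_watts": 48.5},
    {"name": "nairobi",  "solar_watts": 31.2},
]
```

The design choice is the point: the decision-maker is not a predictive model but the weather itself, reported as a single number per server.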

These three works hint at technological programs with radically different politics from what we see in much of the technology deployed today. As forms of eccentric engineering, they suggest a way of practising where critique of the status quo is achieved through reconfiguring who and what gets to act—shifting power dynamics and decision-making capacities in ways that differ from the present-day obsession with privatised surveillance, data collection and decision-making by machine learning systems. Eccentric engineering gives glimpses of what it might look like to configure systems where collectives of humans and other species can enact more agency in how we make the world. In revealing these other intelligences to be lively and spectacular, these works remind us that AI is but one very limited and narrow form of intelligence among many others.

Tega Brain is an Australian-born artist and environmental engineer whose work examines issues of ecology, data systems and infrastructure. She has created wireless networks that respond to natural phenomena, systems for obfuscating fitness data, and an online smell-based dating service. Her work has been shown widely, including in the Vienna Biennale for Change and the Guangzhou Triennial. Her first book, Code as Creative Medium, is co-authored with Golan Levin and published by MIT Press. More at tegabrain.com and @tegabrain.