Photo credit: Gianna Rizzo

Tue 27 Feb 2024

Looking at the Machine – FACT 2024 Symposium

ACMI

Where do cultural institutions begin with 'artificial intelligence'?

Let’s begin with that inescapable cultural question of new computational practices and ‘artificial intelligence’. What are the actual affordances, complexities and realities of artificial intelligence for the arts and cultural sectors? How do we ensure that creators are able to experiment and are not exploited? What does this mean for arts and cultural workers, who need to possess the technological and aesthetic understandings that ensure wise, forward-thinking and ethical choices?


Eryk Salvaggio (Cybernetic Forests, USA), Associate Professor Katrina Sluis (ANU), moderated by Dr Joel Stern (RMIT University)

Read the transcript

Watch the video with graphic notations by Jessamy Gee

More recorded talks from the FACT 2024 Symposium



This transcript was machine-generated and published for search and accessibility purposes. It may contain errors.

Thanks, Seb, for that excellent introduction. It's so good to hear the key themes of FACT laid out in that way. Super exciting couple of days ahead. Thanks also to Gavin and to Uncle Bill Nicholson for the generous and warm welcome. I'd also like to begin by acknowledging that we're on unceded Aboriginal land and pay respects to Indigenous elders past, present and emerging, and to any First Nations people who are here today. Yeah, my name's Joel Stern. Seb asked me before if I was going to do any hype work to sort of get the audience going, any interactive stuff, and the answer is no. So I'm based at RMIT. I'm a researcher, artist and curator. I work in the School of Media and Communication there, so I'm not ACMI staff. But it's really exciting to be invited to moderate and introduce this first session. I do feel part of the ACMI team in lots of ways. It's such an inclusive place, and I've been coming here for two decades now. My work is really concerned with questions of automation, computation, extraction and so forth in relation to art and culture, the politics and aesthetics of art, culture and technology. And it's a pleasure to introduce the two artists that are going to present in this first session. Eryk Salvaggio and Katrina Sluis. They'll each present in turn, followed by a conversation with me and some time for questions from you guys using the Slido app. So this session is called Looking at the Machine, and I think this idea of looking at is very apt for these two artists, researchers and thinkers. They both bring an extremely analytical gaze to the question of not just images, but all media. And obviously in the digital ecologies that we'll be discussing today, the distinctions between mediums, modalities and senses are more fluid than ever. So we shouldn't be surprised in these presentations to see the topic veer away from images into questions of sound, text, space and all manner of terrains in between. 
But to just return briefly to the idea of the analytical gaze, this is about a way of looking or a way of seeing, as John Berger famously put it. And Katrina was a bit annoyed that I got my John Berger reference in before her, but that's the privilege of the person who does the introduction. But Katrina’s Berger reference will be more sophisticated and substantial. But the idea of the way of seeing is that you almost look through rather than at the image, no matter how aesthetically compelling that image may be. So the image becomes a portal into its own underlying structures, infrastructures and social and political and technical construction and so on. And we really couldn't ask for two better people to lay this out for us. Eryk Salvaggio is a researcher and media artist. His work explores emerging technologies through a critical lens. And recently, like many of us in the room, I imagine he's been thinking about generativity and artificial intelligence, and especially about how the narrative mythologies of these technologies stack up against their actual impacts on social and cultural ecosystems. Eryk’s research blog, Cybernetic Forests, is one of the best things on the internet, in my opinion. It's filled with critical writing, artistic experiments, and he describes it as a space for sifting through the techno-social debris. And Eryk's going to be talking about noise and the information economy. His talk is called The Age of Noise. Katrina Sluis is a researcher and curator whose work is focused on the politics and aesthetics of art and photography, especially in computational culture. She's done many amazing things over the course of her career so far. 
But I think it's worth especially noting the extraordinary range of projects and artistic collaborations that Katrina instigated in the role of curator at the Photographers' Gallery in London throughout the 2010s, which made a crucial contribution to shifting the discourse on photography in relation to what were emerging contexts at the time, like machine vision, synthetic imaging, net culture, and so on. Katrina is based in Canberra, where she's head of photography and media arts at ANU, and also co-convenes the Computational Culture Lab. She's going to talk about what images are today in this moment of extreme technological acceleration, and what this might mean for the agency of artists and institutions. And the talk is called Beyond AI FOMO. So okay, so we're going to have Eryk up first for about half an hour, then Katrina for about half an hour, and then all of us here in conversation. So please welcome Eryk Salvaggio.

Thanks. Noise is a slippery word. It means both the presence and absence of information. Today it's in the urbanisation of our world, the hum of traffic and jet engines. Noise is also where we go to escape noise. In August of 2023, Spotify announced that users had listened to 3 million hours of white noise recordings. Noise to sleep to, noise to drown out noise. Noise is also the mental cacophony of data, on social media, of smartphones, and the algorithmic spectacle. The age of noise is a logical conclusion, a successful ending for the information age. And information, which was once scarce, is now spilling from the seams of our fibre optic cables and airwaves. The information age is over. Now we enter the age of noise. We pin the information age to the invention of the transistor in 1947. The transistor was quaint by today's standards, a mechanism for handling on-off signals. Engineers built pathways through which voltage flowed, directing and controlling that voltage in response to certain inputs. 
We would punch holes into cards and feed them to a machine, running light through the holes into sensors. The cards became a medium, a set of instructions written in the language of yes and no. In other words, it all started with two decisions, yes or no, one or zero. The more we could feed the machine, the more decisions the machine could make. Eventually it seemed the number of decisions began to encroach on our own. The machine said yes or no so that we didn't have to. By the start of the social media era, we were the ones responding to these holes. Like or don't like. Swipe left or swipe right. It all began with a maze of circuitry. The first neural networks, our adding machines, the earliest computers were designed to reveal information. Noise meant anything that crept into the circuits, and the history of computing is, in part, a history of noise reduction. The noise in our telephone wires and circuit boards, even our analog TV broadcasts, was background radiation. Energy pulsing invisibly in the air, lingering for millennia after the Big Bang exploded our universe into being. Our task was to remove any traces of it from our phone calls. Today millions of on-off calculations can take place in a single second. Put enough of these signals together, run them fast enough and you can do remarkably complex things with remarkable speed. Much of that has been harnessed into lighting up pixels. Put enough pixels together and you get a digital image. You get video games. You get live streams. You get maps, interfaces and you collect and process responses to live streams, maps and interfaces. And with what we call generative AI today, we obviously aren't using punch cards. Now we inscribe our ones and zeros into digital images. The data mining corporations behind social media platforms take these digital images and they feed them to massive neural nets and data centres. 
In substance, the difference between punch cards and today's computation is that our holes are smaller. But every image that we take becomes a computer program. Every caption and every label becomes a point of information. Today's generative AI models have learned from about 2.3 billion images with about 24 bits of information per pixel. All of them still, at their core, a yes-or-no decision moving through a structure. I don't say this to give you a technical overview of image processing. I mention it because the entirety of human visual culture has a new name. We used to call these collections archives or museum holdings or libraries. Today we call them data sets. This collected culture has been harnessed to do the work of analog punch cards. And these cards, these physical objects, were once stamped with a warning. Do not fold, spindle or mutilate. Our collected visual heritage in its digital form carries no such warning. We don't feed our visual culture into a machine by hand anymore, and the number of decisions that we have automated is so large that even the name is ridiculous. Teraflops. We upload images to the internet, pictures of our birthday parties, our weddings, embarrassing nights at the club, not so much me anymore. Our drawings, our paintings, these personal images meant to communicate with others are clumped together with other archives. Cultural institutions share a wealth of knowledge online for the sake of human education, the arts, history and beyond. And in training an AI model, all of these images are diffused, a word that is so neatly parallel to this diffusion of unfiltered information that we surround ourselves with. And for once, it's a technology named in a way that describes what it actually does. Diffusion models actually diffuse. This word means what it says. It dissolves the images, it strips information away from them until they resemble nothing but the fuzzy chaos of in-between television channels. Images are diffused into noise. 
Billions of good and bad images all diffused into noise for the sake of training an artificial intelligence system that will produce a billion more images. From noise into noise, we move from the noise of billions of images taken from our noisy data-driven visual culture, isolate them and dissolve them into the literal noise of an empty JPEG to be recreated again into the noise of one more meaningless image generated by AI among the noise of billions of other images, a count of images that already overwhelms any one person's desire to look at them. The information age has ended and we have entered the age of noise. We often think of noise as a presence. In America, we call it snow, the static. I've heard other names as well. It's called ants in Thailand. Other places have other metaphors. But snow is a presence. We see snow. We see noise. We hear noise. But noise, from a communication engineering perspective, is the absence of information. Sometimes that absence is the result of too much information, the slippery paradox. Information which cannot be meaningfully discerned is still noise. Information has been rushing at us for about two decades now, pushing out information framed as content to such an extent that almost no signal remains that is worth engaging with. Here's a map of the internet visualised 20 years ago. Since then, it has only grown, today becoming a disorienting flood of good and bad information coming through the same channels. And what we are calling generative AI is the end result of a successful information age, which in just 24 years has rewritten all cultural norms about surveillance, public sharing, and our trust in corporatised collections of deeply personal data. Server farms bind this data through regimes of surveillance and financialisation. The guiding principle of social media has always been to lure us into sharing more so that more data could be collected, sold, and analysed. 
They've calibrated the speed of that sharing to meet the time scales of data centres rather than human comprehension or our desire to communicate. And all this data has become the food for today's generative AI. The words we shared built ChatGPT; the images we shared built Stable Diffusion. Generative AI is just another word for surveillance capitalism: taking our data with dubious consent and activating it through services it sells back to us. It is a visualisation of the way we organise things, a pretty picture version of the technologies that sorted and categorised us all along. Instead of social media feeds or bank loans or police lineups, these algorithms manifest as uncanny images, disorienting mirrors of the world rendered by a machine that has no experience of that world. If these images are unsettling because they resemble nothing like the lives they claim to represent, it's because that is precisely what automated surveillance was always doing to us. The internet was the Big Bang of the information era, and its noisy debris lingers within the Big Bang of generative AI. Famously, OpenAI's chatbot stopped learning somewhere in April of 2021. That's when the bulk of its training was complete, and from there it was all just fine-tuning and calibration. Perhaps that marks the start of the age of noise, the age where streams of information blended into and overwhelmed one another in an indecipherable wall of static, so much information that truth and fiction dissolved into the same fuzz of background radiation. I worry that the age of noise will mark the era where we turn to machines to mediate this media sphere on our behalf. It follows a simple logic. To manage artificial information, we turn to artificial intelligence. But I have some questions. What are the strategies of artificial intelligence? The information management strategies that are responsible for the current regime of AI can be reduced to two, abstraction and prediction. 
We collect endless data about the past, abstract it into loose categories and labels, and then we draw from that data to make predictions. We ask the AI to tell us what the future will look like, what the next image might look like, what the next text might read like. It's all based on these abstractions of the data about the past. This used to be the role of archivists. Archivists used to be the custodians of the past, and archivists and curators, facing limited resources of space and time, often pruned what would be preserved. And this shaped the archives. The subjects of these archives adapt themselves to the spaces we make for them. Just as mould grows in the lightest part of a certain film, history is what survives the contours we make for it. We can't save everything. But what history do we lose based on the size of our shelves? These are a series of subjective, institutionalised decisions made by individuals within the context of their positions and biases and privileges and ignorances. The funding mandates, the space and the time. No offence. Humans never presided over a golden age of inclusivity, but at least the decisions were there on display. The archive provided its own evidence of its gaps. What was included was there, and what was excluded was absent. And those absences could be challenged. Humans could be confronted. Advocates could speak out. I'm reminded of my work with Wikipedia, simultaneously overwhelmed with biographies of men, but also host to a remarkable effort by volunteers to organise and produce biographies of women. When humans are in the loop, humans can intervene in the loop. Today, those decisions are made by pulsing flops. One of the largest data sets we have is a collection of 2.3 billion images, LAION-5B. It is the backbone of training most open source image generation models and likely most proprietary models. It is a scrape of the Common Crawl index of the web, and its curation was done by a machine learning tool called CLIP. 
CLIP's assessment criteria were simple enough. Using machine vision, compare the assortment of pixels in an image to the text in its caption. If the clusters of pixels looked like others that shared words in those captions, call it a match and include it. If it looked like a duck and it's labelled a duck, it's a duck. That was the end of the curatorial intervention into the data set behind generative AI. People seemed to think that humans were involved in curating this collection. They were not. Instead, a group of volunteers created a tool to collect these images, made decisions about what that tool would do, which I just described, and then they deployed it. These were the folks with decent enough intentions, by the way. They wanted to build a data set that people could look at and understand and evaluate. The data was out there online, and when you look at online culture exclusively through the lens of data to be analysed, it's not hard to see why they would grab all they could. Nobody looked at the result. How could they? It's 2.3 billion images. But the data set was collected, it was put online, and then used as training data for image synthesis. Of course, humans were involved in this data set's curation in an indirect way. These images were the noise that defined the tail end of that information age. It is online culture. It included samples of nearly every genre of visual evidence, memes and pictures of our pets and children and our drawings, but also photographs of Holocaust victims, of Nazis on holiday in France, images of comic books and pornography, Taylor Swift and Abu Ghraib. The Stanford Internet Observatory noted that the data set contained up to 3,000 images of child sexual abuse. The researcher Gary Marcus found that it contained countless examples of SpongeBob SquarePants and The Joker and other copyrighted material, while the researcher Abeba Birhane has counted a long list of racist, misogynistic and violent content in the data set. 
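[Editor's note: the curation step described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not LAION's actual pipeline: the toy vectors below stand in for the embeddings CLIP's image and text encoders would produce, and the 0.3 threshold is an assumption of roughly the same order as the cutoff LAION reported using.]

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def filter_pairs(pairs, threshold=0.3):
    # Keep only image-caption pairs whose embeddings agree closely enough.
    # This mirrors the curation step described above: no human looks at the
    # images; a similarity score alone decides what enters the data set.
    return [p for p in pairs
            if cosine_similarity(p["image_emb"], p["text_emb"]) >= threshold]

# Toy embeddings standing in for CLIP's image and text encoders.
pairs = [
    {"caption": "a duck on a pond",           # image "looks like" its caption
     "image_emb": [0.9, 0.1, 0.0], "text_emb": [0.8, 0.2, 0.1]},
    {"caption": "a duck on a pond",           # image does not match its caption
     "image_emb": [0.0, 0.1, 0.9], "text_emb": [0.8, 0.2, 0.1]},
]
kept = filter_pairs(pairs)
print([p["caption"] for p in kept])  # only the matching pair survives
```

The point of the sketch is how little judgment the filter encodes: a single score against a single threshold, applied billions of times.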
Given what we know of internet culture, this should be no surprise. The information age's relentless spread of images was all reduced into one very challenging stew. And from this noise came a promise, a seductive but dangerous promise. That is the promise of new possibilities, a paradigm shift. AI's integration of noise is not just on a metaphorical level. Noise is inscribed into these systems socially, culturally and technically. These images are diffused, dissolved into static, and the machine learns how that static moves as the image is degraded. It can walk that diffusion backward to the source image, and this is how it learns to abstract features from text into generated images. Our training data constrains what is possible to the central tendencies in that training set, the constellations of pixels most likely to match the text in the description. In other words, it navigates averages, trading meaning for the mean. But the original context is irretrievable. It reminds me of what Don DeLillo writes in White Noise, that the world is full of abandoned meanings. Meanings emerge from relationships. Words in the dictionary don't tell a story until they are arranged in certain relationships. And this is why memory matters. There is something disturbing to me about reducing all of history to a flat field from which to generate a new future. There are echoes of colonialism there. To take history, erase it and rewrite it in the language of new potentials, opportunities, prosperity, without deferring to those who built its foundations. AI offers us a possibility of what? Opportunities for whom? Prosperity for which people? Who gets to build a fresh start on stolen intellectual property? Who gets to pretend that the past hasn't shaped the present? Who has the right to abandon the meaning of their images? Of course, all of these images still exist. Nothing is eradicated. Nothing gets destroyed. But neither is anything honoured or elevated or valued. 
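[Editor's note: the forward half of the diffusion process described above is simple enough to sketch. This is a schematic illustration, not the noise schedule any production model actually uses: the blend factor and step count are arbitrary choices, and the learned reverse step, the part that requires a trained neural network, is only described in the comments.]

```python
import math
import random

def diffuse(pixels, steps=10, keep=0.9, seed=0):
    """Forward diffusion: blend a toy 'image' toward Gaussian noise, step by step.

    A diffusion model is trained on trajectories like this one, learning to
    predict the noise added at each step so that it can later walk the
    process backward, from pure static toward an image.
    """
    rng = random.Random(seed)
    x = list(pixels)
    history = [list(x)]
    for _ in range(steps):
        # Variance-preserving blend: sqrt(keep) of the signal survives each
        # step, and sqrt(1 - keep) of fresh Gaussian noise is mixed in.
        x = [math.sqrt(keep) * v + math.sqrt(1 - keep) * rng.gauss(0.0, 1.0)
             for v in x]
        history.append(list(x))
    return history

image = [0.2, 0.8, 0.5, 0.9]  # four toy "pixels"
trajectory = diffuse(image)
# After ten steps only about 0.9**5, roughly 59%, of the original signal
# coefficient remains; run it long enough and nothing but static is left.
```

Run long enough, every image ends in the same indistinguishable static, which is why the talk can say the data set's entire visual culture is "diffused into noise".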
Nothing in the training data holds any more meaning than anything else. It's all noise. Images of victims and perpetrators fuse together for our enjoyment. As noise, tiny traces of trauma inscribed for the sake of seeing something to post online. But I want an imagination that moves us towards a resolution of the traumas of the past rather than simply erasing them. The AI image is not new in the sense that it creates something. Rather than new, the AI image is random, old patterns adapted to random noise. That's distinct from newness. It is more true to say that the image is wrong. It is a hypothesis of an image that might exist in the static based on all that has come before. The image that emerges is also noise, constrained by language and data. It references language and data to find clusters in that static. It is a prediction of what the image might be, a hypothetical image, constraining every possible image through the filter of our prompts. And all of these abstractions are wrong. No image made by image synthesis is true to the world, but every image is true to the data that informs it. It would be lovely to think of AI as creating something new. The age of noise offers us only a false reprieve from the information onslaught. This is not merely imaginary, though the imagination is what we are fighting for. The images themselves contain discernible traces of our past. The data that constrains that noise is shaped by racism, pornography, stolen content, images of war and abuse. It is shaped by the way we label our images online. The training data for the prompt 'white girls' contains thousands of images from Victorian-era portraits. The training data for 'black girls' depicts sexualised adult women, including explicit pornography, which I have censored here as black boxes. 
Some companies are trying to navigate this by inserting diversifying words into our prompts without telling us, which solves the problem through the interface, but the models still produce stereotypes. British people are almost always elderly white men. So are doctors. So are professors. So are a lot of things. Mexican men are typically depicted with sombreros in Midjourney. The training data for these bodies informs that outcome, and they weigh on the representations of their outputs. This diversity of human bodies cannot really make it through a machine that constrains images to their composites. In the white noise of AI, we are all fused together into one and sorted by the weight of the most common. And today, all of these training data sets are offline. We can't look at them. We can't analyse them or figure out what's in them, figure out the biases that are shaping the images that come from these products. We can no longer examine them for their traces. We can't study their genealogies. The AI image is dressed up as dreams or imaginations of machines, but few dream of their own overt sexualisation or dehumanisation into racist caricature. Humans have biases, but humans also have a consciousness. We work to raise the consciousness of people. You cannot raise the consciousness of a machine. So we must raise the consciousness of those who designed them. We must intervene in the shape of the data sets, and we must propose new strategies beyond reduction and prediction to counter the hostility of building a future from the reduction of history into an infinite automated remix. That is not to say we should dispense with the past. Far from it. The idea of the remix is that we choose elements to arrange, and we move that culture forward with thought, arrange the pieces to the moment. The remix is not random pastiche. It is a thoughtful engagement. AI images are not a remix. They are the constraint of random noise through prediction. 
They don't engage the past or understand it. They reanimate it like a seance, but then they lie about what the ghosts tell us. The shapes these models sketch into noise are constrained by the shapes of the past. I liken this to a haunting, and I know that Mark Fisher has described the phenomenon of sonic hauntology, a music that referenced past visions of the future, a future that seems to have been cancelled. It reflects an inability to dream of the future on any other terms but nostalgia. And I would say that AI images are hauntological. The structures of our collective past shape them. This is true not only for the prediction of images, but for any system which relies on previous data to predict future outcomes. If we look to the census, that starting point of data collection efforts, what we see in the data is marred by what could not be written into it. The data sets are haunted by what they did not measure, children who were not born because mixed-race couples were forbidden, home ownership records that don't exist because black families could not buy homes. Data contains only the trajectories of history that have been allowed to us up to now. If we feed that information in the service of future decision-making, then these ghosts become police and the living become sorted by the dead. Noise is the residue of the Big Bang. Noise is where the past lingers. Noise is where the ghosts are. It's the absences that haunt us, and we ignore history at our peril. When we talk about data and generative AI models, we are talking about images. When we talk about data sets, we are talking about vast collections of images. It would not be a mistake to say that a vast collection of images is an archive. But we don't say archives in AI. We say data sets. An archive, I think, proposes a greater degree of responsibility. Archives are curated. Collections rely on humans to examine and assess images. Archives contextualise while data sets strip context away. 
Archives find relationships between people, places, and things. These data sets link objects only to resemblances of shape and colour. The data set is an archive diffused, and these tensions are the subject of great fascination to me. In my work, I think about the age of noise and how it has been constrained by the age of information. I use found footage and archival images as a way of thinking through these tensions, placing the archive in dialogue with the noise that AI uses to make sense of it. In my work with AI, I've begun to think through this diffusion, selecting elements of visual archives and placing them into a tension site marked by generative AI models that I've tricked into circumventing training data, generating abstract patterns. Ideas like this allow me a way to escape the contempt of data-driven abstraction and to pursue some version of uncharted possibility from inside this tension of static and definition. I want to see if new languages emerge. I want to know what the avant-garde can be in an age when noise is mainstream. I'm visualising this liminal space between archive and datafication, between home movies and prompt injections. I don't know if I'm resisting or embracing AI by hanging out in this space, by thinking like a diffusion model, but I'm working alongside it and within it in a bid to understand it as best I can. In 2023, I was invited to present some of my methods for hacking AI art systems at the largest hacker convention in the world, DEF CON 31, as part of a surprisingly White House-backed AI Village. I did not know that Joe Biden would be involved in a hacker convention when I said yes. I was able to share and learn some strategies to make work that the tools weren't designed to make, most notably asking it to generate images of noise. It actually can't do that. It's not the way they're designed. They have to strip noise away from the start point of noise in the direction of a prompt. 
So if you ask it to give you noise, it's constantly trying to remove noise, and it gets stuck in a feedback loop that generates these kinds of abstractions. How ironic that these machines literally cannot make noise. I find these beautiful and elegant as a result of that messy contradiction. Not beautiful in the usual sense that we evaluate an AI image. There's none of that pristine lighting or supermodel faces from composites of all supermodels and all lighting techniques. They're the residue of a system that has been set adrift without data about the past. It's a rendering of an image from a machine that has been told, you don't actually know anything. Swim is a work from this year that I created while trying to visualise all of these ideas: information from the archive dissolved while shaping the future, the past stretched out endlessly into the present until it is perceived as something new. In Swim, a swimmer from archival footage frolics in a pool, now transformed into animated frames of an AI-generated glitch. Over the course of nine minutes, the footage is dissolved, as is all training data, to be quantified and linked to words. Swim is a description and a prompt. The archives used to be where history went. Now it is where the future is made. Here I've blended the resulting noise with a real body, a body from the archives, the image of a swimmer. I'm well aware of the presence of the male gaze in this footage. The swimmer is a body being analysed, studied, traced, and this film was labelled in the archive as a form of erotic entertainment. The male gaze moves into automated form, enacting all the leering but at a remove. The analysis of the body in the archive is what makes deepfake pornography possible. When we look at bodies in AI, we look at them through the historical gaze, itself a masculine gaze. 
And yet, this swimmer in the midst of this surveillance, in the midst of being dissolved, in the midst of being translated into media, strikes me, quite simply, as enjoying herself. I don't know, of course, but a part of me longs for that weightlessness, that ability to move freely while submersed in what might, in any other context, drown us. This work is a critique of AI, but it's also a hopeful refusal of my worst fears about these systems and their relationship to creativity. That is, that our techno-social future reflects AI's techno-cultural forms. If so, we can use these tools to push back against their logic, to reveal what is concealed by the interfaces and the language and myths of AI, to surface what swims in the archive, beneath the prompt, in all its messy, contradictory human complexity. I'm often asked if I fear that AI will replace human creativity, and I don't remotely understand the question. Creativity is where agency rises, and as our agency is questioned, it is more important than ever to reclaim it, through creativity, not adaptability, not contorting ourselves to machines, but agency, contorting the machines to us. I fear that we will automate our decisions and leave out variations of past patterns based on the false belief that only repetition is possible. Of course, my work is also a remix. It has a lineage, to Nam June Paik, who famously quipped, 'I use technology in order to hate it properly.' And this is part of the tension, the contradictions that we're all grappling with. I'm trying to explore the world between archive and training data, between the meaningful acknowledgement of the past and the meaningless reanimation of the past through quantification. Archives are far more than just data points. We're using people's personal stories and difficult experiences for this. There's the beauty of lives lived, and the horrors, too. These images are more than data. There is more to our archives than the clusters of light-coloured pixels. 
Our symbols and words have meaning because of their context and collective memory. When we remove that, they lose their connection to culture. If we strip meaning from the archive, we have a meaningless archive. We have five billion pieces of information that lack real-world connections. Five billion points of noise. Rather than drifting into the mindset of data brokers, it is critical that we as artists, as curators, as policymakers approach the role of AI in the humanities from a position of the archivist, historian, humanitarian, and storyteller. That is, to resist the demand that we all become engineers and that all history is data science. We need to see knowledge as a collective project, to push for more people to be involved, not less, to insist that meaning and context matters, and to preserve and contest those contexts in all their complexity. If artificial intelligence strips away context, human intelligence will find meaning. If AI plots patterns, humans must find stories. If AI reduces and isolates, humans must find ways to connect and to flourish. There is a trajectory for humanity that rests beyond technology. We are not asleep in the halls of the archive, dreaming of the past. Let's not place human agency into the dark, responsive corners. The challenge of this age of noise is to find and preserve meaning. The antidote to chaos is not enforcing more control. It's elevating context. Fill in the gaps and give the ghosts some peace. Thanks. Whoa, thanks, Eryk. Wow. Hi, everyone. My name's Katrina Sluis, and I'm really excited to be here today and slightly intimidated about going after that. And today, I'll be talking about AI for the demo and thinking through some of the things that Eryk has been talking about and how it then plays out in a kind of museum and cultural context. And I'm very aware that there's a lot of anxiety around this moment. We're constantly being berated that we need to get on the AI bandwagon. 
There is already a sense, especially propagated through museum and technology conferences, that museums are left behind and we now need to look to this new technology, much as it was big data five years ago. Now it's AI. And we need to think through how we can reconfigure the museum in relation to our audiences, because, let's face it, expertise has fled the institution. And we need to catch up with the big boys in Silicon Valley. But before we get excited about moving fast and breaking things, I think, as we've sort of heard, that this is a moment of a radical reconfiguration of the relations between seeing and knowing. And it's not just some fancy tools that are going to change Photoshop, but actually, as we've heard, it strikes to the core of how we make sense of the world and communicate both our past and our future. Here's my token John Berger slide. And this is one example of where, in this paradigm, as we've heard, the relationship between the model and the training data becomes this radical tension. And here we have, in the field of computer vision, John Berger's BBC production, a snapshot of two different models trained on two different data sets and how they perceive what they're viewing on screen. You'll see that, depending on whether the model was grounded in the sort of West Coast culture of Silicon Valley or not, fish might become burritos and other sorts of things. So as we've just heard, there is this reconfiguration where older models of understanding and seeing the world are being radically reconfigured at the back end. And this is a shift from an optical or indexical regime, in which we would be pointing cameras at the world and there would be at least some kind of preservation of the light falling on the sensor, imprinting it with something of nature, to a kind of simulation of that through a kind of stochastic process of pattern and randomness, as we've just heard beautifully from Eryk.
And so we're in a situation where museums and galleries are still thinking through the legacies of 20th century media within a kind of computational culture that's in the process of cannibalising it and operationalising it as a lure for various kinds of new forms of value and sociality. And my colleague, Andrew Dewdney, has called upon us to even forget photography, as photography is an old 20th century word that is actually concealing our ability to make sense of the present moment and conceals a kind of back end of datafication, which does call on the history of photography as a kind of scientific instrument of rationalism, of classification, but is intensively animated in this moment as a kind of simulation. Just as we heard earlier in the Mark Fisher reference to hauntology, Andrew calls it a zombie. So how are cultural institutions sustaining this image of the zombie, and should we be slaying it? So there is a kind of question here of what is currently emerging and what's the language and the tools we should be using to actually engage with this new moment. And there have been many, many different attempts to kind of theorise what's happened since the massification of photography, when the camera converged with the computer and the phone, and what that might mean for questions of cultural value, and the very terminology we use to understand this. So we are in an unstable moment, we still don't have the terms to understand it, and that's okay. We don't need to rush through and pretend as institutions that we know the answers and retain our cultural authority. So of course at the front end you have a simulation of historical photographic culture or painterly culture, depending on what you're generating. And of course, as we've heard, this actually is a kind of iceberg, where the tip is the image and it conceals a back end of complicated resource extraction.
And of course we've been hearing a lot about that recently, about communities having to compete for local water with Microsoft in Iowa, and the kind of geopolitics of water. Sam Altman saying don't worry, we can geoengineer our way out until we can find a solution. But this is really happening right now, just as climate change or climate collapse is happening as we speak. And we've been reading this week in the news about the potential collapse of the AMOC weather system, which will completely reconfigure the planet in relation to climate tipping points. So this is all happening at a moment of climate emergency which is happening in the present. Of course this also requires human labor. Astra Taylor calls it fauxtomation rather than automation. There is often a human in the loop, but they are the ones being paid a cent or two, or not at all, to label this data, if we think about the age of machine vision, and now, in the age of general models, to correct the model and intervene in it. So because these models are beyond repair, the human in the loop is there to correct it, to make sure the output does not offend given its original training data. We also have a kind of crisis happening of the commons, as we've heard. Museums and galleries of course have been great supporters of creative commons over the years and the sense of open culture. And of course this has created another crisis of open culture, about what it means to share, and how we understand that whilst we were all encouraging our audiences to upload pictures on Flickr, their children are now in training data used against minorities in China and other places. So there is a kind of crisis of what open means and what a progressive open knowledge system might look like in an age of data colonialism and extraction.
And as this quote by Dwayne Monroe says, the tech industry has hijacked the commons ('freely available' is a phrase I see in computer science papers often) and then rents us access to what should be open. So in the case of poor photographers and my poor photography students, their future is to be producers of authentic training data and then consumers of the software which their data has been used to train. So Hito Steyerl describes this as a great new political economy of data laundering and extraction, of noise and messy data turned into new forms of value. So there is a kind of massive question about what a kind of progressive data practice might be if we think through the histories of the museum. And of course we're also seeing, as many AI ethics researchers say, that automation in an age of AI is a great way of sidestepping accountability. And I was very interested to see, as you may have seen in the news, we're already seeing that with image making, where the image of Georgie Purcell altered by Channel 9 was actually excused because it was the machine that did it. No human intervention, we're really sorry this happened to you, it was the AI content fill. There was actually nobody at the computer at the time or even looking at the output; it was the machine that made this mistake. And of course that's a kind of small example, but when you start thinking through these technologies working out who gets a mortgage or not, who gets a job or not, it becomes a very important issue in relation to our audiences and communities and their experience. So does the museum risk being an onboarding tool for big tech now in relation to these kinds of debates? Because the fact that machines can now make something that passes for art is great marketing for Silicon Valley companies engaged in a new AI arms race.
And to give you a sense of the kind of scale of that, a friend of mine was telling me her partner is a Google worker and they were told recently that Google will need to invest the next three years of revenue entirely into servers for them to catch up with Microsoft. So this is the kind of context in which we're seeing huge tech worker layoffs and the kind of concentration of capital around massive extraction in order to train even larger models at scale. So as Hito Steyerl mentioned in our recent event, Critical AI in the Art Museum, this creates a whole set of ethical and cultural questions for how we present these technologies and how we deploy them on our audiences and educate their eyeballs, as in the case of the Italian museum using machine learning to track and optimise how visitors engage with art. So yes, there is a question here: do we risk naturalising military technology in the museum, if we think of Boston Dynamics robots suddenly making art instead of running around with guns on their heads killing people? I was chilled when I saw that work and a three-year-old next to me said, oh, daddy, cute. And here we also have an example of climate activists staging a die-in inside MoMA's installation of Refik Anadol's work, Unsupervised. So this kind of question of what kind of AI we let into the institution, and how is that represented, and how do our audiences engage with that? And of course, this is a really tricky question for institutions because, as I love this quote by Victoria Walsh, Andrew Dewdney and Emily Pringle from way back in 2014, the digital is primarily understood as a technical tool rather than a knowledge system and a culture or even a political economy.
So what this creates is a scenario where it's very confusing, and different parts of the institution model the digital very differently, making it very hard to comprehend how we might work differently using different kinds of methods in the institution, because what one person means by the digital or AI now can mean very different things depending on the department. So the development team sees this AI moment generally as a new source of patronage. Why aren't we doing a project with Google? It'll bring in lots of money. It'll be fantastic. These sorts of things have been said to me before. The executive team are very excited. We need to be doing AI. We will be really contemporary. People will really realise that we are at the cutting edge of this moment and we are thought leaders in this field. The comms team are, of course, going, great, our digital marketing can go into overdrive. We have new analytics platforms using machine learning in order to understand visitors in the museum, which is now a dangerous data black spot, and we must use machine learning to extract every data point from inside the institution in order to give our audiences what they want. And the exhibitions team, of course, they're like, it's okay, we don't need to think about this. We'll just get an artist in. They're the new avant-garde. They're going to tell us the answer. And whoa, isn't this a great moment in art history? And of course, the collections team are there just going, okay, how can we use this as a new tool of democratisation? How could we rethink the metadata? How could we be thinking through this? And of course, I've missed the poor education team. The poor education team, who are there thinking, okay, this is a moment for a new visual literacy and new forms of progressive methods, but no one listens to them.
So we have this situation where institutions are trying to address this moment, this reconfiguration of the relationship between seeing and knowing, audiences who are using these technologies and trying to understand what their implications are for culture and the future. So I think the question is then, well, it's very easy for me to stand here and critique that. What do we actually do? What do we do? And I think this is something that I can share my experience from because back in 2012, as the intro may have indicated, I was employed as the very first digital curator at the Photographers Gallery in London. This was a moment where the institution was also facing the flight of cultural expertise from the museum to the web. They wanted to build screens that everyone could share their images with and celebrate this new democracy of the image, but also were very aware they didn't know what it would mean for the cultural institution. And I think this is a moment where they expected me to come in and with a curatorial view to generate a new canon of photography, of artists working with these tools that would securely help the institution to manage this difficult transition. In reality, I think what we did was actually go, actually this is a moment where knowledge and understanding is diffused and practice-based research in the art museum becomes a really interesting place for asking questions and learning in public. And so I want to kind of begin to discuss a little bit about what that might look like and how we approach this moment in the Photographers Gallery. One of the last projects I did there back in 2019 was precisely to recognise the fact that in this moment, machines were obviously becoming extremely important viewers of photographic culture and we had no way of beginning to talk about what was happening in that space. My colleagues were saying, we just need to slow images down and look at them more closely. And I would say we need to speed them up. 
We need to look at them in a 50 millisecond glance, because that is the model of vision being encoded in these systems. And so we developed a lot of research in the institution with audiences, with computer scientists, with technologists and so forth, including restaging early computer vision experiments of machine seeing with audiences and staff to begin to try and crack open and begin to institutionally think through some of these kinds of questions. And this also meant failure, and it meant us admitting that we didn't know the answers. It meant us relinquishing that sense of cultural authority and not having the shiny, beautiful Google interface that everyone can get excited by, but instead doing small scale interventions with other people in the institution. And this also required us to move our attention away from the artist as a privileged site of knowledge about image culture toward other agents, such as computer scientists. And we did a lot of work looking at this person, Fei-Fei Li, who many of you might know, who created the dataset ImageNet, which is an extremely important object recognition dataset in computer vision. And she talks about how this collection of 14 million images stands in for a representation of the world, about how she thinks that by scraping lots of images off the web, you average out the bias of individual photographs, and how, looking at her own child, she understood the machine to be like a child who, through looking at lots of images, would learn about the world. So this was a radical shift, and, as a benchmark dataset, ImageNet led to the rejuvenation of neural networks and our current kind of AI moment. So we began to think through, like, the computer scientist as a really radical agent of visual culture. We began to read computer science papers, and we also began to think through what happens when you think through it not as an archive but as a dataset.
Photo institutions are often sending artists into archives to look at them and understand them. We asked artists to go and look at major photographic datasets and come back and commission work about what they saw. Some artists looked at, you know, 100 hours of images. They tried to learn how to work with the scale of what that meant. We also tried to exhibit ImageNet. So what does it mean for a photo institution to exhibit 14 million images? So that was a completely insane research experiment. It was very hard to even get ImageNet, at a point when it was being taken down because there was a lot of interest, as we've heard before, in its sexist and racist stereotypes. And unbeknownst to us, Trevor Paglen released ImageNet Roulette just after we started to exhibit it at 50 milliseconds an image over four months. So how do you begin to think of the new temporality and scale and experiment with that? What else is really kind of crucial in that respect is also finding accessible shared resources and producing those tools. And what's been really exciting is that a lot of the texts that we commissioned from 2019 are being referenced in AI ethics papers, and works are now traveling into major papers. This is a commission that we did with Mimi Onuoha, who was here last year. It's now being referenced by computer scientists. However, because this kind of work does not actually reach the mainstream newspapers or result in some big commission, it's very hard for the institution to absorb its own knowledge that it generates through that and value it. So this is the kind of question for us as to how do we begin to think through a different kind of practice in the institution? How do we understand it and its impact?
And what happens when audiences and artists and technologists become co-researchers in that practice in order to not just reify the front end or the output of such models, but to help understand and comprehend the back end of such models in order to hopefully build new literacies and competencies, both in ourselves but in the people we serve. So I'm going to wrap it up there. And if you're interested in more of these kinds of questions, I'll just direct you to the Critical AI in the Art Museum, which was a project, a set of discursive events that recently ran, hosted at ANU. There's an archive of all the talks and papers, and you're welcome to read some of those resources in thinking through what this kind of moment might be in relation to our programming and our cultural strategy. Thank you. All right, can everyone hear us okay? Yeah. God, those were two amazing presentations. My head is spinning a little bit because there was just so much complexity and, yeah, so many incredible provocations. I like the way both of you kind of lay out in a very sort of critical and analytical way, some of the sort of challenges, you know, pathologies even, and sort of intractable contexts that come out of this sort of technological acceleration and our struggle as sort of institutions and practitioners to deal with that context. But then, yeah, there's a bit of hope at the end, or at least, you know, this confronting of the question of what might a progressive data practice be in this context. And, you know, so in both of your presentations, there's a tension between kind of critiquing, reflecting, and on some level embracing AI. And Eryk, you said something that I thought was super interesting, which was that creativity and agency is about forcing the machine to sort of contort to the shape of the human rather than sort of reshaping the human in the form of the machine. 
And, yeah, I wondered if, you know, both of you might just sort of reflect on this question of sort of agency and contortion. You know, can we reshape the machine, especially, you know, in an era where sometimes we blame the machine, you know, for indiscretions, where, you know, this sort of governance by no one often means sort of no accountability. But, yeah, what does it mean to contort the machine rather than contort ourselves? Maybe you start, Eryk. Okay, the mic works. Great. I would say I'm always thinking about these machines, and we have this phrase, the black box. And I think that so often the black box is just sort of where people want to stop explaining things, more than that there's any sort of real mystery as to how these systems work. If you look at them from the point of, like, what do they do? Like, what do they actually produce? There's no mystery to that. And so for me, I like to think about what can I get it to do? I'm told that it can do these things. I'm told that it can't do certain other things. There's content restrictions, which are oftentimes valid and worthy of being there, but there's also a strange application of their logic. Women kissing is not allowed on OpenAI, but men kissing is. Don't know why. Those types of things, those types of enforcements are really interesting to me to break and to see if they can break, to see what sort of patchwork bandages they're putting on these systems. And so to me, it's kind of an exploration. It's a way of saying I want to do something. And here is a machine that is designed to do something, but I'm going to ignore what they've told me to do with it. I'm going to ignore the instruction manual, and I'm going to figure out what I want to do with it. I just think that's like a fusion of the critical and creative that is really productive for me personally. Super. What do you think, Katrina, about this sort of idea of contortion? Yeah.
I think what was very interesting with the work we're doing at the Photographers Gallery was seeing how photography was understood and mobilised in this new tech culture of computer vision and now image generation. And so when we were working with it, we were like, well, precisely this question, like how do you inhabit it? How do you... Where does this come from? What is its logic? And you begin to probe it, and you're like, oh, okay. So the amateur photograph is seen as more unbiased than a professional photograph in computer science. Oh, okay. And then... So amateur... And then you suddenly start poking it, and then you're like, oh. And suddenly I'm speaking to someone, Andy Nystrom, who's a guy in Canada, who didn't actually know that the Yahoo! Flickr Creative Commons dataset scraped 300,000 images off his Flickr page and has the honour of being one of the only photographers ever cited in the literature as a contributor to this new regime. So you start kind of pulling on these things and finding people. You suddenly discover datasets by computer scientists of a beautiful photographic project of their child's toys, and the kind of culture of image making that infuses this. And so being able to pull that into the photo institution, that it's not a black box, that these ideas are infused through these different practices and into these practices, and then maybe taking something from it into the institution, like an early computer vision experiment where people are looking at images for 50 milliseconds and actually going, well, let's not use Flickr photographs, let's use the collection from the institution. How does that change things? What does it mean if the audience is annotating images instead of the machine? Like what kind of knowledge is produced there? 
So I think, again, the great thing about research in an institution which is not in the university is the ability to do precisely these things in ways that are outside of the academy, that are experimental and generative. And I think this is where we all need to go in this moment. That point you made about sort of like getting the audience to annotate images, it brought to mind one of the provocative things that you said in your very poetic lecture performance, which is that history is now data science, I think, or sort of history can be thought of as data science and we're all engineers. So what are some of the implications of thinking of history in those terms? And maybe we can sort of use that to start to get into the question of the archive and the data set and the tensions between the two. Yeah, I mean, I meant that kind of as a warning, because I think words have a lot of power, and the way that we sort of relate to information has a lot of power. And one of the great things about AI, at least if we engage with it critically, is that it shows us the way that the organisation of information structures the ways in which we live. And if we think of data sets instead of archives, and this is not, you know, there's uses for data sets, right? Obviously. But when we think about history as a data set, when we think about all of these images that you've just described as nothing but data, and they're not contextualised, they're not put into any kind of context, then I think what ends up happening is that we strip a kind of meaning out of it. Yeah, I don't know. I'm lost in your question. I'm mulling it. But yeah, I'll stop there. Maybe you have something smarter to say than me.
I think what you're trying to say is that this reduction of everything into a data point is a kind of process of capitalist reification of knowledge, and it requires us to engage with it on those terms and to resist this endless transparency of data, to resist this kind of datafication and to insist on another relation with this material. Is that right? Yeah. No, I do think it's about, like, what are the connections that we're trying to make with a data set? When we're talking about generative AI, it really is just captions and images and pixel clusters. That's what we're looking at. And that's how things are being organised. And that's reducing everything to, very much literally, stereotypes. That's what these things are. They're stereotype engines. But they're also going to be constantly reproducing these stereotypes and these reductions and reinforcing those. And that's what worries me. I would love to see us be able to get into the data sets, like the project you described. And some of my own work is to get into the data sets and see what the relationship is between the pieces of data that we are drawing from. What are the stories in the data that are getting severed when it is represented and repackaged as this glossy photo magazine with, like, 6,000 fingers, which is what generative AI is. So the stories in the data set fascinate me. Because they are severed. And our only relationship to all of that history, all of that culture that is in that data set, the 1.3 billion images, is the shiny, glossy, seven-fingered supermodel. And that's just a strange thing to do to culture. Strange thing to do to visual culture. Yeah, I love your metaphor of the iceberg, with the images as the tip, and the iceberg itself as the sort of reality of resource extraction under the surface. I'm going to start to incorporate some questions that are coming through the Slido, because they're also responding to what we're talking about now.
One of them comes from Elliot Bledsoe, and it's to do with the archive and the data set. Because Eryk, you especially kind of put forward the archive as the kind of preferable, let's say, cultural entity for dealing with history, creativity, agency, as opposed to the data set in all its automation and mechanisation. Elliot asks, what does collection, preservation, and access mean if the notion of the archive sort of collapses into the data set? What will archives do? What will they collect? And I wonder if both of you might have a go at responding to that. Do you want me to jump in? Yes, please. This is a really interesting question, because there is a kind of trend now in certain spheres of ethical AI which is like, oh my god, we'll solve this by getting a domain expert involved. For one example, I was contacted by some engineers who were like, we've got this aesthetic art data set, but what will really be helpful is if someone who really knows what they're talking about annotates it, rather than it being unsupervised, scraped, whatever. And of course, then you get into these discussions about the last 12 years of museum theory, going, well, the meaning of the image is not beholden to the curator as the mediator, but is transmedial, it's complex. And they were like, no, no, no, that is not what we wanted to hear. And another kind of model that's coming up as well is like, oh, well, actually we need archivists and curators and information scientists here to kind of help us with the data set. So the problem with the data set is that it's not archival enough. But of course, the counter to that is to say that these kinds of methods are not without their own kind of power struggles and problems, and suddenly become like the new ground truth from which we're going to build, so it becomes quite complicated. So I guess, yeah, everything is an archive, but we need archivists more than ever is the answer, I think, from that kind of part of tech culture.
Do you want to add anything to that, Eryk? No. Okay, no worries. We've got so many really good questions. And yeah, I think implicit in sort of those answers is, you know, a defence of the role of the curator, or at least the thoughtful curator that is kind of engaging the archive, you know, in particularly sort of critical and imaginative ways. And there's a question here that kind of, I think, gets to these tensions of sort of what is the role of curation, especially in a context where automated forms of curation, recommender algorithms and other sort of technological processes are also playing a larger and larger role. Sorry, do you want to? Can I please answer this question? Yes, yeah, go on. I love it. I love it. Yeah. So I've been following how over the last 10 years there has been a tendency at tech conferences to tell data analytics people that they need to be more like curators. And at the same time in the cultural sphere, at least in Britain when I was there, you had reports by big data experts, like Counting What Counts: what big data can do for the arts, where curators are being asked to think a lot more like data analysts. And so there's this kind of weird miasma of curating moving in this way and being annexed into the computer lab and then back out again. And certainly when I was at the Photographers Gallery, the chair was always going, this is a moment in photography where we need curators now more than ever. And of course the cultural sphere completely ignored the massification of photography. And the computer scientists leant into it and said, here is a major grand challenge to visuality. We need to go there and solve how we're going to curate and classify all these images. And what's happened as a result is that we're now seeing automated curation on platforms. How are images to be aesthetically evaluated and classified and valorised on such platforms?
Well, we've just scraped a data set called the Aesthetic Evaluation Data Set from photo communities, amateur photo communities who have thankfully ranked each other's work. And we're going to use that as the ground truth data of what makes a good photo. And now we're going to operationalise it. And you see this kind of weird system now where some photo platforms, like EyeEm, are both a community but also a whole site for the creation of personalised tastes through photographers training the models. So again it's back to this weird iceberg thing, where Flickr on the one hand is a photo community, and on the other hand it exists as a data set being constantly recirculated amongst computer scientists. So yes, curation is being both cannibalised and valorised and intensified; it's paradoxical, I think. We labor under contradictions, don't we? So this question is about historically and presently disenfranchised groups who already have to fight to be included within the canon of art, and art is in scare quotes here, which is good. How do we advocate to a computer? How does that sort of struggle to access what you call, Katrina, the privileged site of the artist work in the context of these sort of forms of computation that in a way become entangled with what we used to think of as the curatorial? Any thoughts? I think a lot about the fact that these, I mean, I mostly speak from the perspective of generative AI and these sort of mass market no-code tools that are out there and the way that they are shaping things. And I can say that at least with data sets we don't have access anymore to anything that's informing those models. We just don't know anymore. Even the open source ones are offline, and so we don't know.
But I do wonder if there isn't an opportunity to rethink the data set from the archival perspective, in the sense of having a community-curated version of something like LAION-5B, something like Wikimedia Commons but not exactly like it, where people can voluntarily contribute what they want to contribute, and where people can look and examine where biases might be occurring and correct those biases internally. I think transparency and consent are really missing from the conversation around generative AI, and the flip side of that is how much work it puts on folks who are not going to be represented by a data set that scrapes five billion random images. So there are opportunities to intervene, but the question is the placement of who is responsible for doing that work, for that labour. At the moment a lot of that labour is just being exploited. And I wonder if the ship has sailed, because I think so many people would be very reluctant to voluntarily contribute to a data set for something like OpenAI to go ahead and use. But it is one way, and it is interesting to think about a massive community project like that for such a so-called paradigm-shifting tool. The paradigm really hasn't shifted; it's extractive capitalism as it's always been. But is there an opportunity for an organisation of individuals, of people, to make a challenge to the way that's done? I wonder, and I am optimistic that there is, but I'm pessimistic that we're actually going to do it.

I'll just riff on that pessimism a little bit with a couple of other questions, which I'll group together. One is: are we already out of control? Will Microsoft, Google and other big tech orgs determine our future cultural perspective? And then Tao asks a version of that question which I think is very connected to the recent lecture series that you convened, Katrina, on critical AI in the art museum.
She asks: what's at stake with big tech art washing? How might it differ from or relate to the art washing of other dubious industries, like Big Pharma and extractive industries? I wonder if you have a response to that.

Yeah, it's been interesting in Britain seeing all the backlash against BP sponsorship of institutions. I think there was this brilliant open letter by Constant, the media art collective in Brussels, to cultural institutions during the pandemic. The letter was saying: here is a moment where those institutions that are built on mutual respect, commoning and love are allowing the concretisation and further colonisation of the institutions by big tech. You're losing control over your own data. And we know institutions are poorly funded, et cetera, so we understand that context. But it was really saying that we need to be thinking very clearly about what is at stake with collaboration. I remember when the Google Art Project was being set up, and it was like, oh, this is just a happy project; we're not the experts, we'll give you all the tech, we'll have this great digitisation project, we don't own any of it, it's the commons, it's great. And then you say to someone, well, are you going to release all the metadata that you're generating as a result of all the interactions with the history of art on your platform? Do you see that as public cultural value? And they're like, oh, no, not like that.

Huh? The person asked the question: how would you see it, what would you see in that? Yes, yes. But you're talking about the digital. Yeah, yeah.

So we have this problem where what is progressive or not is really hard to understand. Having access to the whole history of art in new ways is fantastic. This is brilliant, and it reaches people in new ways. But at the same time, there are questions of circulation. The value isn't in the image anymore.
It's all the sets of relations around that image. So how do we comprehend that? These are the kinds of questions that institutions are, well, probably not really dealing with. But that question of how we should be investing our time in open source tools, supporting community models and other different ways, whether we should be putting all our collections on green servers, feminist servers, these sorts of questions, which are about infrastructure and not just about output, is really hard.

What do you think, Eryk, about these questions of the relationship between big tech and cultural institutions and museums?

I have, obviously, a different perspective as an artist. One of the things I've noticed is that a lot of the work that is the most exciting, the most literally spectacular, often requires the kinds of tools that a Google can provide, or a Microsoft or IBM or whoever. And as a result, a lot of the work ends up reflecting a kind of mystification of the technology. That happens through a kind of selection process in the labs that are bringing artists in. A lot of the work I see in this AI space is a mystification of the tools, inspiring awe about what AI is and how complex it is. When it comes to art washing, I'm really concerned about that. I think at this moment in time it's really crucial to engage critically with that mythology of AI and to question this sense of vast, overwhelming complexity that we will never understand. I think we need to go in the opposite direction, and to look for work that does that. That's a form of art that I don't know gets talked about that much, but that's all I'll say on that matter.

Yeah. We've been talking over the recent days about the role of critique within institutions, and I think demystification might be another way of putting it.
One of the questions coming through the Slido is about whether critique is a kind of gesture that just makes us feel okay about playing with these machines. The question also gets to your own work, Eryk: how you think about the incorporation of the data sets, and the images extracted into them, in the creation of your own work, and how you deal with the ethical contradictions that come up.

Yeah. One of the things I've learned is that I will be uncomfortable; while working with these things there will always be this balance. If I'm trying to understand the systems, then that means I'm going to be paying OpenAI, I'm going to be paying Midjourney, and I'm going to be talking about AI and how we can make work with it. So there is always that level of tension, of complicity with these tools. And I've found that you can say, no, I'm not going to use them, and to me that's actually probably the morally correct position. But it's also, I think, not doing a service to the types of understanding that we really need if we're going to educate ourselves in order to shape the way these tools come about. So essentially, I feel like I've weaponised my art too much. But I think about that complicity a lot. I recently made a music video where I fused my face with Sam Altman's and sang lyrics about tech art and its complicity, and I feel very much like that sums it up. I'm using the tool. I'm paying Sam Altman. I'm talking about AI at a conference, and there's an argument to be made that that's boosting the hype cycle, right? But at the same time, that puts an impetus on me to raise the types of concerns that are really important, concerns that many people who could not get on this stage would hopefully be served by me raising, or by the work raising. So yeah, I don't know. That may be a bit grandiose.
But that's how I try to anchor myself in that complicity: if I am complicit, then what is my responsibility?

Yeah, a bit of grandiosity is needed at times. I love the Nam June Paik quote that you brought in, about embracing technology in order to hate it better. And I was reminded that his work came out of the context of television sets going from not being in any American household in '54 or '55 to, by '56 or '57, being in the majority of households, with people watching six hours of TV a day. So there was this need to demystify the television, to go in through the screen somehow and break it from the inside. Katrina, what do you think about this question of critique and its role?

Yeah, I think it's a great one, and one that has framed a lot of the work at the Centre for the Study of the Networked Image, which I was involved with in London. Having said what I've just said, it's very hard to be inside; there is no inside and outside, as you were saying. There is no pure way of engaging with it. We're all in this mess. And I think there's a danger in the art world throwing sticks at the tech world, going, you are this, you are that, and then having a great show at a great institution to do it. What actually is more interesting, I think, is treating these kinds of systems as the dominant form of culture and having to engage with them. So we at the Photographers' Gallery threw a tenth birthday party for ImageNet, the data set, which was kind of weird. We had people protesting on Twitter about it. But it was one way I got Fei-Fei Li to come to the gallery and speak about this grey media that she'd created, which had changed visual culture.
And being able to understand that matters. I work with lots of people in tech companies, from the inside, who are also questioning their own practices, and I think that is a fantastically good thing. What we need, instead of critique for critique's sake, is actual space where we can collectively come together from these positions in order to understand these technologies that are reshaping visuality. That involves not alienating a computer scientist or a Google worker, but actually working with them and talking with them. And, I mean, this sounds very naive, et cetera, but instead of writing just another book, it means actually using the space of the institution, where these things can happen and be transformative, and happen in practice.

Is that what you are calling a critical practice?

Yeah, yeah. So the idea of the museum worker being a practitioner-scholar, and it not just being the academic who comes in, drops their research in the institution and leaves, but something generative from the people working there, generated by the questions they have and the problems that they see. And there is agency and value given to that kind of practice in the institution, because institutions institute conditions of possibility. So the institution has a role in facilitating a very different way of thinking through and talking collectively, rather than finger wagging, about these questions that ultimately affect all of us.

Yeah, I just want to piggyback on that and say I think there's a lot of work that art can do in presenting different models. And if you're presenting a different model, then you want the people who made the original model to be part of that, because that's how that model circulates and enters into the realm of ideas. I mean, the science fiction tropes are well known, right, like Star Trek. All of our technology looks the way it does because of Star Trek, and that's sort of the argument for art.
But having worked a lot in San Francisco and put together shows where artists were being very critical of technology, I also saw a lot of engagement from the folks building the very technology that was being critiqued, either saying, I didn't realise this, or, maybe this is giving me ideas for something I could do. And again, there's complicity in that, but it is also how you get to better systems. That dialogue is so important, so turning our backs on that dialogue, I think, is completely unproductive.

Well, that brings us right up to the end of our session. So please join me in thanking Eryk and Katrina.