In his talk at Rhizome’s 7×7 conference, the musician and comedian Reggie Watts said that artists should learn how to use the creative applications of artificial intelligence so they can ‘be at the table and have an opinion.’ His words summed up the ethos of the event, which invited artists who had not previously worked extensively with AI to experiment with it, and perhaps develop innovative ways of using it. But the event also raised the question of what it means for artists to be at the table: Is their role simply to demonstrate the technology’s use, or to offer and implement transformative ideas?
Dedicated to digital art and culture, the New York City-based organization Rhizome launched Seven on Seven (now styled as ‘7×7’) in 2010 as an artistic spin on the hackathon, an event where coders convene and compete to develop new software over a short, intense period of work. The concept was to pair seven engineers who have creative interests with seven artists who work with technology to collaborate on new projects, with the hope that their differing but complementary perspectives might yield new insights. In the early years, some teams managed to produce workable code or a new artwork in the 24 hours they were given to work together before presenting their projects at the New Museum. After a three-year hiatus, the event returned on January 27, 2024 for its 13th edition, with its first-ever thematic focus: artistic applications of artificial intelligence. Rather than developing new AI tools – a task more suited for a research lab than a hackathon – the pairings found ways to work with existing ones. In a marked difference from previous iterations of 7×7, this year’s artists were not expected to bring any technical know-how of their own – just creative goals that might test the limits of the current state of the technology. Can AI improvise? Can AI dance? Can AI be funny?
The last question was taken up by Cristóbal Valenzuela, co-founder and CEO of the creative AI company Runway, and the writer and comedian Ana Fabrega, whose jokes distill characters and situations into single sentences. Rhizome ditched the 24-hour limit on collaborations several editions ago, and the pair had an extended correspondence, testing several methods for generating work – in text, audio, and video – in Fabrega’s style. With a little tweaking, a voice model managed to incorporate the pauses and inflections of her comedic timing. ChatGPT offered an explanation of her work that sucked all the fun from it. Filters applied with AI over existing videos resulted in some weird and wonky effects – in one case, turning Fabrega into a fuzzy green monster.
Valenzuela and Fabrega’s presentation demonstrated that different AI tools have their own personalities. ChatGPT is already a polished product, whereas AI video editors are still in development. This was especially apparent in the 5-minute video the pair shared at the end of their talk, which, like Fabrega’s comedy routines, cycled through several unrelated fragments: a generated song, a screenshot of dry dialogue with a chatbot, a video of a filtered Fabrega saying she was ‘trapped in Midjourney.’ The tonal and stylistic differences among these vignettes were somewhat at odds with Valenzuela’s insistence that AI is merely a tool that can be bent to the goals of the user; it was apparent that the different applications had pushed Fabrega in distinct directions.
The presentation by Reggie Watts and the theoretical physicist Stephon Alexander yielded more satisfying results. Alexander began with a talk on the connections between quantum physics and AI: how the shapes of galaxies within the macrostructure of the universe resemble neurons, and how Leon Cooper – recipient of the Nobel Prize in Physics in 1972 and Alexander’s mentor at Brown University – introduced an equation from quantum mechanics to AI research that became the basis for the design of neural networks. Watts then mused on what this might mean for our understanding of spontaneity and creativity, and how such concepts can play out in neural networks. Both Alexander and Watts are serious musicians, and so to bring their ideas to life they enlisted engineer Ben Shirken to develop an AI distortion pedal, which was demonstrated in a jam session at the end of their presentation. Using a voice model trained on audiobooks recently released by Watts and Alexander, the pedal produced saturated tones augmented by phrases spoken in the two men’s voices – a richly layered effect that would not be possible without AI.
The day’s most spectacular presentation was a performance by a human dancer and Spot, the robot dog from Boston Dynamics, notorious for the frightening appearance of its claw-like head and its use by police and military organizations. Even dancing Spot looked menacing, stomping its metal paws on the stage with the grace of a trash compactor. Artist Miriam Simun choreographed the dance to show off the robot’s ‘athletic intelligence,’ working with several members of the Boston Dynamics team to make it happen (not just the advertised participant David Robert, the company’s director of human-robot interaction). In the presentation’s second half, Simun delivered a spoken-word performance, sharing ideas about touch and movement as forms of knowledge: Although a machine’s facility with language and numbers is currently privileged as evidence of intelligence, its ability to react to physical surroundings matters too. The artist’s presentation drew on her previous collaborations with nonhuman species, though this was her first encounter with robotics.
As with Fabrega and Watts, Simun’s experience was as much educational as it was creative. This marked a shift from the hackathon spirit of 7×7s past, where artists met technologists as peers, not pupils. Here, their seat at the table seemed to be predicated on their relative lack of expertise compared to the participating technologists, as if their role was to model the process of learning about AI for observers. It felt oddly appropriate that the jam session of Watts, Alexander, and Shirken concluded as the voice from the AI distortion pedal intoned: ‘Too much knowledge. Too much knowledge.’
– Brian Droitcour is editor-in-chief of Outland, an online magazine about digital art. Published courtesy of Art Basel.