In 1964, Marshall McLuhan described how the content of any new medium is that of an older medium. This can make the new form stronger and more intense:
The content of a movie is a novel or a play or an opera. The effect of the movie form is not related to its program content. The “content” of writing or print is speech, but the reader is almost entirely unaware either of print or of speech.
Marshall McLuhan, Understanding Media (1964).
In 2024, this is the promise of the generative AI tools we currently have access to: ChatGPT, DALL-E, Claude, Midjourney, and a proliferation of others. But this is also the end result of 30 years of new media, of the digitization of anything and everything that can be used as some form of content on the internet.
Our culture has been built on these successive waves of media, but what happens when there is nothing left to feed the next wave?
It feeds on itself, and we come to live in an era of Soylent Culture.
Of course, this has been a long time coming. The atomization of culture into its component parts; the reduction of clips to soundbites, to TikToks, to Vines; the memeification of culture in general were all evidence of this happening. This isn’t inherently a bad thing; it was just a reduction to the bare essentials as ever-smaller bits of attention were carved off of the mass audience.
Culture is inherently memetic. This is more than just Dawkins’ formulation of the idea of the meme to describe a unit of cultural transmission while the whole field of anthropology was right over there. The recombination of various cultural components in the pursuit of novelty leads to innovation in the arts and the aesthetic dimension. And when a new medium presents itself, due to changing technology, the first forays into that new medium will often be adaptations or translations of work done in an earlier form, as noted by McLuhan (above).
It can take a while for that new medium to come into its own. Often, it’ll be grasped by the masses as ‘popular’ entertainment, and derided by the ‘high’ arts. It can often feel derivative as it copies those earlier stories, retelling them in a new way. But over time, fresh stories start to be told by those familiar with the medium, with its strengths and weaknesses, tales that reflect the experiences and lives of the people living in the current age and not just reflections of earlier tales.
How long does it take for a new medium to be accepted as art?
First they said radio wasn’t art, and then we got War of the Worlds
They said comic books weren’t art, then we got Maus
They said rock and roll wasn’t art, then we got Dark Side of the Moon (and Pet Sounds, and Sgt. Pepper’s, and many others)
They said films weren’t art, then we got Citizen Kane
They said video games weren’t art, and we got Final Fantasy 7
They said TV wasn’t art, and we got Breaking Bad
And now they’re telling us that AI Generated Art isn’t art, and I’m wondering how long it will take until they admit they were wrong here too.
But this can often happen relatively ‘early’ in the life-cycle of a new medium, once creators become accustomed to the cultural form. As newer creators begin working with the medium, they can take it further, but there is a risk. Creators who have grown up with the medium may be too familiar with the source material, drawing their representations from within the medium itself.
F’rex: writers on police procedurals, having grown up watching police procedurals, simply endlessly repeat the tropes that are foundational to the genre. The works become pastiches, parodies of themselves, often unintentionally, unable to escape from the weight of the tropes they carry.
Soylent culture is this, the self-referential culture that has fed on itself, an Ouroboros of references that always point at something else. The rapid-fire quips coming at the audience faster than a Dennis Miller-era Saturday Night Live “Weekend Update” or the speed of a Weird Al Yankovic polka medley. Throw in a few decades’ worth of Simpsons Halloween episodes, and hyper-referential, meta-commenting titles like Family Guy and Deadpool (print or film) seem like the inevitable results of the form.
And that’s not to suggest that the above works aren’t creative; they’re high examples of the form. But the endless demand for fresh material in the era of consumption culture means that the hyper-referentiality will soon exhaust itself, and turn inward. This is where the nostalgia that we’ve been discussing comes into play: a resource for mining, providing variations of previous works to spark a glimmer of “Hey, I recognize that!” in the audience’s eyes.
But they’re limited, bound as they are to previous, more popular titles, art that was more widely accessible, more widely known. They are derivative works. They can’t come up with anything new.
Perhaps.
This is where we come back to the generative art tools, the LLMs and GenAIs we spoke of earlier. Because while soylent culture existed before the AI art tools came onto the scene, it has become increasingly obvious that they facilitate it, drive it forward, and feed off it even more. The AI art tools are voracious, continually wanting more, needing fresh new material to increase the fidelity of the model, that hallowed heart driving the beast that continually hungers.
But the model is weak; it is vulnerable.
Model Collapse
And the one thing the model can’t take too much of is itself. Model collapse is the very real risk of a GPT being trained on LLM-generated text, or more generally of any generative model being trained on generated data. Identified by Shumailov et al. (2024), and “ubiquit(ous) among all learned generative models”, model collapse is a risk that creators of AI tools face in further developing them. In an era of model collapse, the human-generated content of the earlier, pre-AI web becomes a much more valuable resource, the digital equivalent of the low-background steel sought after for the creation of precision instruments in the era of atmospheric nuclear testing, when background radiation made newly produced steel unsuitable for use.
(The irony that we were living in an era when the iron was unusable should not go unnoted.)
“Model collapse is a degenerative process affecting generations of learned generative models, in which the data they generate end up polluting the training set of the next generation. Being trained on polluted data, they then mis-perceive reality.”
(Shumailov et al., 2024).
Model collapse can result in the models “forgetting” (Shumailov et al., 2023). It is a cybernetic prion disease. Like the cattle that developed BSE from being fed feed that contained the ground-up parts of other cows sick with the disease, the burgeoning electronic “minds” of the AI tools cannot digest other generated content.
Soylent culture.
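To see the mechanism in miniature, here’s a toy sketch, my own illustration rather than anything from the Shumailov et al. paper, assuming nothing beyond Python’s standard library (the sample size and generation count are arbitrary): fit a one-dimensional Gaussian “model” to some data, sample a new dataset from the fit, retrain on those samples, and repeat.

```python
# Toy model collapse, a hypothetical sketch (not from Shumailov et al.):
# fit a 1-D Gaussian "model" to data, sample a new dataset from the fit,
# refit on those samples, and repeat. Trained on its own output, the
# model loses the tails first, and the spread tends to drift toward zero.
import random
import statistics

random.seed(42)  # arbitrary, for reproducibility

N = 50             # samples per generation (arbitrary)
GENERATIONS = 100  # rounds of training on the previous model's output

# Generation 0: "human" data drawn from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(N)]

for gen in range(GENERATIONS + 1):
    mu = statistics.fmean(data)     # "train": estimate the mean...
    sigma = statistics.stdev(data)  # ...and the spread from current data
    if gen % 20 == 0:
        print(f"generation {gen:3d}: mean {mu:+.3f}, stdev {sigma:.3f}")
    # The next generation sees only this model's own samples.
    data = [random.gauss(mu, sigma) for _ in range(N)]
```

Run it and the printed stdev wanders; let it run long enough and it tends to pin itself near zero. Each generation learns only from the previous generation’s output, so the rare values in the tails vanish first and sampling noise compounds until the model remembers little more than its own mean.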
But despite the incredible velocity at which all this is happening, it is still early days. There is a great deal of research being done on the effects of model collapse and its long-term ramifications for the industry. There may yet be a way out from culture continually eating itself.
We’ll explore some of those possible solutions next.