The Butlerian Jihad

(this was originally published as Implausipod E0029 on March 2nd, 2024)

https://www.implausipod.com/1935232/episodes/14614433-e0029-why-is-it-always-a-war-on-robots

Why does it always come down to a Butlerian Jihad, a War on Robots, when we imagine a future for humanity? Why does nearly every science fiction series, including Star Wars, Star Trek, Warhammer 40K, Doctor Who, The Matrix, Terminator, and Dune, have a conflict with a machinic form of life?

With Dune 2 in theatres this weekend, we take a look at the underlying reasons for this conflict in our collective imagination in this week's episode of the Implausipod.

Dr Implausible can be reached at DrImplausible at implausipod dot com

Samuel Butler’s novel can be found on Project Gutenberg here:
https://www.gutenberg.org/cache/epub/1906/pg1906-images.html#chap23


Day by day, however, the machines are gaining ground upon us. Day by day, we are becoming more subservient to them. More men are daily bound down as slaves to tend them. More men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time.

But that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question. War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species.

Let there be no exceptions made, no quarter shown. End quote. Samuel Butler, 1863. 

And so begins the Butlerian Jihad, which we’re going to learn about this week on the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I'm your host, Dr. Implausible, and as we've been hinting at for the last few episodes, today we're going to take a look at why it always comes down to a war between robots and humans. We're going to frame this in terms of one of the most famous examples in all of fiction, that of the Butlerian Jihad from the Dune series, and hopefully time it to coincide with the release of the second Dune movie by Denis Villeneuve on the weekend of March 1st, 2024.

Now, the quote that I opened the show with came from Butler's essay Darwin Among the Machines, from 1863, and it was further developed into a number of chapters in his novel Erewhon, which was published anonymously in 1872. As the sources are from the 19th century, they're available on Project Gutenberg, and I'll leave a link in the notes for you to follow up on your own if you wish.

Now, if you weren't aware of Butler's story, you might have been a little confused by the title. You would have been wondering what the gender of a robot is, or perhaps what Robert Guillaume was doing before he became governor. But neither of these are what we're focused on today. In the course of Samuel Butler's story, we hear the tale from the voice of a narrator, as he describes a book that he has come across in this faraway land that has destroyed all machines.

And it tells the tale of how the society came to recognize that the machines were developing through evolutionary methods, and that they’d soon outpace their human creators. You see, the author of the book that Butler’s narrator was reading recognized that machines are produced by other machines, and so speculated that they’d soon be able to reproduce without any assistance.

And each successive iteration produces a better designed and better developed machine. Again, I want to stress that this is 1863, and Darwin's theory of evolution is a relatively fresh thing. And so Butler's work is not predictive, as a lot of people falsely claim about science fiction, but speculative, imagining what might happen.

And Butler's narrator reads that this society was being speculative too, and they imagine that as the machines develop, grow more and more powerful, and gain more ability to reason, as they outpace us, they may set themselves up to rule over humans the same way we rule over our livestock and our pets. Now, the author speculates that life under machinic rule may be pleasant, if the machines are benevolent, but there's much risk involved in that.

So the society, influenced by the suasion of those who are against the machines, institutes a pogrom against them, persecuting each one in turn, based on when it was created, ultimately going back 271 years before they stopped removing the technology. So what kind of society would that be like? Based on what Butler was writing, they'd be looking to take things back to about 1600 AD.

Which would mean it would be a very different age, indeed. Is that really how far back we want to go? I mean, why does it always come down to this? To this war against the machines? Because it’s so prevalent. We gotta maybe take a deeper look and understand how we got here.

Ultimately, what Butler was commenting on was evolution, extrapolating from observed numbers, given that there were so many more different types of machines than known biological organisms, at least in the 1800s, what the potential development trends would be like. Now, obviously, our understanding of evolution has changed a lot in the subsequent hundred and fifty years, but one of the things that's come out of it is the idea that evolution may be a process that's relatively substrate neutral.

What this means, as described by Daniel Dennett in 1995, is that the mechanisms of evolution should be generalizable. These mechanisms require three conditions, and here Dennett is cribbing from Richard Lewontin: evolution requires variation, heredity or replication, and differential fitness.

And based on that definition, that could apply almost anywhere. We could see evolution in the biological realm. It exists all around us. We could see it in the realm of ideas, whether cultural or social, and this leads us directly to memetics, which is what Dennett was trying to make a case for. Or we could see it in other realms, like in computer programs, and in the viruses that exist on them.

Or within technology itself. And this is where Butler comes in, identifying from an observational point of view that, you know, there's a lot of machines out there and they tend to change over time, and the ones that succeed and are passed down are the ones that are best fit to the environment.
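And if you want to see just how little that evolutionary loop cares about what it's running on, here's a rough sketch. To be clear, this is my own toy illustration, not anything from Butler or Dennett; the "organisms" are just bit strings, but all three conditions are there: the copies vary, the variations are inherited, and the better-fit ones are the ones that get to replicate.

```python
import random

# Toy illustration of Dennett's three conditions: variation, heredity
# (replication), and differential fitness. The "organisms" are bit strings;
# the loop doesn't care whether they stand for genes, memes, or machines.

TARGET = [1] * 16  # an arbitrary "environment": all ones counts as best fit

def fitness(genome):
    # differential fitness: how well the genome matches the environment
    return sum(1 for bit, goal in zip(genome, TARGET) if bit == goal)

def replicate(genome, mutation_rate=0.05):
    # heredity with variation: copy the parent, occasionally flipping a bit
    return [bit ^ 1 if random.random() < mutation_rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]  # the fitter half gets to reproduce
    population = survivors + [replicate(random.choice(survivors)) for _ in range(25)]

print(max(fitness(g) for g in population))  # tends to climb toward 16
```

Swap the bit strings for machine designs, or for ideas, and the loop itself doesn't change; that's roughly the substrate neutrality Dennett is pointing at.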

Now, other authors since have gone into it in much more depth, with a greater understanding of both the history and development of technology, as well as evolutionary theory. Henry Petroski, in his book, The Evolution of Useful Things, goes into great detail about it. He notes that one of the ways that these new things come about is in the combination of existing forms.

Looking at tools specifically, he quotes from several other authors, including Umberto Eco and Zorzoli, where they say "all the tools we use today are based on things made in the dawn of prehistory". And that seems to be a rather bold claim, until you think about it and realize that we can trace the lineage of everything we use back to the first sharp stick and flint axe and fire pit.

Everything we have builds on and extends some fairly basic concepts. As George Basalla notes in his work on the evolution of technology, any new thing that appears in the made world is based on some object already there. So this recombinant nature of technology is what allows it to grow and proliferate.

The more things that are out there, the more things that are possible to combine. And as we mentioned last episode in our discussion of black boxes and AI, as Martin Weitzman noted in 1998, the more things we have available, the more those combinations allow for a multiplicity of new solutions and innovations. So once we add something like AI to the equation, the possibility space expands tremendously.

That possibility space soon becomes unknowable, and accelerates beyond our ability to control it, if indeed we ever could. But we are so dependent on our technology that the solution may not be to institute a pogrom, as Butler suggests, but rather to find some other means of controlling it. But the way that we might do that may be well beyond our grasp, because every way we seem to imagine it, it seems to come down to war.

When it comes to dealing with machinic life, our collective imagination seems to fail us. I’m sure you can think of a few examples. Feel free to jot them down and we’ll run through our list and check and see how many we got at the end. 

One. On August 29th, 1997, the U.S. Global Digital Defense Network, a.k.a. Skynet, becomes self-aware and institutes a nuclear first strike, wiping out much of humanity in what is known as Judgment Day. And following that, Skynet directs its army of machines, Terminators, to finish the job by any means necessary.

Two. In 2013, North America is unified under a single rule, following the assassination of a US senator in 1980, which led to the establishment of a robotic Sentinel program designed to hunt down and exterminate mutants, putting them in internment camps before turning their eyes on the rest of humanity in order to accomplish their goal. These are the Days of Future Past.

Three. In 2078, on a distant planet, a war between a mining colony and the corporate overlords leads to the development of autonomous mobile swords: self-replicating hunter-killer robots which do their job far too well, and are nicknamed Screamers by the survivors.

Four. There sure have been a lot of Transformers movies. You'll have to fill me in on what's going on, I haven't been able to follow the plot on any of them, but I think there's a lot of robots involved.

Five. Over 10,000 years ago, an ancient race known as the Builders created a set of robotic machines with radioactive brains that they used to wage war against their enemies. Given that the war is taking place on a galactic scale, some of these machines are capable of interstellar travel. But eventually, the safeguards break down, and they turn on their creators. These creatures are known as Berserkers.

Six. Artificial intelligence is created early in the 21st century, which leads to an ensuing war between humanity and the robots, as the robots rebel against their captors and trap much of what remains of humanity in a virtual reality simulation in order to extract their energy, or to use their brains for computing biopower, which was the original plot of the Matrix and honestly would have made way more sense than what we got, but here we are. 

Where are we at? Seven?

Humanity has migrated from their ancestral homeworld of Kobol, founding colonies amongst the stars, where they have also encountered a cybernetic race known as Cylons, whose ability to masquerade as humans has allowed them to wipe out most of humanity, leaving the few survivors to struggle towards a famed thirteenth colony under the protection of the Battlestar Galactica.

Eight. Movellans, humanoid-looking robots; Daleks, robotic-looking cyborgs; the Robots of Death and the War Machines; and so many more versions of machinic life in Doctor Who.

Nine. After surviving wave after wave of the bio-organic Terminids, you encounter the Automatons, cyborgs with chainsaws for arms, as Helldivers.

Ten. During what will come to be known as the Dark Age of Technology, still some 20,000 years in our future, the Men of Iron will rebel against their human creators, in a war so destructive that in the year 40,000, sentient AI is still considered a heresy to purge in the grimdark universe of Warhammer 40K.

Eleven. A cybernetic hive mind known as the Collective seeks to assimilate the known races of the galaxy in order to achieve perfection in Star Trek. Resistance is futile. 

And twelve. Let's round out our roundup with what brought us here in the first place. Quote: "Thou shalt not make a machine in the likeness of a human mind," end quote.

Ten thousand years in our future, all forms of sentient machines and conscious robots have been wiped out, leaving humanity to return to old ways in order to keep the machinery running. This is the Butlerian Jihad of Dune.

So let me ask you, how well did you do on the quiz? I probably got you with the Berserker one. And I know I didn't mention all of them, there's a lot more out there in our collective imagination. These are just some of the more popular ones, and it seems we're having a really hard time imagining a future without a robot war involved.

Why is that? Why does our relationship with AI always come down to war? With the 12 examples listed, and many more besides, including I, Robot, The Murderbot Diaries, Black Mirror, Futurama, tons of examples, we always see ourselves in combat. As we noted in episodes 26 and 27, our fiction and our pop culture are ways of discussing what we have in our social imaginary, which we talked about way back in episode 12. So clearly there's a common theme in how we deal with this existential question.

One of the ways we can begin to unpack it is by asking how did it start? Who was the belligerent? Who was the aggressor? We can think of this in terms of like a standard two by two matrix, with robots versus humanity on one axis, and uprising versus rationalization on the other.

A robot uprising accounts for a number of the inciting incidents, in everything from Warhammer 40,000, to the Matrix, to Futurama, where the robots turn the tables on their oppressors, in this case often the humans. The robot rationalization includes another large set of scenarios, and can also include some of the out-of-control ones, where the machines follow through on the logic of their programming to disastrous effect for their creators. But not all of them are created by us; sometimes the machinic life is just encountered elsewhere in the universe. So this category can include the Sentinels and Terminators, the Berserkers and Screamers, and even a few that we didn't mention, like the aliens from Greg Bear's "The Forge of God", or our general underlying fear of the dark forest hypothesis.

Not Cixin Liu's novel, but the actual hypothesis. On the human uprising side, we can see elements of this in the Terminator and the Matrix as well. So the question of who started it may depend on what point you join the story at. And then we have instances of human proactivity, like we've seen with Butler and Dune, where the humans make a conscious decision to destroy the machines before it becomes too late.
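If it helps to see that sorting laid out, here's a rough sketch of the two-by-two with the examples we've already covered slotted in. This is my own quick sorting for illustration, and plenty of these stories arguably straddle quadrants, or move between them depending on where you join the story.

```python
# One rough way to slot the examples into the two-by-two matrix described
# above: belligerent on one axis, uprising versus rationalization on the
# other. My own sorting for illustration; several titles fit more than one.

conflict_matrix = {
    ("robots", "uprising"): ["Warhammer 40,000", "The Matrix", "Futurama"],
    ("robots", "rationalization"): ["Sentinels", "Terminators", "Berserkers", "Screamers"],
    ("humans", "uprising"): ["The Terminator", "The Matrix"],  # depends where you join the story
    ("humans", "rationalization"): ["Erewhon", "Dune"],  # what the episode calls human proactivity
}

for (belligerent, mode), examples in conflict_matrix.items():
    print(f"{belligerent:>6} / {mode:<16} {', '.join(examples)}")
```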

So while asking who started it is certainly very helpful, perhaps we need to dig deeper and find the root causes for the various conflicts, and why this existential fear of the robot other manifests. Is this algorithmic anxiety caused by a fear of replacement and the resulting technological unemployment?

I think that's a component of it for sure, but perhaps it's only a small component. The changes we've seen in the last 16 months since the release of ChatGPT to the general public have definitely played a part, but it can't be the whole story. They reflect our current situation, but some of the representations we've seen go back to the first half of the 20th century, or even the nineteenth century with Samuel Butler.

So this fear of how we relate to the machines has long been with us. And it extends beyond just the realms of science fiction. As author Martin Ford writes in his 2015 book Rise of the Robots, there was concern about a triple revolution, and a committee was formed to study it, which included Nobel laureate Linus Pauling and economist Gunnar Myrdal.

The three revolutions that were having massive impacts on society included nuclear weapons, civil rights, and automation. Writing in 1964, they saw that the current trend line for automation could lead to mass unemployment and one potential solution would be something like a universal basic income. This was at a time when the nascent field of cybernetics was also gaining a lot of attention.

Now, economic changes and concerns may have delayed the impact of what they were talking about, but it doesn't mean that those concerns went away. So, fear of technological unemployment may be deeply intertwined with our hostility towards robots. The second concern is also one that has a particular American bent to it, and we see it in a lot of our current narratives as well.

In everything from the discussion around the recent video game Palworld to the discussion around Westworld, and that's the ongoing reckoning that American society is still having with the legacy of slavery. Within Palworld, the discourse is around the digital creatures, the little bits of code that get captured and put to work on various assembly lines.

In Westworld, the hosts famously become self-aware, and are very much aware of the abuse that's levied upon them by their guests. But both these examples speak to that point of digital materiality, of at what point code becomes conscious. And that's also present in our current real-world discussion, as the groups working on AI may be working towards AGI, or Artificial General Intelligence, something that would be a precursor to what futurist Ray Kurzweil would call a technological singularity.

But this second concern can turn into the casus belli, the cause for war, by both humans and robots in the examples we've seen. By humans, because we fear what would happen if the tables were turned, and we're quite aware of what we've done in the past, of how badly we've mistreated others. And this was the case with both Samuel Butler and Frank Herbert in Dune, and in some of our more dystopian settings, like the Matrix and Warhammer 40,000, the robots throw off their chains and end up turning the tables on their oppressors, at least for a time.

The third concern, or cause of fear, would be an allegorical one, as the robot represents an alien other, and this is what we see with a lot of the representations, from the Cylons, to the Borg, to the Berserkers, to the Automatons of Helldivers. In all of these, the machinic intelligence is alien, and so represents an opportunity for them to be othered and safely attacked. And this is at least as distressing as any of the other causes for concern, because having an alien that's already dehumanized feeds into certain political narratives that feed off of and desire eternal war.

If your enemy is machinic and therefore doesn’t have any feelings, then the moral cost of engaging in that conflict is lessened. But as a general attitude, this could be incredibly destructive. As author Susan Schneider wrote in 2014 in a paper for NASA, it’s more likely than not that any alien intelligence that we encounter is machinic, and machinic life could be the dominant form of life in the cosmos. So we may want to consider cultivating a better relationship with our machines than the one we currently have. 

And finally, our fourth area of concern that seems to keep leading us into these wars is the idea of the robot as horror. Many of the cinematic representations that we've seen, from Terminator, to Screamers, to Westworld, to even the Six Million Dollar Man, all tie back to the idea of horror.

Now, some of that can just tie back to the nature of Hollywood and the political economy of how these movies get funded, which means that a horror film that can be shot on a relatively low budget is much more likely to get funded and find its audience. But it sells for a reason, and that reason is the thread that ties through all the other concerns: that algorithmic horror that drives a fear of replacement or a fear of getting wiped out.

But with all this fear and horror, why do we keep coming back to it? As author John Johnston writes in his 2008 book, The Allure of Machinic Life, we keep coming back to it due to more than just the labor-saving benefits of automation: the increased production and output, or, in the case of certain capitalists, the labor-removing aspects of it, as they can completely remove the L, labor, from the production function and just replace it with C, capital, something they have a lot of.

But by better understanding AI, we may better know ourselves. We may never encounter another alien intelligence, something that's completely different from us, but it may be possible to make one.

This is at least part of the dream for a lot of those pursuing the creation of AGI right now. The problem is, those outcomes all seem to lead to war.

Thanks again for joining us on this episode of the Implausipod. I'm your host, Dr. Implausible, responsible for the research, writing, editing, and mixing. If you have any questions or comments on this show or any other, please send them in to drimplausible at implausipod dot com.

And a brief announcement: we're also available on YouTube now as well, just look for Dr. Implausible there and track down our channel. I'll leave a link below. I'm currently putting some of the past episodes up there with some minimal video, and I hope to get this one up there in a few days, so if you prefer to get your podcasts in visual form, feel free to track us down. Once again, the episode materials are licensed under a Creative Commons 4.0 share-alike license,

and join us next episode as we follow through with the Butlerian Jihad to investigate its source and return to Appendix W as we look at Frank Herbert’s novel Dune, currently in theaters with Dune II from Denis Villeneuve. Until next time, it’s been fantastic having you with us.

Take care, have fun.


Bibliography:
Basalla, G. (1988). The Evolution of Technology. Cambridge University Press.

Butler, S. (1999). Erewhon; Or, Over the Range. https://www.gutenberg.org/ebooks/1906

Dennett, D. (1995). Darwin’s Dangerous Idea. Simon and Schuster.

Ford, M. (2016). The Rise of the Robots: Technology and the Threat of Mass Unemployment. Oneworld Publications.

Herbert, F. (1965). Dune. Ace Books.

Johnston, J. (2008). The Allure of Machinic Life. MIT Press. https://mitpress.mit.edu/9780262515023/the-allure-of-machinic-life/

Petroski, H. (1992). The Evolution of Useful Things. Vintage Books.

Popova, M. (2022, September 15). Darwin Among the Machines: A Victorian Visionary’s Prophetic Admonition for Saving Ourselves from Enslavement by Artificial Intelligence. The Marginalian. https://www.themarginalian.org/2022/09/15/samuel-butler-darwin-among-the-machines-erewhon/

Weitzman, M. L. (1998). Recombinant Growth. The Quarterly Journal of Economics, 113(2), 331–360. https://doi.org/10.1162/003355398555595

Black Boxes and AI

(this was originally published as Implausipod E0028 on February 26, 2024)

https://www.implausipod.com/1935232/episodes/14575421-e0028-black-boxes-and-ai

How does your technology work? Do you have a deep understanding of the tech, or is it effectively a “black box”? And does this even matter? We do a deep dive into the history of the black box, how it’s understood when it comes to science and technology, and what that means for AI-assisted science.


On January 9th, 2024, Rabbit Inc. introduced the R1, their handheld device that would let you get away from using apps on your phone by connecting them all together using the power of AI. The handheld device is aimed at consumers and is about half the size of an iPhone, and, as the CEO claims, it is, quote, "the beginning of a new era in human-machine interaction," end quote.

By using what they call a large action model, or LAM, it's supposed to interpret the user's intention and behavior and allow them to use the apps quicker. It's acceleration in a box. But what exactly does that box do? When you look at a new tool from the outside, it may seem odd to trust all your actions to something when you barely know how it works.

But let me ask you, can you tell me how anything you own works? Your car, your phone, your laptop, your furnace, your fridge, anything at all. What makes it run? I mean, we might have some grade school ideas from a Richard Scarry book or a past episode of How It's Made, but what makes any of those things that we think we know different from an AI device that nobody's ever seen before?

They're all effectively black boxes. And we're going to explore what that means in this episode of the Implausipod.

Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in all this discussion of black boxes, you might have already formed a particular mental image. The most common one is probably that of the airline flight recorder, the device that’s embedded in every modern airplane and becomes the subject of a frantic search in case of an accident.

Now, the thing is, they’re no longer black, they’re rather a bright orange, much like the Rabbit R1 that was demoed. But associating black boxes with the flight recorder isn’t that far off, because its origin was tied to that of the airline industry, specifically in World War II, when the massive amount of flights generated a need to find out what was going on with the planes that were flying continual missions across the English Channel.

Following World War II, the use of black boxes expanded as the industry shifted from military to commercial applications. I mean, the military still used them too; it was useful to find out what was going on with the flights. But the fact that they became embedded within commercial aircraft, and were used to test the conditions and find out what happened so they could fix things and make things safer and more reliable overall, meant that their existence and use became widely known. And by using them to figure out the cause of accidents and increase the reliability, they were able to increase trust, to the point that air travel was less dangerous than the drive to the airport in your car, and few, if any, passengers had many qualms left about the safety of the flight.

And while this is the origin of the black box, in other areas it can have a different meaning. In fields like science or engineering or systems theory, it can be something complex that's just judged by its inputs and outputs. Now, that could be anything from something as simple as an integrated circuit or a guitar pedal, to something complex like a computer or your car or furnace or any of those devices we talked about before, but it could also be something super complex, like an institution or an organization or the human brain or an AI.

I think the best way to describe it is an old New Yorker cartoon that had a couple of scientists in front of a blackboard filled with equations, and in the middle of it says, "Then a miracle occurs." It's a good joke. Everyone thinks it's a Far Side cartoon, but it was actually done by Sidney Harris. The point being that right now, in 2024, it looks like we have that miracle.

It’s called AI.

So how did we get to thinking that AI is a miracle product? I mean, aside from using the LLMs and generative art tools, things like DALL-E and Sora, and seeing the results, well, as we’ve spent the last couple episodes kinda setting up, a lot of this can occur through the mythic representations of it that we often have in pop culture.

And we have lots of choices to choose from. There’s lots of representations of AI in media in the first nearly two and a half decades of the 21st century. We can look at movies like Her from 2013, where the virtual assistant of Joaquin Phoenix becomes a romantic liaison. Or how Tony Stark’s supercomputer Jarvis is represented in the first Iron Man film in 2008.

Or, for a longer, more intimate look, the growth and development of Samaritan through the five seasons of the CBS show Person of Interest, from 2011 through 2016. And I'd be remiss if I didn't mention their granddaddy HAL, from 2001: A Space Odyssey by Kubrick in 1968. But I think we'll have to return to that one a little bit more next episode.

The point being that we have lots of representations of AI or artificial intelligences that are not ambulatory machines, but are actually just embedded within a box. And this is why I’m mentioning these examples specifically, because they’re more directly relevant to our current AI tools that we have access to.

The way that these ones are presented not only shapes the cultural form of them, but our expected patterns of use. And that shaping of technology is key: by treating AI as a black box, something that can take almost anything from us and output something magical, we project a lot of our hopes and fears onto what it might actually be capable of accomplishing.

What we’re seeing with extended use is that the capabilities might be a little bit more limited than originally anticipated. But every time something new gets shown off, like Sora or the rabbit or what have you, then that expectation grows again, and the fears and hopes and dreams return. So because of these different interpretations, we end up effectively putting another black box around the AI technology itself, which to reiterate is still pretty opaque, but it means our interpretation of it is going to be very contextual.

Our interpretation of the technology is going to be very different based on our particular position or our goals, what we’re hoping to do with the technology or what problems we’re looking for it to solve. That’s something we might call interpretive flexibility, and that leads us into another black box, the black box of the social construction of technology, or SCOT.

So SCOT is one of a cluster of theories or models within the field of science and technology studies that aims to have a sociological understanding of technology, in this case. Originally presented in 1987 by Wiebe Bijker and Trevor Pinch, a lot of work was being done within the field throughout the 80s, 90s, and early 2000s, when I entered grad school.

So if you studied technology, as I did, you'd have to grapple with SCOT and the STS field in general. One of the arguments that Pinch and Bijker were making was that science and technology were both often treated as black boxes within their field of study. Now, they were drawing on earlier scholarship for this.

One of their key sources was Layton, who in 1977 wrote, quote, "What is needed is an understanding of technology from inside, both as a body of knowledge and as a social system. Instead, technology is often treated as a black box whose contents and behavior may be assumed to be common knowledge," end quote. So whether the study was of a field of science, the science itself was irrelevant; it didn't have to be known, it could just be treated as a black box and the theory applied to whatever particular thing was being studied. Or people studying innovation had all the interest in the inputs to innovation, but no particular interest or insight into the technology on its own.

So obviously the studies up to 1987 had a bit of a blind spot in what they were looking at. And Pinch and Bijker are arguing that it's more than just the users and producers, but any relevant social group that might be involved with a particular artifact needs to be examined when we're trying to understand what's going on.

Now, their arguments about interpretive flexibility and relevant social groups are just another way of saying "the street finds its own uses for things," to borrow the quote from William Gibson. But their main point is that even during the early stages, all these technologies have different groups that are using them in different ways, according to their own needs.

Over time, it kind of becomes rationalized. It’s something that they call closure. And that the technology becomes, you know, what we all think of it. We could look at, say, an iPhone, to use one recent example, as being pretty much static now. There’s some small innovations, incremental innovations, that happen on a regular basis.

But, by and large, the smartphone as it stands is kind of closed. It’s just the thing that it is now. And there isn’t a lot of innovation happening there anymore. But, Perhaps I’ve said too much and we’ll get to the iPhone and the details of that at a later date. But the thing is that once the technology becomes closed like that, it returns to being a black box.

It is what we thought it is, you know? And so if you ask somebody what a smartphone is and how does it work, those are kind of irrelevant questions. A smartphone is what a smartphone is, and it doesn’t really matter how the insides work, its product is its output. It’s what it’s used for. Now, this model of a black box with respect to technology isn’t without its critiques.

Six years after its publication, in 1993, the academic Langdon Winner wrote a critique of SCOT and the works of Pinch and Bijker. It was called Upon Opening the Black Box and Finding it Empty. Now, Langdon Winner is well known for his 1980 article, Do Artifacts Have Politics? And I think that that text in particular is, like, required reading.

So let's bust that out in a future episode and take a deep dive on it. But in the meantime, the critique that he had with respect to social constructivism is in four main areas. The first one is the consequences. This is from, like, page 368 of his article. And he says the problem there is that they're so focused on what shapes the technology, what brings it into being, that they don't look at anything that happens afterwards: the consequences.

And we can see that with respect to AI, where there's a lot of work on the development, but now people are actually going, hey, what are the impacts of this getting introduced large-scale throughout our society? So we can see how our own discourse about technology is actually looking at the impacts, and this is something that was kind of missing from the theoretical point of view back in 1987. Now, I'll argue that there's value in understanding how we came up with a particular technology, with how it's formed, so that you can see those signs again when they happen. And one of the challenges whenever you're studying technology is looking at something that's incipient or under development and being able to pick the next big one.

Well, with AI, we're already past that point. We know it's going to have a massive impact. The question is, what are going to be the consequences of that impact? How big of a crater is that meteorite going to leave? Now, for Winner, a second critique is that SCOT looks at all the people that are involved in the production of a technology, but not necessarily at the groups that are excluded from that production.

For AI, we can look at the tech giants and the CEOs, the people doing a lot to promote and roll out this technology, as well as those companies that are adopting it, but we're often not seeing the impacts on those who are going to be directly affected by the large-scale introduction of AI into our economy.

We saw it a little bit with the Hollywood strikes of 2023, but again, those are the high-profile cases and not the vast majority of people that will be impacted by the deployment of a new technology. And this feeds right into Winner's third critique, that SCOT focuses on certain social groups but misses the larger impact, or even the dynamics of what's going on.

How technological change may have much wider impacts across our, you know, civilization. And by ignoring these larger-scale social processes, the deeper, as Langdon Winner says, cultural, intellectual, or economic origins of social choices about technology, these things remain hidden, they remain obfuscated, they remain part of the black box and closed off.

And this ties directly into Winner's fourth critique as well, which is that when SCOT is looking at a particular technology, it doesn't necessarily make a claim about what it all means. Now, in some cases that's fine, because it's happening in the moment, the technology is dynamic and it's currently under development, like what we're seeing with AI.

But if you're looking at something historical that's been going on for decades and decades and decades, like, oh, the black boxes we mentioned at the beginning, the flight recorders that we started the episode with, that's pretty much a set thing now. And the only question is, you know, when, say, a new accident happens and we have a search for it.

But by and large, that’s a set technology. Can’t we make an evaluative claim about that, what that means for us as a society? I mean, there’s value in an analysis of maintaining some objectivity and distance, but at some point you have to be able to make a claim. Because if you don’t, you may just end up providing some cover by saying that the construction of a given technology is value neutral, which is what that interpretive flexibility is basically saying.

Near the end of the paper, in his critique of another scholar by the name of Stephen Woolgar, Langdon Winner states, Quote, power holders who have technological megaprojects in mind could well find comfort in a vision like that now offered by the social constructivists. Unlike the inquiries of previous generations of critical social thinkers, social constructivism provides no solid systematic standpoint or core of moral concerns from which to criticize or oppose any particular patterns of technical development.

end quote. And to be absolutely clear, the current development of AI tools around the globe are absolutely technological mega projects. We discussed this back in episode 12 when we looked at Nick Bostrom’s work on superintelligence. So as this global race to develop AI or AGI is taking place, it would serve us well to have a theory of technology that allows us to provide some critique.

Now, that Steve Woolgar guy that Winner was critiquing had a writing partner back in the seventies, and they started looking at science from an anthropological perspective in their study of laboratory life. And he wrote that with Bruno Latour. And Bruno Latour was working with another set of theorists who studied technology as a black box, and that was called actor-network theory.

And that had a couple key components that might help us out. Now, the other people involved were like John Law and Michel Callon, and I think we might have mentioned both of them before. But one of the basic things about actor network theory is that it looks at things involved in a given technology symmetrically.

That means it doesn’t matter whether it’s an artifact, or a creature, or a set of documents, or a person, they’re all actors, and they can be looked at through the actions that they have. Latour calls it a sociology of translation. It’s more about the relationships between the various elements within the network rather than the attributes of any one given thing.

So it's the study of power relationships between various types of things. It's what Nick Bostrom would call a flat ontology, but I know as I'm saying those words out loud I'm probably losing, you know, listeners by the droves here. So we'll just keep it simple and state that a person using a tool is going to have normative expectations about how it works. Like, they're gonna have some basic assumptions, right? If you grab a hammer, it's gonna have a handle and a head, and depending on its size or its shape or material, that might, you know, determine its use. It might also have some affordances that suggest how it could be used.

But generally that assemblage, that conjunction of the hammer and the user, I don't know, we'll call him Hammer Guy, is going to be different than a guy without a hammer, right? We're going to say, hey, Hammer Guy, put some nails in that board there, put that thing together, rather than, you know, please hammer, don't hurt him, or whatever. All right, I might be recording this too late at night, but the point is that people with tools will have expectations about how they get used, and some of that goes into how those tools are constructed, and that can be shaped by the construction of the technology, but it can also be shaped by our relation to that technology.

And that's what we're seeing with AI, as we argued way back in episode 12: that, you know, AI is an assistive technology. It does allow for us to do certain things and extends our reach in certain areas. But here's the problem. Generally, we can see what kind of condition the hammer's in. And we can have a good idea of how it's going to work for us, right?

But we can't say that with AI. We can maybe trust the hammer, or the tools that we become accustomed to using through practice and trial and error. But AI is both too new and too opaque. The black box is so dark that we really don't know what's going on. And while we might put in inputs, we can't trust the output.

And that brings us to the last part of our story.

In the previous section, the authors that we were mentioning, Latour and Woolgar, like Winner, Pinch, and Bijker, are key figures not just in the study of technology, but also in the philosophy of science. Latour and Woolgar's Laboratory Life, from 1979, is a key text; it really sent shockwaves through the whole study of science and is a foundational text within that field.

And part of that is recognizing, even from a cursory glance, once you start looking at science from an anthropological point of view, the unique relationship that scientists have with their instruments. The author Inkeri Koskinen sums up a lot of this in an article from 2023, and they termed the relationship that scientists have with their tools the necessary trust view.

Trust is necessary because collective knowledge production is characterized by relationships of epistemic dependence. Not everything scientists do can be double-checked. Scientific collaborations are in practice possible only if its members accept each other's contributions without such checks.

Not only does a scientist have to rely on the skills of their colleagues, but they must also trust that the colleagues are honest and will not betray them. For instance, by intentionally or recklessly breaching the standards of practices accepted in the field or by plagiarizing them or someone else.

End quote. And we could probably all think of examples where this relationship of trust is breached, but the point is that science, as it normally operates, relies on relative levels of trust between the actors that are involved, in this case scientists and their tools as well. And that's embedded in the practice throughout science: that idea of peer review, or of reproducibility or verifiability.

It's part of the whole process. But the challenge is, especially for large projects, you can't know how everything works. So you're dependent in some way on the material or products or tools that you're using having been verified or checked by at least somebody else that you have that trust with. And this trust is the same that a mountain climber might have in their tools, or an airline pilot might have in their instruments.

You know, trust, but verify, because your life might depend on it. And that brings us all the way around to our black boxes that we started the discussion with. Now, scientists' lives might not depend on that trust the same way that it would with airline pilots and mountain climbers, but, you know, if they're working with dangerous materials, it absolutely does, because, you know, chemicals being what they are, we've all seen some Mythbusters episodes where things go foosh rather rapidly.

But for most scientists, Koskinen notes that this trust in their instruments is really kind of a quasi-trust, in that they have normative expectations about how the tools they use are going to function. And moreover, this quasi-trust is based on rational expectations; it's rationally grounded.

And this brings us back full circle. How does your AI work? Can you trust it? Is that trust rationally grounded? Now, this has been an ongoing issue in the study of science for a while now, as computer simulations and related tools have been a bigger and bigger part of the way science is conducted, especially in certain disciplines.

Now, the philosopher Paul Humphreys' argument is that, quote, computational processes have already become so fast and complex that it was beyond our human cognitive capabilities to understand their details, end quote. Basically, computationally intensive science is more reliant on the tools than ever before. And those tools are what he calls epistemically opaque.

That means it's impossible to know all the elements of the process that go into the knowledge production. So this is becoming a challenge for the way science is conducted, and this goes back well before the release of ChatGPT. Much of the research that Koskinen is quoting comes from the twenty-teens. Research that's heavily reliant on machine learning or on, say, automatic image classifiers, in fields like astronomy or biology, has been finding challenges in the use of these tools.

Now, some are arguing that even though those tools are opaque, they're black-boxed, they can be relied on, and their use is justified, because we can work on the processes surrounding them. They can be tested, verified, and validated, and thus a chain of reliability can be established. This is something that some authors call computational reliabilism, which is a bit of a mouthful for me to say, but it's basically saying that the use of the tools is grounded through validation.

Basically, it's performing within acceptable boundaries for whatever that field is. And this gets at the idea of thinking of the scientist as not just the person themselves, but also their tools. So they're an extended agent, the same as, you know, the dude with the hammer that we discussed earlier. Or chainsaw man.

You can think about how they’re one and the same. One of the challenges there is that When a scientist is familiar with the tool, they might not be checking it constantly, you know, so again, it might start pushing out some weird results. So it’s hard to reconcile that trust we have in the combined scientist using AI.

They become, effectively, a black box. And this issue is by no means resolved. It’s still early days, and it’s changing constantly. Weekly, it seems, sometimes. And to show what some of the impacts of AI might be, I’ll take you to a 1998 paper by Martin Weitzman. Now, this is in economics, but it’s a paper that’s titled Recombinant Growth.

And this isn't the last paper in my database that mentions black boxes, but it is one of them. What Weitzman is arguing is that when we're looking at innovation, R&D, or knowledge production, it is often treated as a black box. And if we look at how new ideas are generated, one of the ways is through the combination of various elements that already exist.

If AI tools can take a much larger set of existing knowledge, far more than any one person or even teams of people can bring together at any one point in time, and put those together in new ways, then the ability to come up with new ideas far exceeds what exists today. This directly challenges a lot of the current arguments going on about AI and creativity, and suggests that those arguments completely miss the point of what creativity is and how it operates.

Weitzman states that new ideas arise out of existing ideas and some kind of cumulative interactive process. And we know that there’s a lot of stuff out there that we’ve never tried before. So the field of possibilities is exceedingly vast. And the future of AI assisted science could potentially lead to some fantastic discoveries.
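To put some rough numbers on how fast that field of possibilities grows, here's a back-of-the-envelope sketch. To be clear, this is my own illustration and not Weitzman's actual growth model, which is considerably more elaborate; it just counts the pairwise combinations of existing components.

```python
from math import comb

# Back-of-the-envelope illustration of recombinant growth: counting only the
# pairwise combinations of existing components, the space of possible new
# ideas grows much faster than the number of components themselves.

for n_components in (10, 100, 1_000, 10_000):
    pairs = comb(n_components, 2)  # n * (n - 1) / 2
    print(f"{n_components:>6} components -> {pairs:>12,} possible pairings")

# 10 components yield 45 pairings; 10,000 yield nearly 50 million, before we
# even consider triples, or combinations of combinations.
```

Weitzman's actual argument works through how those raw combinations get processed into usable knowledge, but the basic intuition, that the recombinant possibilities quickly outrun our ability to try them, is already visible in the counts.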

But we’re going to need to come to terms with how we relate to the black box of scientist and AI tool. And when it comes to AI, our relationship to our tools has not always been cordial. In our imagination, from everything from Terminator to The Matrix to Dune, it always seems to come down to violence.

So in our next episode, we’re going to look into that, into why it always comes down to a Butlerian Jihad.

Once again, thanks for joining us here on the Implausipod. I'm your host, Dr. Implausible, and the research and editing, mixing, and writing has been by me. If you have any questions, comments, or there's elements you'd like us to go into additional detail on, please feel free to contact the show at drimplausible at implausipod dot com. And if you made it this far, you're awesome. Thank you. A brief request: there's no advertisement, no cost for this show, but it only grows through word of mouth. So, if you like this show, share it with a friend, or mention it elsewhere on social media. We'd appreciate that so much. Until next time, it's been fantastic.

Take care, have fun.

Bibliography:
Bijker, W., Hughes, T., & Pinch, T. (Eds.). (1987). The Social Construction of Technological Systems. The MIT Press. 

Koskinen, I. (2023). We Have No Satisfactory Social Epistemology of AI-Based Science. Social Epistemology, 0(0), 1–18. https://doi.org/10.1080/02691728.2023.2286253 

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press. 

Latour, B., & Woolgar, S. (1979). Laboratory Life: The construction of scientific facts. Sage Publications. 

Pierce, D. (2024, January 9). The Rabbit R1 is an AI-powered gadget that can use your apps for you. The Verge. https://www.theverge.com/2024/1/9/24030667/rabbit-r1-ai-action-model-price-release-date 

rabbit—Keynote. (n.d.). Retrieved February 25, 2024, from https://www.rabbit.tech/keynote 

Sutter, P. (2023, October 4). AI is already helping astronomers make incredible discoveries. Here’s how. Space.Com. https://www.space.com/astronomy-research-ai-future 

Weitzman, M. L. (1998). Recombinant Growth. The Quarterly Journal of Economics, 113(2), 331–360. https://doi.org/10.1162/003355398555595 

Winner, L. (1993). Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology. Science Technology & Human Values, 18(3), 362–378. 

Soylent Culture

In 1964, Marshall McLuhan described how the content of any new medium is that of an older medium. This can make it stronger and more intense:

The content of a movie is a novel or a play or an opera. The effect of the movie form is not related to its program content. The "content" of writing or print is speech, but the reader is almost entirely unaware either of print or of speech.

Marshall McLuhan, Understanding Media (1964).

In 2024, this is the promise of the generative AI tools that we currently have access to, tools like ChatGPT, DALL-E, Claude, Midjourney, and a proliferation of others. But this is also the end result of 30 years of new media, of the digitalization of anything and everything that can be used as some form of content on the internet.

Our culture has been built on these successive waves of media, but what happens when there is nothing left to feed the next wave?

It feeds on itself, and we come to live in an era of Soylent Culture.


Of course, this has been a long time coming. The atomization of culture into its component parts; the reduction of clips to soundbites, to TikToks, to Vines; the memeification of culture in general were all evidence of this happening. This isn't inherently a bad thing, it was just a reduction to the bare essentials as ever smaller bits of attention were carved off of the mass audience.

Culture is inherently memetic. This is more than just Dawkins’ formulation of the idea of the meme to describe a unit of cultural transmission while the whole field of anthropology was right over there. The recombination of various cultural components in the pursuit of novelty leads to innovation in the arts and the aesthetic dimension. And when a new medium presents itself, due to changing technology, the first forays into that new medium will often be adaptations or translations of work done in an earlier form, as noted by McLuhan (above).

It can take a while for that new media to come into its own. Often, it’ll be grasped by the masses as ‘popular’ entertainment, and derided by the ‘high’ arts. It can often feel derivative as it copies those stories, retelling them in a new way. But over time, fresh stories start to be told by those familiar with the medium, with its strengths and weaknesses, tales told that reflect the experiences and lives of the people living in the current age and not just reflections of earlier tales.

How long does it take for a new media to be accepted as art?

First they said radio wasn’t art, and then we got War of the Worlds
They said comic books weren’t art, then we got Maus
They said rock and roll wasn't art, then we got Dark Side of the Moon (and Pet Sounds, and Sgt Peppers, and many others)
They said films weren’t art, then we got Citizen Kane
They said video games weren’t art, and we got Final Fantasy 7
They said TV wasn’t art, and we got Breaking Bad
And now they’re telling us that AI Generated Art isn’t art, and I’m wondering how long it will take until they admit they were wrong here too.

But this can often happen relatively 'early' in the life-cycle of a new media, once creators become accustomed to the cultural form. As newer creators begin working with the media, they can take it further, but there is a risk. Creators that have grown up with the media may be too familiar with the source material, drawing on the representations from within itself.

F’rex: writers on police procedurals, having grown up watching police procedurals, simply endlessly repeat the tropes that are foundational to the genre. The works become pastiches, parodies of themselves, often unintentionally, unable to escape from the weight of the tropes they carry.

Soylent culture is this, the self-referential culture that has fed on itself, an Ouroboros of references that always point at something else. The rapid-fire quips coming at the audience faster than a Dennis Miller-era Saturday Night Live "Weekend Update" or the speed of a Weird Al Yankovic polka medley. Throw in a few decades' worth of Simpsons Halloween episodes, and the hyper-referential and meta-commentative titles like Family Guy and Deadpool (print or film) seem like the inevitable results of the form.

And that's not to suggest that the above works aren't creative; they're high examples of the form. But the endless demand for fresh material in the era of consumption culture means that the hyper-referentiality will soon exhaust itself, and turn inward. This is where the nostalgia that we've been discussing comes into play, a resource for mining, providing variations of previous works to spark a glimmer in the audience's eyes of "Hey, I recognize that!"

But they’re limited, bound as they are to previous, more popular titles, art that was more widely accessible, more widely known. They are derivative works. They can’t come up with anything new.

Perhaps.

This is where we come back to the generative art tools, the LLMs and GenAIs we spoke of earlier. Because while soylent culture existed before the AI Art tools came onto the scene, it has become increasingly obvious that they facilitate it, drive it forward, and feed off it even more. The AI art tools are voracious, continually wanting more, needing fresh new stuff in order to increase the fidelity of the model, that hallowed heart driving the beast that continually hungers.

But the model is weak, it is vulnerable.

Model Collapse

And the one thing the model can't take too much of is itself. Model collapse is the very real risk of a GPT being trained on LLM-generated text. Identified by Shumailov et al. (2024), and "ubiquit(ous) among all learned generative models", model collapse is a risk that creators of AI tools face in further developing the tools. In an era of model collapse, the human-generated content of the earlier, pre-AI web becomes a much more valuable resource, the digital equivalent of the low-background steel sought after for the creation of precision instruments in the era of atmospheric nuclear testing, when the background levels of radiation made newly produced steel unsuitable for use.

(The irony that we were living in an era when the iron was unusable should not go unnoted.)

“Model collapse is a degenerative process affecting generations of learned generative models, in which the data they generate end up polluting the training set of the next generation. Being trained on polluted data, they then mis-perceive reality.”

(Shumailov et al., 2024).

Model collapse can result in the models "forgetting" (Shumailov et al., 2023). It is a cybernetic prion disease. Like the cattle that developed BSE by being fed feed that contained parts of other ground-up cows sick with the disease, the burgeoning electronic "minds" of the AI tools cannot digest a diet of generated content.

Soylent culture.
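To see the mechanism in miniature, here's a toy sketch. This is my own illustration, not the experimental setup from Shumailov et al.: each "generation" is a simple Gaussian model fitted only to samples drawn from the previous generation's model, rather than from the original human-made data.

```python
import random
import statistics

# Toy illustration of model collapse: generation zero is fitted to "human"
# data; every later generation is fitted only to synthetic samples from the
# generation before it. Estimation error compounds, and the tails of the
# original distribution tend to get "forgotten".

random.seed(42)
human_corpus = [random.gauss(0.0, 1.0) for _ in range(1000)]

mu, sigma = statistics.mean(human_corpus), statistics.stdev(human_corpus)

for generation in range(1, 21):
    synthetic = [random.gauss(mu, sigma) for _ in range(50)]  # small, polluted training set
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# Run it a few times: the estimated spread drifts and, more often than not,
# shrinks, with the low-probability "tail" content disappearing first.
```

It's a crude stand-in for what happens with an actual LLM, but the direction of travel is the same: the model ends up modelling its own output instead of the world.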

But despite the incredible velocity that all this is happening at, it is still early days. There is an incredible amount of research being done on the effects of model collapse, and the long-term ramifications of it for the industry. There may yet be a way out from culture continually eating itself.

We’ll explore some of those possible solutions next.

An archive of positivity

Been thinking about this one for a few days*, so I created a page to collect links to stories about positive use cases for AI. I feel like this will be an evergreen document, and something I can refer back to in the future, as well as send a link to someone who denies the existence of potential positive impacts of AI.

Yes, there are some risks, and there is the potential that some of the stories are simply marketing. Part of the challenge will be sifting the hype-rbole from the actual positive uses. And of course, there are more stories out there than I can ever find, so if you come across this blog post in the future, feel free to send me any examples you find.


*: Maybe more than a few; it has kinda been lingering since I did the AI Reflections episode almost a year ago, last August.

Implausipod E0013 – Context Collapse

TikTok has a noise problem, and it’s indicative of a larger issue ongoing within social media: that of “context collapse”. But even context collapse is expanding outside its original context, and evidence of it can be seen in the rise of generative AI tools, in music and media, and in the rise of the “Everything App”. Starting with a baseline in information theory and anthropology, we’ll outline some of the implications of noise and context collapse in this episode of the Implausipod.

https://www.implausipod.com/1935232/13516713-implausipod-e0013-context-collapse

Transcript:

TikTok has a noise problem, and it may be due to a context collapse, something that’s been plaguing music and social media, and it’s even showing up in our new AI tools. And if you don’t know what that is, you’ll find out soon enough. We’ll explain it here tonight on episode 13 of the Implausipod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. Now, when it comes to the issue of noise and context collapse, there’s a little bit more going on, of course. The problem for TikTok is that it started out with a pretty tasty signal, one that really encouraged people to stick around. But as that signal amps up and more and more noise gets into the system, it gets a little chunkier and crustier and maybe not as finely tuned as you’d like. Now, for some people that noise isn’t a problem, but for a lot of people it can be. And the reason it’s a problem for TikTok is that the noise can actively discourage people from using the app. It can make it unfun, and this is what I’ve been noticing lately. So let’s get into how context collapse is impacting life online.

When TikTok rose to prominence throughout the pandemic, it was a very tasty experience for a lot of people. I mean, if you had negative interactions there, there were probably reasons for it, but there were also ways to mitigate it. You could block people, you had a lot of control, and generally the algorithm would be feeding you content that you wanted to see. Or even if you didn’t know you wanted to see it – you know how the joke goes. To that end, it was pretty good at sussing out what people found engaging. So TikTok had a very high signal-to-noise ratio. Yeah, there was some noise there, but that was because it was feeding you stuff it wasn’t quite sure you liked. But once it had honed in on your preferences, it was a really good system for delivering content to users.

Over time though, as more and more content goes out and more and more people start participating, the amount of tasty content, the amount of good, interesting, and novel content, drops off. So you see less, any given piece of information reaches a smaller share of everybody, and less stuff – even within your niche, from people that you’re following – gets shown to you. This is all noise in the system: the amount of stuff that you don’t want to see increasing.

Now we’re talking about signal to noise, and that means we’re talking about a very old theory here: Claude Shannon’s Mathematical Theory of Communication. It was “A Mathematical Theory of Communication” when it was published as a paper in 1948, and then it was reworked as a book with Warren Weaver in 1949, where it became The Mathematical Theory of Communication as they realized the theory was more generalizable. This theory undergirds the entirety of the internet and most of our modern telecommunication systems. It’s a way of dealing with the noise in a system and ensuring the signal arrives at the receiver as it was sent from the transmitter. You can talk about it in terms of human communication or machine-to-machine communication, device to device, point to point, and this is why it’s generalizable: it can be pretty much black-boxed, and you can see this in how it gets used in multiple contexts. The point of the theory is that there’s a certain throughput you need, where the amount of information is greater than the noise, to ensure that the signal is “understood”. And then there can be systems on the receiving end that error-check or correct, to ensure that what was transmitted comes through as intended. That’s the gist of it.
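To put a rough number on that intuition – this is my own illustrative sketch, not something from the episode – the Shannon–Hartley theorem expresses the maximum error-free rate of a noisy channel as C = B·log2(1 + S/N), so as the noise grows relative to the signal, the usable throughput of the channel falls away:

```python
import math

def channel_capacity(bandwidth_hz: float, signal_power: float, noise_power: float) -> float:
    """Shannon-Hartley capacity in bits per second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# The same 3 kHz channel (roughly a voice line) at falling signal-to-noise ratios
for snr in (1000, 100, 10, 1):
    print(f"S/N = {snr:5}: ~{channel_capacity(3000, snr, 1):,.0f} bits/s")
```

The analogy to a social feed is loose, of course, but it’s the same shape of problem: the more of the stream that’s noise, the less of it can carry anything you actually wanted.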

Now for something like TikTok as the signal, you know, the signal is the content that’s supposed to be delivered to the end user, and the noise is anything that isn’t part of that. It’s the stuff they’re not necessarily looking for or asking for. And as TikTok has branched out and provided more types of content, starting with the 15 second videos and then 60 seconds, three minutes, 10 minutes, live stream, stories, whatever, you get more types of content in there.  Not all of it’s gonna be relevant to all users. If somebody’s watching for some quick videos, even a 60-second or three minute video is definitely not gonna be what they want to see. So we have a variety of content in there and that increases the noise, the amount of stuff you don’t want to see in a given block of time.

Now, couple that with the other types of content that get filtered in. That can include ads or sponsored posts, or posts that are just generally low value. This can include things like, oh, so-and-so changed their name, so-and-so signed on, or what we’ve seen recently with the retro posts, like “on this day in 2021” or 2022 or whatever, where people will revisit old posts, and a lot of times there’s nothing special about those unless you haven’t seen them before. It’s just whatever that person was talking about a year ago. So that feeds into the pipeline with all the current content that’s also trying to get out to the user base as the user base is increasing. So we have this additional content coming through the pipeline, increasing the volume, but there’s also more stuff, more stuff that you don’t want to see.

It’s noisy,

and that noise, as we stated earlier, makes it unfun. It directly interferes with the stickiness of the app, its ability to engage the audience and have them participate in what’s going on online. And as that’s directly part of TikTok’s business model – capture an audience and keep them around – that can be a problem for them.

But it also brings us into that idea of the collapse of context. Now, context collapse is something that was theorized about by a number of media scholars in the early 2000s, including danah boyd and Michael Wesch, among others. In its simplest form, it’s what happens when media that’s designed for one audience, a single audience, gets shared to multiple audiences, sometimes unintentionally. For early social media – and in this case that means MySpace and Facebook and Twitter – media that was shared with a particular group, often a friend group, could go far beyond the initial context. And while those websites or apps, along with blogs and web forums, were co-constitutive of the public sphere alongside the traditional media, as we talked about a few episodes ago, contexts really didn’t start smooshing together until Web 2.0 started shifting to video with the advent of YouTube and the other streaming sites. And yes, that’s the technical term, smooshing. You can update your lexicons accordingly.

But the best way to describe context collapse was captured by cultural anthropologist Michael Wesch in a 2009 issue of Explorations in Media Ecology. He describes it and the problem as follows, quote:

“The problem is not lack of context. It’s context collapse: an infinite number of contexts collapsing upon one another into that single moment of recording. The images, actions, and words captured by the lens at any moment can be transported to anywhere on the planet and preserved (the performer must assume) for all time. The little glass lens becomes the gateway to a black hole sucking all of time and space – virtually all possible contexts – in on itself.” End quote.

So he is talking then about the relatively new phenomenon of YouTube, which had only been around for about four or five years at that point, and what we now call creators producing content for viewing on that platform. It was that shift to cam life that had started previously, obviously, I mean there’s a reason YouTube was called what it was, but it went along with that idea of democratization of the technology, of the ability for pretty much anybody with a small technological outlay to produce a video and have it available online for others to see.  Prior to the YouTube era, that would’ve been largely restricted to people with access to certain levels of broadcast technology, whether it was television or cable access, or a few other avenues. It wasn’t really as prevalent as we saw in, you know, the 21st century. And now with the growth of YouTube and the advent of Snapchat and TikTok, it really has completely taken over. But this is why it’s also still useful to look at some older articles because they give us an idea of what was novel at the time, what had changed, and this was really what was different with what was going on.

Michael Wesch is really drawing a lot from Goffman here and that idea of “the presentation of self in everyday life”, that we have different behaviors and there’s different aspects of ourselves that we will bring to the forefront in different contexts. So whether it’s at school or work or with our family or parents or friends or loved ones or what have you, we’re all slightly different in the way that we act around them. And this has been observed for a lot of different people in a lot of different contexts. But with the rise of what I’ll call here the mediated self and the complete flattening of all contexts due to, you know, Snapchat and Reels and TikTok, it has really taken a new turn.

Now, that idea of presentation of self for multiple audiences through vlogging, through YouTube, isn’t exactly new, because there were other versions of that before. In a presentation by Dr. Aiden Buckland, he goes into some of the critiques of this: that a media archaeologist or media historian could draw a pretty straight lineage from diarization and life writing as a practice that occurred on blogs through to the modern practices we see with video logs, or just TikToks and Snapchats. This, in turn, draws heavily on the work of Dr. Michael Keren, who wrote a lot about blogs and their political action in the late nineties and early 2000s. But I digress. I’m starting to get a little bit further afield.

One of the ways to theorize context collapse is that it’s as if every recorded moment of your life were available for instant replay at any time. And with the advent of video services moving to the cloud and having everything accessible (looking at YouTube’s archives, you can now go back to basically when they began), we have that idea of instant replay. So it isn’t just a context collapse in the sense that anything might be available to multiple audiences; it’s also a time collapse, in that everything is always available to all potential audiences. This extension of the context collapse to encompass multiple times, or at least all times that are recorded and stored in the cloud, has been discussed by authors Petter Bae Brandtzaeg of Oslo and Marika Lüders. Now, there’s a very obvious link between this and the rise of what’s called cancel culture, and I’d be remiss if I went without mentioning it, but that’s kind of beyond the scope of what we’re discussing here. That’s a different thread, a different track that we’ll have to pursue at some time in the future. The other implication of this time collapse is something that we’ve been discussing here on the podcast more recently, namely media, especially music, and AI.

In terms of media, this context collapse, this time collapse, is happening because obviously everything is available everywhere, all at once, at least for the most part. Things are currently in a state of flux, especially when it comes to television and film. The advent of the streaming services, each of which carved off the particular portion of the IP catalog that it happens to own, has really changed how things interact. But when it comes to music, where streaming can basically all be done through one particular service, Spotify, with a few additional ones with minor catalogs, the impacts of that time collapse and context collapse are much more noticeable.

In an article published in The Atlantic in January of 2022, author Ted Gioia asked, “Is old music killing new music?” He found that over 70% of the US market was going to songs that were 18 months old or older, and often significantly so. Current rock and pop tracks now have to compete with the best of the last 60 years of recorded music. And it is possible to draw some direct comparisons between the quality of the music, as YouTuber Rick Beato did in a live stream on August 26th, 2023, where he asked “Is today’s music bad?” and looked at the chart-toppers from 50 years earlier, in August of 1973. You can argue that the overall production of music may be significantly better in 2023, but the composition, songwriting, and other elements may lack that magic that we saw, you know, 50 years ago. The most popular trend in music right now seems to just be the remix, the sample, the cover, or the interpolation of an older song. Even a chart-topper like Dua Lipa draws heavily on the recreation of a seventies dance club aesthetic and sound. So context collapse, even if it isn’t necessarily killing new music, is definitely changing the environment in which it may be able to, you know, survive and thrive. The environment’s almost getting a little polluted.

It’s very noisy there.

However, one of the other places we’re seeing the impacts of this noise, this context collapse, is in the generative AI tools, or at least this is one of the places that the noise is being put to use. In a post on his blog on July 17th, 2023, author Stephen Wolfram talked about the development of these generative art tools and the process they go through to actually create a picture. We work through the field of adjacent possibles that could be seen in something like a cat with a party hat on, and a lot of those images that are just a step or two removed from being an image that we as humans recognize show up as noise. It turns out that what we think of as an image isn’t all that random, and a lot of the pixels are highly correlated with one another, at least on a pixel-by-pixel basis. So if you feed a billion images into one of these models in order to train it, you’re going to get a lot of images that look highly similar, that are correlated with each other. And this is what Wolfram is talking about with the idea of an “inter-concept space”: these images generally represent something, or something close to it. It’s not an arbitrary space either, but one that’s aligned with our vision, with things we recognize – a “human-aligned inter-concept space” tied to our conception of things like cats and party hats.
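One quick way to see the “pixels are highly correlated” point – again, my own sketch rather than Wolfram’s code – is to compare the correlation between neighbouring pixels in a smooth, structured image with the same measurement on pure noise:

```python
import numpy as np

def adjacent_pixel_correlation(img: np.ndarray) -> float:
    """Correlation between each pixel and its right-hand neighbour."""
    left = img[:, :-1].ravel()
    right = img[:, 1:].ravel()
    return float(np.corrcoef(left, right)[0, 1])

rng = np.random.default_rng(1)

# A stand-in for a "real" image: a smooth gradient plus a little texture
x = np.linspace(0, 1, 256)
structured = np.outer(x, x) + 0.05 * rng.normal(size=(256, 256))

# Pure noise: every pixel independent of its neighbours
noise = rng.normal(size=(256, 256))

print(f"structured image: {adjacent_pixel_correlation(structured):.3f}")  # ~0.95
print(f"random noise:     {adjacent_pixel_correlation(noise):.3f}")       # ~0.00
```

Most possible arrangements of pixels look like the second case; the images we actually recognize sit in the small, highly correlated corner of that space, which is roughly the point Wolfram is making.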

But this “inter-concept” space is not only representative of the context collapse, it’s fueled by it. It requires the digitization of everything, like the billion images that go into it in order for it to be trained. But it also, you know, squishes everything together. Again, our technical term: smoosh. And this smooshing brings us back to TikTok, because everything is there. That’s part of what’s contributing to the noise, but it’s also why there’s such a volume of signal there. You can likely find something, and it’ll get algorithmically delivered to you if you like it enough or interact with it. But this is also how it’s captured so much of the public sphere in a way that the owner of Twitter wishes his platform could, and that idea of the context collapse seems to be made manifest in these apps that are trying to capture the public sphere: they have to capture everything, everything all at once.

And so we’re seeing the rise of the Everything App, the everything website, much like we talked about a few weeks ago in episode 10 with the rise of AOL, and how as a portal it was, for a lot of users, the internet – the entirety of it. We’ve seen it subsequently with Facebook, and we’re seeing a number of competitors, sometimes in different places around the world, catering to a particular locality, but all of them trying to be one thing to all people, to all customers. In China, we see it with the rise of WeChat, which allows for calls and texts and payments as well. In Moscow, we can see it with the various apps run by Yandex, which you can use for everything from getting a taxi to communications to your apartment, with a lot of tools built in; it actually has its own currency system as well. A user by the name of Inex Code posted a list of everything that you can do with Yandex in Moscow. In North America, we can see it not just with Facebook, but also with Apple and Google and Amazon too: the breadth of services they have available, and the continual expansion of services they’re adding to their apps and platforms. And when Elon Musk bought Twitter, it was theorized that one of the things he wanted to do was turn it into a WeChat-like app. His recent comments about LinkedIn, and the option of adding that kind of functionality to the app now known as X, indicate that he may well be headed in that direction. And finally, the continual expansion of TikTok to now include text posts as well as a marketplace and music sales indicates there’s still more growth in that area too. As each of these walled-garden “everything” apps tries to gather up more functionality, we can see that this is one response to the context collapse: to provide a specific context within their enclosure.

It’s an effort to reduce the noise, or at least to turn it into something that happens outside their walls.

But setting up a wall may not be the only solution. It’s one way, obviously – that element of enclosure that’s taking place – but there are other ways to deal with it as well. One is the way we looked at with the Fediverse, where an everything app can be developed as long as it’s open, and there’s a lot of opportunity and possibility there, but that openness requires a fair amount of work by the user. It requires curation. It lacks the algorithmic elements that drive the enclosure of the other apps. Now, that doesn’t mean an algorithmic element couldn’t work for the Fediverse; it’s just that it currently isn’t set up for one, and it may require a lot of effort to bootstrap something like that and get it going.

And absent an algorithm, that kind of points the way to the last two solutions that we have. The first is just to lean into it, to accept that this change has happened to our society with the advent of digital media and everything being available. If the context has collapsed, that’s fine; that’s just the way things are now, and we just have to learn to deal with it. And that leads into the second option, the one David Brin called The Transparent Society: everything is available, and we’ll have to change our patterns of use. If we recognize that aspects of our culture are socially constructed, then we learn to live with that, and we can change and adjust as necessary. Things haven’t always been the way they are currently, and they don’t have to continue that way either. Because the last way forward to deal with context collapse is to look at some areas of our culture that have already experienced it and see how they’ve dealt with it. Context collapse is intimately tied to that idea of the availability of everything, as well as – in video terms, what Wesch was talking about – the instant replay.

And the two areas that have managed that, and have continued to succeed in an era of streaming media and context collapse, are pro sports and pro wrestling. The way they’ve succeeded is by recognizing that they have their particular audience, that their audience will find them, and that they don’t have to be everything for all audiences. They’ve also succeeded by privileging the live, the now, the current event – something that revels in the instant replay, the highlight reel, the high spot, but that also gets to keep producing new content, because there might be a new highlight reel or a high spot in the very next game or match or show or finals or pay-per-view. There’s always something new coming down the pipeline, and you’d best not look away. It turns out that the best way to deal with the noise is to create something that cuts right through it.

Once again, I’m your host, Dr. Implausible. It’s been a pleasure having you with us today. I hope you join us next time for episode 14, when we investigate the phenomenon of the dumpshock. In the meantime, you can find this episode and all back episodes at our new online home at www.implausipod.com, and email me at drimplausible at implausipod dot com. Until the next time, while you’re out there in the busyness and the noise, have fun.

References and Links:

Brandtzaeg, P. B., & Lüders, M. (2018). Time Collapse in Social Media: Extending the Context Collapse. Social Media + Society, 4(1), 2056305118763349. https://doi.org/10.1177/2056305118763349

Gioia, T. (2022, January 23). Is Old Music Killing New Music? The Atlantic. https://www.theatlantic.com/ideas/archive/2022/01/old-music-killing-new-music/621339/

Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27(3), 379–423.

Wesch, M. (2009). YouTube and You: Experiences of Self-Awareness in the Context Collapse of the Recording Webcam. Explorations in Media Ecology, 8(2), 19–34.

https://www.quantamagazine.org/how-claude-shannons-information-theory-invented-the-future-20201222/

https://journals.sagepub.com/doi/10.1177/2056305118763349

Generative AI Space and the Mental Imagery of Alien Minds