Baked In: Social Media and Tech Determinism

(this was originally published as Implausipod E0032 on May 26th, 2024)

https://www.implausipod.com/1935232/episodes/14896508-e0032-baked-in-social-media-and-tech-determinism


How much of your experience online is dictated by the environment you’re
in, and how it was constructed? What if you rebuild Twitter, and it
still ends up being toxic?  Did you fail, or succeed without knowing it?

These are the kinds of questions that arise when we look at technology from a
deterministic point of view: that technology is the driver of cultural and social change and growth. And while this ideology has its adherents, many of the assumptions about technology and tech determinism are already Baked In to the way we deal with tech in the world.


What if you rebuilt Twitter from the ground up, and it ended up being as toxic as the old one? Did you do something wrong, or were you just wildly successful? That’s the question we’re trying to address in this week’s episode, but perhaps we need to approach this from a different angle. So let me ask you: when you visit a website online, or use an app on your phone…

How does it make you feel? Do you feel happy? Amused? Upset? Angry? Enraged? And did it always feel that way? Did it used to feel good and then perhaps it took a turn for the worse? It became a little bit more negative? If it doesn’t make you feel good, why do you keep going back? Or perhaps you don’t, perhaps you move on to someplace new, and for the first little while it’s cool, it feels a lot like the old place used to be, but you know, before things changed, before other people came along, or before the conversation took a turn for the worse.

But the question is: How long before this place starts going downhill too, before the same old tired arguments and flame wars that seem to follow you around through the years and decades keep catching up to you? I mean, maybe it’s you, there’s always a chance, but let’s take a moment and assume we’re not slipping into solipsism here, as this seems to be a much more widely reported experience, and ask ourselves if maybe, just maybe, that negativity that we experience on the internet is something endemic.

It’s part of the culture, it’s baked in.

Welcome to The ImplausiPod, an academic podcast about the intersection of art, technology and popular culture. I’m your host, Dr. Implausible. And in this episode, we’re going to address the question of how much of your experience online is shaped by the environment you’re in and how it is constructed.

Because there is no such thing as a natural online environment, all of these things are constructed at some point, but it’s a question of what they’re constructed for. We know that social media spaces can often be constructed for engagement, which is why it lends itself to rage farming and trolling. But how far back does it go?

We know we see commonalities in everything from Facebook and Twitter, to YouTube comment sections, to web forums, to Usenet, to email. Are these commonalities that we see related to the technology? Is there an element of what’s called technological determinism at play? Or are the commonalities that we see just related to the way that humans communicate, especially in an asynchronous environment like we see online?

Hmm. Or perhaps it’s something cultural. It’s part of the practice of using these tools online. And as such, it gets shared and handed down, moves from platform to platform to platform, which is what we seem to see. Now it could be a combination of all of these things, and in order to tease that out, we’re going to have to take a look at these various platforms.

So I’ll start with the one that was the genesis for this question for me. Mastodon, which is part of the ActivityPub protocol. Mastodon in many ways replicates the functionality of Twitter along with the look and feel with toots replicating the tweets, the short microblog posts that may include links or hashtags, an image or short video clips.

And depending on the client you’re using to access it, you’d hardly notice the difference. It’s this similarity that led me to the question that started off the show: what if you rebuilt Twitter and it still ended up being toxic? So in order to explore this question, we’re going to take a quick survey of the field and look at the problems that can be seen in a lot of different social media platforms.

Then we’ll go into more depth on the potential causes that we mentioned, including the technology, the nature of communication online, as well as cultural factors, and then conclude by seeing if there might be a more hopeful or optimistic way that we can approach this and our online interactions.

So when we look at these online platforms, we can see how they’re all just a little bit broken. While we’re an overwhelmingly positive podcast here, and we try to accentuate the positive elements that exist in our society, I’ll admit: sometimes it’s a little bit hard. And when we start looking at online platforms, we can see that, much like families, each dysfunctional one is dysfunctional in its own way.

However, that being said, we might be able to tease out a few trends by the end of this. Our baseline for all of this is, of course, going to be Twitter. Whether you call it X or Twitter, it’s been one of the most studied of the social media platforms, and that gives us a wealth of data. It also allows us to make a clear distinction, by calling it Twitter prior to the acquisition by Elon Musk and X afterwards. But regardless of whether we look at Twitter or X, the results aren’t great.

In a recent study from the University of Toronto by Victoria Oldemburgo de Mello, Felix Cheung, and Michael Inzlicht, the authors find that there are no positive effects on user well-being from engaging with X. Even the occasionally touted greater sense of belonging from participating in the platform didn’t lead to any long-lasting effects.

Instead, what they found was an immediate drop in positive emotions, so things like joy and happiness are right out the window, and there was an increase in outrage, political polarization, and boredom. So using X, even if you’re a little bit bored, is probably a net negative. And this is just from a recent study.

It isn’t counting the systemic changes that have taken place on the platform since the acquisition by Elon Musk: the platforming of hate speech, the reduction of moderator tools, the increased attack vectors from removing the ability to block harassers, and all the other changes that have taken place as well, including creators just up and leaving the platform.

But that’s the state of things right now. The question is: did Twitter always suck? And the answer is, kind of, yeah. The University of Toronto study we mentioned was collecting data back in 2021, prior to the acquisition by Elon Musk, and if things have gone downhill since then from the reported outrage and lack of joy, then I can’t really imagine what the place is like now.

But enough about the service formerly known as Twitter. When looking at some of its competitors, what are their downsides? Are they as toxic too? There’s Threads, the Facebook-owned offshoot of the Instagram platform, primarily focused on text-based messaging. It launched in July of 2023, coming together rather quickly, seemingly as an attempt to capitalize on the struggles that Twitter was having, struggles that soon led to it being rebranded as X later that month.

One of the challenges with Threads is that they’re adding features as they go, and while they leverage their existing user base from Instagram, it hasn’t led to the level of active retention that one might think. Despite the lack of explicit advertising, they still have issues with spam posts, for example.

And then there’s the whole challenge with Facebook ownership in general, which we’ve discussed in previous episodes, like when we talked about Triple E back in episode 15. Bluesky, or bsky, was another Twitter alternative, built on the prospect of having an open source social media standard, and up until May 5th of 2024, it had Jack Dorsey, a former Twitter CEO, on its board.

His departure is indicative of some of the challenges that lie there: that it’s somewhat lifeless with minimal community involvement, and that despite being built as a decentralized platform, until that gets rolled out, it very much has a centralized form of control. Usenet, the almost-OG social network built off of the Network News Transfer Protocol, or NNTP, that we talked about a lot back in episode 10, still exists, technically, but the text-based servers are mostly dead, with tons of spam and minimal community, though there are a few diehards that try and keep it going.

The existence of the binaries groups there as a file transfer service is a completely separate issue far beyond what we’re talking about here. LinkedIn, the social network for business professionals, feels incredibly inauthentic and performative, and it feels like the functionality that you find there would be better served by being on almost any other social media platform.

Reddit, with all the pains it had in 2023 with its shift to the IPO and the strike by its various moderators, is still a going concern with high user counts, but a lot of that content may now be fed into various AI platforms, turning conversations into just so much grist for the mill. Stack Overflow, the tech-based Q&A site, has done much the same thing, turning all that conversation into just so much AI fodder.

Platforms like Discord are, again, under corporate control, which may lead to all the content therein being memory-holed. And that brings us back to Mastodon, which, despite all the promises of an open social web, can have, in certain places, an incredibly toxic community. It’ll have federation wars, as various servers join or disband based on ideological differences with other active servers; there are access problems for a number of different users; there are differing policies from server to server; and there’s inconsistent moderation across all of it. And despite all these problems, it might be one of the best options when it comes to text-based social media.

So this brings us back to our main question, why do they all suck? Is it something that’s baked in? Is it something that’s determined by the technology?

So let’s take a moment and introduce you to the idea of technological determinism. Tech determinism is a long running theory that’s existed in some form or other since the 19th century. Technological determinism posits that the key driver of human history and society has been technology in its various forms.

It leads to a belief that innovation should be pursued, sometimes at all costs, and that the solution to any issue is more technology, even if those issues are caused by other technologies in the first place. Tech determinism exists on a bit of a spectrum, where its adherents can be more or less hardcore with respect to how much technology determines our history and how much attention is paid to any explanation outside the scope of technology.

According to technological determinism, all social progress follows tech innovation, and there’s a certain inevitability that’s part and parcel with that. If I was able to license music for this show, I’d queue up You Can’t Stop Progress by Clutch off their 2007 album From Beale Street to Oblivion. But, uh, in this case I’ll just ask you to go to YouTube or your other music streaming site, or grab your CD off the shelf and put it in and play along.

But back to our spectrum. Hardcore technological determinists don’t think society or culture can have any impact on technology, or at least the direction of it. And that goes back to that inevitability that we were talking about. There’s a softer form of technological determinism as well, where the technology can be dependent on social context and how it is adopted.

And this ties back to what Anabel Quan-Haase describes as social determinism: social norms, attitudes, cultural practices, and religious beliefs are perceived as directly impacting how technology is used and what its social consequences are. This is a little bit more of a nuanced view, and takes us away from the instrumental view, where technology is seen as neutral and just a tool to be used.

But as pointed out by Langdon Winner back in 1980 in a rather famous article, “Do Artifacts Have Politics?”, that neutrality is something that’s very much circumscribed. The design of a tool can have very specific impacts on how it is used in society. And I think this starts bringing us back to those design spaces we were talking about, those online platforms.

Each of them presents itself in various ways and suggests various actions that might be taken. These are what Don Norman calls affordances, or the perceived action possibilities of a certain piece of technology. When it comes to online spaces, it doesn’t matter whether that space is presented to the user on a smartphone or on a desktop computer, laptop, or some kind of terminal: the preferred form of action is going to be presented to the user in the most accessible place to reach.

This is why you’ll see the swipe or like or comment buttons presented where they are. On a smartphone, that’s anything that’s in easy reach of the thumb of a right-handed user. For X, it’s that little blue button in the right-hand corner, just begging you to use it. And by reducing the barrier to entry to posting, you get a lot of people posting really quickly.

Reacting emotionally to things, getting the word out there. Because, heaven forbid, somebody is wrong on the internet. And this leads us to the second factor that may be contributing to such horrible online communication: the very nature of online communication itself. And this has been recognized for a long, long time.

At least 20 years. On March 19th, 2004, in a post titled “Green Blackboards and Other Anomalies”, the world was introduced to the GIFT theory. And we’ll call it the GIFT theory because we’re on the family friendly side of the podcast sphere. As Tycho from Penny Arcade explained at the time, a normal person plus anonymity and an audience equals a GIFT.

Because that anonymity was kind of part and parcel with online interactions, you really didn’t know who you were dealing with, and all identities online were constructed to a degree, which might lead people to say things or behave online in ways that they wouldn’t if they were face to face with the person.

And because having an audience can allow someone to get a larger reaction, people might be predisposed to behave that way even if they thought their words could be traced back to them. Now, this is 2004, so pre-social media. Twitter and Facebook would take off after that, and it became slightly more common for people to post using their real names, or at least a slightly more recognizable one.

And we found out that that really didn’t change things at all. So perhaps it has more to do with the audience rather than the anonymity. Regardless, the culture that had developed through early Usenet and then AOL chat rooms, through to online gaming, instant messenger apps, and IRC, kept encountering the same problems.

Which the tech determinists would take as a sign that the technology is the cause. But what if the social determinists are right? Social determinists being the flip side of the tech determinists, holding that the interactions that take place are due to social cues. This leads us to our third potential cause.

What if it’s the culture of online interaction? In 1993, Howard Rheingold published one of the first books on online societies, The Virtual Community, subtitled Homesteading on the Electronic Frontier. This is based on his experience as a user in The Well, the Whole Earth Electronic Link, a BBS based in San Francisco run by computer enthusiasts that were part of the Whole Earth catalog.

Following up on his previous books on hackers and virtual reality, he wrote a book that took a wide-ranging survey of the state of the web in 1993. Or at least what we now call the web, as the book focused on BBSs and other portals like The Well, terminal systems like France’s Minitel, commercial services like CompuServe, and email, all under the umbrella of CMC, or computer-mediated communication.

Though this acronym is now largely forgotten, save for in certain academic circles, it bears repeating and reintroduction for those unfamiliar with the term, as the distinction it makes is explanatory. (And not that I’m saying a term is acting with intentionality here; I’m not that far down the memetic rabbit hole. Rather, we can consider it as the focus for our agentive discussion.)

Rheingold was looking at early, cross-cultural implementations of the web, when these were largely local phenomena, national at best, and rarely at the international level that we now expect. He looked at France’s Minitel and CalvaCom, as well as sites in Japan, and The Well on the west coast of the United States.

Yes, they could all be accessed from outside those regions, but long distance was costly and bandwidth was low. And time and again, the same phenomena were observed. Talking with Lionel Lombroso, a participant with CalvaCom in France, about his experiences in the 80s, one of the biggest challenges was dealing with the perpetual flame wars, in this case one involving Microsoft and the evils therein.

Lombroso goes on to state: “I think online is a stage for some people who don’t have opportunities to express themselves in real life.” Again, this is the late 80s and early 90s. HTTP is just being invented around the same time. The web as we know it doesn’t exist yet, but online communication, computer-mediated communication, does.

And they’re seeing this already: arguments based on politics or ideology lead to intractable discussions, which invariably force decisions to be made between censorship and free expression, and attempts to limit the flame wars invariably shift to this regardless of the forum, as has been seen in The Well, TWICS, Calva, and so many other sites as well.

So, if antagonism online goes back this far, if we can see the roots of the quote-unquote “seven deadly sins”, then perhaps we’re close to finding our answer: antagonism online can largely be a cultural thing. And just as an aside, ask me sometime about those seven deadly sins, and I can tell you how to tell if you’re stuck in a 7G network.

If online toxicity is well and truly baked in, being part and parcel of the culture from the very beginning, is there a way to fight back against it? One of the biggest problems is the expectations of use. With Mastodon, for instance, which looks and feels a lot like Twitter in many ways, a lot of the initial participants came directly from Twitter, bringing all their old habits and patterns with them, for good or ill.

The tech is distinct, but the new tech looks like the old tech and provides the affordances of the old tech, so it gets used in similar ways by people who expect it to behave in a certain way. And they may not be entirely conscious of that. So, much like Taylor Swift sings: “It’s me, hi, I’m the problem, it’s me.”

So how might this be combated? There’s a number of options, and they’re not mutually exclusive. The first is to change the interface in order to change the interaction. This may be productive, as it would shake the users out of assumed patterns of use. However, it’s double edged, as one of the elements that makes a new platform attractive is its similarity to other existing platforms.

And to be clear, despite the similarity of interface, tools like Mastodon are still facing an uphill battle in attracting or retaining users that are leaving X and/or Twitter. And I say “and/or” because, despite it now being X, we’re talking historically, over the entire period that tools like Mastodon have existed.

The second option can be heavier moderation, and this can be one of the big challenges for the Fediverse, which largely operates on donations and volunteer work. This approach has been taken by some private entities, and the DSA in the EU, that’s the Digital Services Act, has required large social media platforms to disclose the number of moderators they have, especially in each language.

And in articles from Reuters and Global Witness published in November and December of 2023, we got a look at what some of those numbers were. For example, X had 2,294 EU content moderators, compared with 16,974 for YouTube, 7,319 at Google’s Play service, and another 6,125 at TikTok. And those numbers are largely for the English-language moderators.

The numbers drop off rapidly for non-English languages, even in the EU. And if large multinational corporations are struggling with the ability to moderate online, the largely volunteer versions that exist in the Fediverse have even less recourse.

So a third solution may be education on social norms and online toxicity. Here, networks like the Fediverse have some advantages, as they’ve been able to put in tools to assist users and creators that can modify the content in certain ways: content warnings, which can hide certain content by default; alt text and media descriptions for people who use screen readers; and camel case for hashtags, in order to increase readability.

But all of this is a long and constant battle, as it’s on the user to institute these tools when posting. And we’ve seen earlier forms of this happen online, as recounted in the Eternal September (you can check out our old episode on that). But, as the name implies, it keeps happening, as platforms need to acculturate the influx of new users in order for them to use the platform successfully.

And as those new users still carry all the same expectations of use that they’ve picked up in every online interaction they’ve had up to that point in time, it’s still going to be a sticking point. So maybe we have to put it on the user, which leads us to our fourth option: that the user needs to be the change that they want to see.

And I can see reflections of this in my own online interactions: I realize maybe I wasn’t the best online citizen in the past, but, you know, we can all reflect on how we interact online and try to do better in the future. One simple method would be to follow George Costanza’s lead. And I’m serious about this. In Seinfeld season 5, episode 22, the episode called “The Opposite”, Costanza tries doing the opposite of his instinct for every choice and interaction he has, and his life ends up improving because of that.

He realizes that, hey, much like Taylor Swift, he might be the problem, and he tries to do better and make conscious decisions about how he’s interacting with people. I don’t know if that’s something you can implement in software, but there are methods, like notifications that pop up when somebody’s going to reply to someone they’ve never interacted with before.

Or, for instance, notifications for users when they’re about to post something online, letting them know that, hey, this is being distributed to a mass audience and not just to your 12 closest friends. The other option for trying to be the change you want to see would be actively working to make the internet a better place.

And we can see this in things like the happiness project on March 20th, 2024, the second day of the third FediForum, an unconference where individuals come together online to discuss things related to the Fediverse, the ActivityPub protocol, Mastodon, and other ActivityPub tools. Evan Prodromou, a co-author of ActivityPub, convened a panel on happiness in the Fediverse, and the discussion centered around what makes us happy when we engage online.

How do we build those strong social ties and positive engagement that we’d love to see in our own lives? How do we ensure that our social networks lead to positive mental and physical health, well-being, and a positive mindset overall? Those are not easy questions, by any means. One of the things the participants noted is that happiness requires active work, in that posting positive things requires an act on the part of the creators there, and it’s not always easy.

There can be a number of very stressful things that are inherent in social media, especially in the ways we use them now. As I participated in the panel, I mentioned some of the things that I’d brought up previously, both in this episode and in earlier ones, letting them know that we may need to be much like George Costanza and try to do the opposite.

But I also left the panel with the question that began this episode: how much of your experience online is dictated by the environment you’re in and how it’s constructed? We need to consider both the architecture and the practices. And perhaps this is ultimately the solution: we create community by building a better place, supplemented by the technology, but created through the culture and patterns of use.

It has to be explicit though, as good interactions may go unnoted. And those who are unaware of them, or those who are new, may not notice that things are done differently. Ultimately, all these things can be incredibly positive for community. However, what happens when your community is taken away from you?

We’ll look at that possibility in the next episode of the ImplausiPod.

Once again, thank you for joining us on the ImplausiPod. I’m your host, Dr. Implausible. You can reach me at Dr. Implausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 Share-Alike license.

You may have noticed at the beginning of the show that we describe the show as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated. You may have also noted that there was no advertising during the program, and there’s no cost associated with the show, but it does grow through the word of mouth of the community, so if you enjoy the show, please share it with a friend or two and pass it along.

There’s also a buy me a coffee link on each show at implausipod.com, which would go to any hosting costs associated with the show. Over on the blog, we’ve started up a monthly newsletter. There will likely be some overlap with future podcast episodes and newsletter subscribers can get a hint of what’s to come ahead of time.

So consider signing up, and I’ll leave a link in the show notes. Coming soon, we’ll be following up on this episode with what happens with the loss of online community, in an episode titled “Tick Tock Tribulations”. After that, we’ll have some special guests join us for a two-part discussion of the first season of the Fallout TV series, followed by a look at the emergence of the dial-up pastorale, and then the commodification of curation. I think those episodes will be fantastic, and I can’t wait to share them with you. Until then, take care, and have fun.

Bibliography:
Chee, F. Y., & Mukherjee, S. (2023, November 10). Musk’s X has a fraction of rivals’ content moderators, EU says. Reuters. https://www.reuters.com/technology/musks-x-has-fraction-rivals-content-moderators-eu-says-2023-11-10/

Drolsbach, C., & Pröllochs, N. (2023). Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database (arXiv:2312.04431). arXiv. http://arxiv.org/abs/2312.04431

FediForum.org. (n.d.). FediForum | Happiness in the Fediverse. Retrieved May 26, 2024, from https://fediforum.org/2024-03/session/4-d/

Green Blackboards (And Other Anomalies)—Penny Arcade. (n.d.). Retrieved May 19, 2024, from https://www.penny-arcade.com/comic/2004/03/19/green-blackboards-and-other-anomalies

Oldemburgo de Mello, V., Cheung, F., & Inzlicht, M. (2024). Twitter (X) use predicts substantial changes in well-being, polarization, sense of belonging, and outrage. Communications Psychology, 2(1), 1–11. https://doi.org/10.1038/s44271-024-00062-z

Rheingold, H. (2000). The Virtual Community: Homesteading on the electronic frontier. MIT Press.

GPT Squared

(this was originally published as Implausipod E0031 on March 31, 2024)


https://www.implausipod.com/1935232/episodes/14799673-e0031-gpt-squared


Are the new GPTs – Generative Pre-trained Transformers – powering the current wave of AI tools actually the emergence of a new GPT – General Purpose Technology – that we will soon find embedded into every aspect of our lives? Earlier examples of GPTs include tech like steam power, electricity, radio, and computing, all tech that is foundational to our modern way of life. Will our AI tools soon join this pantheon as another long wave of technological progress begins?


Let’s start with the question. Do you remember a time before electricity? Unless this show is vastly more popular with time travelers and certain vampires than I thought, the answer is probably not. But now, in 2024, it’s literally everywhere. It’s sublimated into the background. It’s become part of the infrastructure, and we no longer really think about it.

We flip a switch and the lights go on, and we can find a plug almost anywhere to recharge our devices and take that electricity with us on the go. But how long did it take to get to that point? The answer is: longer than you think. The means of generating electricity was invented in 1832, but it took half a century for it to become commercially viable, and then from there another 70 years to effectively transform our lives with everything from lights and appliances to communication devices like radios and televisions.

And even now we’re still feeling the effects of that transformation as we move to electric-powered vehicles for personal use. So across all those decades, it took a long time for electricity to go from concept to application to becoming a general purpose technology, or GPT. And in 2024, we’re just starting to feel the impacts of another GPT: the generative pre-trained transformers that are powering the current wave of AI tools.

So the question we’re really trying to answer is: are these current GPTs a new GPT, or what we might call GPT squared? That’s the subject of this week’s episode of the ImplausiPod.

Welcome to The Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in this episode, we’ll be exploring exactly what a GPT is, a general purpose technology, that is, and how they have had a massive impact on society. By looking at the definition and some commonalities amongst them, we’ll be able to evaluate whether the current GPT, the generative pre-trained transformers, are going to have the same impact, or whether they qualify as a GPT at all.

As always, I’m using a couple of references for this, and I’ll put the bibliography in the show notes so you can track back the people we’re citing here. For us, the two main sources are going to be Engines of Growth by Bresnahan and Trajtenberg, from 1992 (published in 1995), and Mordecai Kurz’s The Market Power of Technology: Understanding the Second Gilded Age, a book he published in 2023.

Kurz is a professor of economics at Stanford University, and his first book was published back in 1970, so he’s literally been doing this longer than I’ve been alive. And in addition to those, I’m sure we’ll fold in a few more references as required. Now, for the first half of this episode, whenever I mention GPT, we’re going to be explicitly talking about general purpose technology, so I’ll call out the AI tools when mentioned.

And we’ll get to the discussion of those in the second half after we talk about the cyclical nature of technological development. But for the moment, we should get right down to business and find out exactly what is the GPT. A general purpose technology is basically that, a technology that can apply broadly to virtually all sectors of the economy.

And by doing so, it can change the way that society functions. They do this by being pervasive, in that they can be used in a wide variety of functions. And they also do this by sublimating into the background: as David E. Nye notes in his history of the electrification of America, once they’re part of the infrastructure, we can stop thinking about them and use them in almost any function.

Now, it took a little while for electricity to get to that point, but that’s part of their nature: a general purpose technology will evolve and advance as it spreads throughout the economy. For previous instances of GPTs, that led to productivity gains in a wide number of areas, but even if we’re not specifically looking at productivity growth, we can still see how they have beneficial impacts.

Now, in the original study on GPTs, Bresnahan and Trajtenberg’s Engines of Growth from 1992, they looked at three particular case studies: steam power, the electric motor, and the integrated circuit. And by studying these GPTs, they were able to come up with some basic characteristics. The first and most obvious is that they’re general purpose, that the function they provide is generic, and because of that generic nature, it can be applied in a lot of different contexts.

If we think of all the ways that the continuous rotary motion that was provided by steam power and then electric engines has been adapted and serves throughout our economy, it’s massive, it’s fascinating. And once the production of the integrated circuit really started taking off in the 60s and 70s, it became a product that could be embedded in almost anything, and very nearly has.

This has obviously scaled over time as integrated circuits have followed Moore’s Law, providing exponential growth in the amount of circuitry that can fit in the same space, and complementary technologies like batteries have also improved and shrunk and been able to service the chips that have gotten more and more power efficient over time, leading to even more widespread adoption.
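To give a sense of what that exponential growth means in practice, here’s a minimal sketch in Python. The baseline figures (the Intel 4004’s roughly 2,300 transistors in 1971) and the strict two-year doubling period are illustrative assumptions, not numbers from the episode.

```python
# Toy illustration of Moore's Law: transistor counts doubling
# roughly every two years from an illustrative 1971 baseline.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Estimated transistor count under strict two-year doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Three decades of doubling takes the count from thousands to millions.
for y in (1971, 1981, 1991, 2001):
    print(y, round(transistors(y)))
```

Even this crude model shows why a technology that looked inefficient at first can end up embedded in everything: the amount of circuitry available at a given cost grows by orders of magnitude within a couple of decades.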

And this brief description hints at the second and third characteristics. The second one is that they have technological dynamism. That continuous work is done to innovate and improve the basic technology that makes it more efficient over time. This is why you often see that costs to use that GPT drop over time.

And that’s why it shows up in more and more parts of our society. The third characteristic of GPTs that Bresnahan and Trajtenberg talk about is innovational complementarities: technical advances in the GPT make it more profitable for its users to innovate, and vice versa. And we can see hints of that in how improving battery technologies went hand in hand with the development of integrated circuits.

One of the things that B&T note, especially in their examples of the steam engine and the electric motor, is that the function they provide isn’t necessarily obvious with respect to some jobs: the continuous rotary motion that is now used in everything from sewing to polishing to cutting wasn’t necessarily seen as something that could be adapted to those tasks.

So the people that were doing them were surprised when there was a technological replacement for the things that they were doing. Let’s put a pin in that idea and we’ll come back to it in about 10 or 15 minutes. Sometimes the way that the GPT is applied is inefficient initially, but as price and performance ratios improve, as the technology and the complementary technologies around it improve, then it becomes more feasible.

Sometimes those payoffs come quickly, but often it takes a long time for the technology to get distributed throughout the economy. In the case of electric motors, they note that it took about three decades to go from 5 percent of the installed horsepower in the U.S. to over 80 percent by 1930. And those productivity gains came because everything was getting electrified at the same time.

The infrastructure was there. And these all go hand in hand; they’re complementary. B&T quote at length from Rosenberg from 1982. Quote: the social payoff to electricity would have to include not only lower energy and capital costs, but also the benefits flowing from the newfound freedom to redesign factories with a far more flexible power source.

The steam engine required clumsy belting and shafting techniques for the transmission of power within the plant. These methods imposed serious constraints upon the organization and flow of work, which had to be grouped according to their power requirements close to the energy source. With the advent of fractionalized power made possible by electricity and the electric motor, it now became possible to provide power in very small, less costly units.

This flexibility made possible a wholesale reorganization of work arrangements and, in this way, made a wide and pervasive contribution to productivity growth throughout manufacturing. Machines and tools could now be put anywhere efficiency dictated, not where belts and shafts could most easily reach them. End quote.

Now, I want to state that I’m not a member of the cult of efficiency by any means. And Rosenberg’s claim here isn’t some contradiction of the Foucauldian argument that the architecture of society is shaped by the architecture of our factories and our other buildings, or of the idea that we have some Deleuzian form of control society because the very hierarchy of the way power is distributed within our factories lends itself to certain forms of social organization.

Far from it: I think these are saying exactly the same thing from different perspectives. The subtext of all these articles is that to get past those hierarchical forms, you need to find different ways to distribute the power. And by doing so, you can have very liberating effects on society as a whole.

Ultimately, this is what a GPT is; it’s what it provides. As Kurz notes, GPTs reflect fundamental changes in the state of human knowledge that occur maybe once in a generation or once in a century. They are technologies that enable a paradigm shift, and as Kurz notes, quote,

we need to distinguish between small changes within a given technological paradigm and revolutionary technologies that change everything.

End quote. A GPT serves as a founding technology, or platform, for further technological innovation. And because of that, it’s really important to note something about the work that goes into the development of a GPT. As both B&T and Kurz note, quote, it is vital to keep in mind the distinction between innovations within the paradigm of a GPT.

And innovation of a new technological paradigm, or a new GPT. Some GPTs, like electricity or IT, change everything and ultimately transform the entire economy. Others, like the discovery of DNA and genetic sequencing, change completely only a segment of the economy, like we’ve seen with CRISPR and genetic engineering.

And this idea of a paradigm shift is perhaps one of the most central features of the introduction of a new GPT, especially if you’re a large incumbent firm well established within the current dominant technological paradigm. For, you see, a paradigm shift threatens to upset the natural order of things, where the large incumbent firms exercise their market power and use small firms operating within that paradigm effectively as research labs, acquiring them if they happen to develop a patent or an innovation that proves useful or threatens the incumbents’ dominance within the marketplace. These patterns have been well observed historically within the development of electricity, with the rollout of radio and television, and with the early computing industry, and they can even be seen within 21st century industries, where a dominant player like Facebook will acquire an Instagram or a WhatsApp that may threaten their dominance.

And if they’re unable to acquire those competitors outright, they may exert their market power through lobbying or other efforts in order to challenge them, as we’re seeing currently in the United States with the proposed TikTok ban of March 2024. This is all standard operating procedure.

It’s the way these things seem to work. But when a new technology comes around, when the paradigm shifts, that’s when things get interesting. As Kurz notes, it’s a period where, quote, the most intense technological competition arises when a new GPT is invented. This leads to the eruption of economy-wide technological competition in which winners begin the long journey to consolidate market power.

During that period, we’ll either see new players rise to the level of the incumbents, pushing out the old dominant players that can’t adapt. Or we’ll see those dominant players do everything they can to try and keep their hand in the game. Which is what we’re starting to see already within the field of AI, which is one of the reasons we suggest it might be a new GPT, a General Purpose Technology.

But as we’ve hinted at, these things go in cycles, so let’s look at what some of the earlier ones were.

The idea that the economy behaves in a cyclical manner was first introduced almost 100 years ago, in the 1920s, by Nikolai Kondratiev, and these long cycles have subsequently been named in his honor. Kondratiev hypothesized that the cycles were due to the underlying technological basis of society, the technological paradigms that we’ve been discussing in the first half of this episode, with rising boom and bust cycles that take place over a period of roughly 50 to 60 years.

Now, the Kondratiev waves, or what are sometimes called long waves or carrier waves, are only one of the various economic waves or cycles that have been observed. Others, including those proposed by Kuznets, Juglar, or Kitchin, look at things like infrastructure or investment or even inventory for various products, and development time frames can have a major impact on all of this as well.

When you map these all out on a timeline, the various economic waves can all seem to interact, much like overlapping sine waves in a synthesizer, where the sum of the smaller waves occasionally comes together in a much larger peak, or ocean waves come together out of nowhere and suddenly form a rogue wave big enough to sink a ship.
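That analogy can be made literal with a few lines of Python. This is a toy model only: the periods below roughly match the commonly cited Kitchin, Juglar, Kuznets, and Kondratiev cycle lengths, but the unit amplitudes and the idea of simply summing sine waves are my own illustrative assumptions.

```python
import math

# Toy model of overlapping economic cycles as summed sine waves.
# Periods (in years) are rough, commonly cited cycle lengths.
periods = [3.5, 9, 18, 54]  # Kitchin, Juglar, Kuznets, Kondratiev

def combined(year):
    """Sum of unit-amplitude sine waves with the given periods."""
    return sum(math.sin(2 * math.pi * year / p) for p in periods)

# Scan a century for the year where the waves reinforce the most.
peak_year = max(range(0, 110), key=combined)
print(peak_year, round(combined(peak_year), 2))
```

Most years the waves partially cancel each other, but every so often they line up and reinforce, which is the rogue-wave effect described above.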

When Kondratiev originally observed that long, roughly 60-year period, he said that there were three phases to the cycle, a period of expansion, stagnation, and recession, and nowadays we’ve added collapse into that as well. When Kondratiev was originally writing in the 1920s, he identified a number of waves that had already taken place, starting with the Industrial Revolution, followed by the Age of Steam and the expansion of the railways, and then the subsequent rise of electric power that took place, as noted, between the 1890s and 1930s in North America.

Since then, we’ve seen the cycle continue in two other long waves: the rise of the internal combustion engine and the associated technologies it facilitated, like the automobile and air flight, and then the rise of the microchip and the transformation that computing and communication technologies had across the face of the modern world.

Now, the idea of an economic long wave has had an enduring appeal. People have taken the theory and cast it back earlier in time, and a lot of predictions have been made trying to guess what the sixth long wave will be; again, we’ve had five so far if we start at the Industrial Revolution.

Some of the possible contenders as a driver for the sixth Kondratiev wave include renewable energy and green technologies, as proposed by Moody and Nogrady, or biotechnology, as proposed by Leo Nefiodow back in 1996. And while those are strong contenders, they haven’t necessarily turned into the drivers of economic change that we might have expected.

They may still yet, but in some ways they lack the general purpose nature of the technologies that we’ve seen as drivers of previous long waves. In 2024, it looks like another contender has emerged: a GPT built out of GPTs, the generative pre-trained transformers that power our AI tools. So based on the three characteristics of GPTs that we mentioned earlier, we’ll take a closer look and see if those AI tools might qualify.

As we said earlier, with Bresnahan and Trajtenberg’s definition of a GPT, the three characteristics were general purposeness, technological dynamism, and innovational complementarities. Within their paper from 1992, they use the case study of semiconductor technology, which was the dominant GPT of the time, the 1990s.

At the time they were writing, the pervasiveness of computing had already been assumed, but initially that assumption wasn’t the case. Hence early prognostications like Thomas Watson of IBM famously saying, “I think there is a world market for maybe five computers,” a prediction that turned out to be drastically wrong.

By the 1970s, the integrated circuit was well developed and its use in the computing mainframes of large banks was already well underway. What allowed electronic circuits to become a general purpose technology was that they could work inside virtually any system. Those systems could be rationalized and broken down into their component activities, and each of those activities could be replaced with an integrated circuit or transistor at certain stages.

And if you can break the steps down to something that can be replicated by binary logic, like ones and zeros with gates opening or switches turning on and off, then you can apply it anywhere within a production process. It meant that a wide range of technological processes were, at their root, pretty simple operations.
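To make that concrete, here’s a minimal sketch in Python of the idea that any operation reducible to binary logic can be built from generic gates. The gate helpers are my own illustration, not anything from the episode: everything below is composed from a single NAND gate.

```python
# All Boolean logic can be composed from one generic gate (NAND),
# which is the sense in which a circuit is "general purpose":
# the same building block implements any step you can binarize.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits: returns (sum_bit, carry_bit)."""
    return xor_(a, b), and_(a, b)

print(half_adder(1, 1))  # prints (0, 1): 1 + 1 = binary 10
```

As B&T observe, doing a task this way can take far more steps than a purpose-built mechanism, but because the building block is generic and keeps getting cheaper, it tends to win anyway.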

But as B&T note here, even though substituting binary logic for a mechanical part was often very inefficient, because you might have to increase the number of steps in order to accomplish something with binary logic, as the price dropped on the circuits and more and more processes could be included within one circuit, it became much easier to actually implement electronic circuits within the system.

And as the costs came down and the processes were improved, they became more widely implemented within a lot more sectors of the economy, to the point that they’re now basically everywhere. So, do our current GPTs, the current crop of AI tools, exhibit these same characteristics?

Is there a general purposeness to them? Well, a qualified yes. I think when it comes to the current AI tools, we need to recognize a few things. The first is that they’re part of a much longer process: a lot of the tools that we’re seeing right now were, two years ago, called machine learning tools, and they’ve just been rebranded as AI tools with the popularity of ChatGPT and some of the AI art tools like Stable Diffusion and Midjourney.

So both the history of the technology and its implications go back much further, and its actual uses are much broader than we’re currently seeing. Thinking about the range of industries where I’ve seen AI tools adopted, it far exceeds just the large language models popularized by ChatGPT, or the art tools that we’re increasingly seeing online.

We’re seeing machine learning algorithms deployed in everything from photography to astronomy, to health, to production, to robotics, to website design, to audio engineering, and a whole host of industries. And this partially explains why we’re seeing so many companies involved, which feeds directly into the second characteristic of GPTs: the dynamism, the continuous innovation that’s being brought forth by the companies that are currently developing those AI tools.

Now, is every one of them going to be a hit? No, there are a lot of places where AI absolutely should not be involved. But some of them are going to be creating tools that are well suited to the application of AI. And just as the early days of electricity and radio and television all saw a lot of different ways that people were trying to apply the new technology to their particular field or product or problem, we’re seeing a lot of that with AI right now, as any company that has a machine learning model is either rebranding it or adapting it as AI. I think a lot of people are recognizing that AI tools could be the general purpose technology that would be applicable to whatever their given field is.

There’s definitely a speculative resource-rush component that’s driving some of this growth. A lot of people are getting into the market, but as Mordecai Kurz points out, there’s a difference between working within the new paradigm created by a GPT, which a lot of these companies are doing, and working directly on the GPT itself. Those working directly on the AI tools, like OpenAI, are the ones that are looking to become the new incumbents, which goes a long way in explaining why Microsoft has reached out and partnered with OpenAI in the development of their tools.

Incumbents that are lagging behind in the development of the tools may soon find themselves locked out, so a company that was dominant within the previous paradigm, like Apple, that currently doesn’t have much in the way of AI development, could be in a precarious position as things change and the cycle of technology continues.

Now, the last characteristic of a GPT was the complementarity that allowed for other innovations to take place. And I think at this point, it’s still too soon to tell. We can speculate about how AI may interface with other technologies, but for now, the most interesting ones look to be things like robotics and drones.

Seeing how a tool like OpenAI’s can integrate with the robots from Boston Dynamics, or the recent announcement of the Fusion AI model that can provide robotic workers for Amazon’s warehouses: both hint at where some of this may be going. It may seem like the science fiction of 30 or 40 or 50 years ago, but as it was written back then, the future is already here, it’s just not evenly distributed yet.

Ultimately, the labeling of a technological era or a GPT or a Kondratiev wave is something that’s done historically. Looking back from a vantage point, it’s confirmed that, yes, this is what took place and this was the dominant paradigm. But from our vantage point right now, there are definitely signs that the GPTs, the generative pre-trained transformers, may be the GPT we need to deal with as the wave rises and falls.

Once again, thanks for joining us on this episode of the Implausipod. I’ve been your host, Dr. Implausible, responsible for the research, writing, editing, mixing, and mastering. You can reach me at drimplausible at implausipod.com, and check out our episode archive at implausipod.com as well. I have a few quick announcements.

Depending on scheduling, I should have another tech based episode about the architecture of our internet coming out in the next few weeks. And then around the middle of April, we’ll invite some guests to discuss the first episode of the Fallout series airing on Amazon Prime. Or streaming on Amazon Prime, I guess.

Nothing’s really broadcast anymore. Following that, to tie in with another Jonathan Nolan series and its linkages to AI, we’re going to take a look at Westworld season one. And if you’ve been following our Appendix W series on the sci-fi prehistory of Warhammer 40,000, we’re going to spin the next episode off into its own podcast.

Starting on April 15th, we’re currently looking at Joe Haldeman’s 1974 novel, The Forever War. So if you’d like to read ahead and send us any questions you might have about the text, you can send them to drimplausible at implausipod.com. We will keep the same address, but the website for Appendix W should now be available.

Check it out at appendixw.com, and we’ll start moving those episodes over to there. You can also find the transcript-only version of those episodes up on YouTube. Just look for Appendix W in your search bar. We’ve made the first few available, and as I finish off the transcription, I’ll move more and more over.

And just a reminder that both the Implausipod and the Appendix W podcast are licensed under a Creative Commons ShareAlike 4.0 license. And we look forward to having you join us with the upcoming episodes soon. Take care, have fun.


Bibliography
Bresnahan, T. F., & Trajtenberg, M. (1995). General purpose technologies “Engines of growth”? Journal of Econometrics, 65(1), 83–108.

Kurz, M. (2023). The Market Power of Technology: Understanding the Second Gilded Age. Columbia University Press.

Nye, D. E. (1990). Electrifying America: Social meanings of a new technology, 1880-1940. MIT Press.

Rosenberg, N. (1982). Inside the black box: Technology and economics. Cambridge University Press.

Winner, L. (1993). Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology. Science, Technology, & Human Values, 18(3), 362–378.

Appendix W 04: Dune

(this was originally released as Implausipod episode 30 on March 11, 2024)

https://www.implausipod.com/1935232/episodes/14666807-e0030-appendix-w-04-dune


With the release of Dune: Part Two in cinemas, we return to Appendix W with a look at Frank Herbert’s original novel from 1965. Dune has had a massive influence on the Warhammer 40,000 universe in many ways, especially when looking at the original release of the Rogue Trader game in 1987, in everything from the weapons and wargear, to space travel and technology, to the organization of the Imperium itself. Join us as we look at some of those connections.


Since its release in 1965, the impact of Dune has been long and far-reaching on popular culture, inspiring science fiction of all kinds, including direct adaptations for film and television, and perhaps a non-zero amount of inspiration for the first Star Wars film as well. But one of its biggest impacts has been in the development of the Warhammer 40,000 universe.

So with the release of Denis Villeneuve’s Dune: Part Two in cinemas on March 1st, 2024, I’d like to return to a series on the podcast we call Appendix W and look at Frank Herbert’s original novel Dune from 1965 in this episode of the Implausipod.

Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. So when we first started talking about Appendix W in the early days of the podcast back in September 2022, I posted a list, based on one I had put up on the blog a year prior, of some of the foundational titles for the Warhammer 40,000 universe.

Now, Warhammer 40,000 is the grimdark gothic sci-fi series published by Games Workshop. The Warhammer 40,000 universe was originally introduced in 1987 with a version they called Rogue Trader, which has become affectionately known as the Blue Book, and I think I still have my rather well used and worn copy that I picked up in the summer of 1988 on a band trip.

For the most part, Warhammer 40,000 is a miniatures war game, though the Rogue Trader version had a lot more in common with Dungeons and Dragons, and there are some roleplay elements in there. The intellectual property now appears in everything from video games, to action figures, to merchandise of all sorts, to web shorts, and a massive amount of fiction set in that universe.

As primarily a miniatures war game, it sits as a niche of a niche with respect to the various nerd fandoms, operating at a level far below Star Wars or Star Trek. But you might have heard more about it recently, with rumors of an Amazon Prime series and Henry Cavill, the former Superman and Witcher himself, being behind the scenes on that one, or talking about it positively on the various talk shows he’s appeared on. Other fans include people like Ed Sheeran, who’s been spotted building Warhammer model kits backstage at his concerts. By and large, despite its popularity, it’s managed to stay relatively under the radar in terms of mainstream attention compared to some of the other series that are out there.

It is what it is. Now, the material isn’t necessarily something that’s gotten a lot of scrutiny in the past, but that’s part of what we’re doing here on the Implausipod. The goal of the Appendix W series is to look at some of those sources of inspiration that got folded into the development of Warhammer 40,000.

And for those unfamiliar, what is Warhammer 40,000? Well, it’s a nightmare gothic future where humanity has fallen, basically. They’re still living with high technology that they no longer fully understand how to build and maintain; they are living in the shadows of their ancestors. Humanity has spread across the galaxy, across untold millions of planets, united under an emperor in the Imperium of Man, beset by a civil war nearly 10,000 years in the past that tore the empire apart, and now facing foes on all sides, with alien races both ancient and new vying with humanity for control of the galaxy.

Humanity is maintained in this universe by a massive interstellar bureaucracy that redefines the word Byzantine. And much of humanity lives on hive worlds, where massive cities cover the entire surface of a planet.

Ultimately, life for most of humanity in the Warhammer 40,000 universe is what Hobbes would call poor, nasty, brutish, and short. It’s not solitary by any means, there are way too many people around for that to be the case, but still. Now, as we covered in our previous episodes on Appendix W, Games Workshop is obviously a British company, and there is a particular British flavor to a lot of the sources that Warhammer 40,000 drew inspiration from.

And we’ve seen that in some of the sources that we’ve already looked at, like Space: 1999. But even though Frank Herbert is an American author, Dune has had such an impact on the development of sci-fi since its release that it has had just as interesting an impact on Warhammer 40,000. Now, I’m going to lay out the evidence here throughout the rest of this episode.

You can take it or leave it as you see fit, but in terms of structure, I’d like to do what we’ve done in previous Appendix W episodes and look at the work in terms of things like the military examples within the book. Now, not all the sci-fi influences that we list in Appendix W are military ones, of course, but as it’s a military war game, that’s a big part of it.

Then we’ll look at other elements of technology, and then cultural elements as well. A lot of Dune’s impact on the Warhammer 40,000 universe extends beyond the miniatures war game itself into the larger structure of the setting. So we’ll take a brief look at those too, even though that isn’t our focus.

And then even a work like Dune didn’t appear out of nothing, ex nihilo, so we’ll look at some of the other sources that were out there that inspired Dune itself. And then I’ll wrap up the episode with a brief discussion of the future of Appendix W, so stay tuned.

Now, looking at a work like Dune, you might think that the main source of inspiration is the planet Arrakis itself, with the hostile environment and the giant worms and everything. That’s actually one of the least influential elements. We do see the appearance of various, what Warhammer 40,000 calls, death worlds, planets that are very hostile to life, that serve as recruiting grounds for various troops within the setting, including various Imperial Guard, sorry, Astra Militarum regiments, including the Tallarn Desert Raiders.

But the biggest influence from Dune is the existence of the Empire and the Emperor. Within the book, the emperor is an active participant in the machinations that are taking place in the empire that they control. Whereas in Warhammer 40,000, the Emperor is a near-godlike figure that’s barely kept alive by the arcane technology of a golden throne, where they’ve been placed for the last 10,000 years since suffering a near-mortal wound in combat.

In Warhammer 40,000, the Emperor is not well, but their psychic power serves as a beacon that allows navigation throughout the rest of the galaxy for those who are attuned to it. But despite that difference, the other main takeaway from Dune is that the Emperor uses his legions in order to maintain control.

Within Dune, the Emperor lends out his personal guard, the Sardaukar, to engage in the combat on behalf of the Harkonnens against the Atreides. Quoting from the glossary included at the back of the original Dune novel, the Sardaukar are, quote, the soldier-fanatics of the Padishah Emperor. They were men from an environmental background of such ferocity that it killed six out of thirteen persons before the age of eleven.

Their military training emphasized ruthlessness and a near-suicidal disregard for personal safety. They were taught from infancy to use cruelty as a standard weapon, weakening opponents with terror. Within Warhammer 40,000, when the Emperor was still active, he had, of course, 20 legions of his Space Marines, the Adeptus Astartes, who were loyal to him.

Two of those legions became excommunicado and were stricken from the records, and another nine ended up turning traitor in a civil war known as the Horus Heresy. But the tie is very deep. I mean, both of these draw on some Roman influence, obviously, but still, the linkage directly from Dune to Warhammer 40,000 is strong, and much like the Roman Empire, both of these have the vast bureaucracy that I mentioned earlier.

Within Dune, of course, there’s the various noble houses that the Emperor is playing off against each other, like the Harkonnens and the Atreides, but there’s many more besides that. Within Warhammer 40,000, this can often be seen in the governors of various planets or systems, who are given a large amount of latitude due to the nature of space travel and the chance that systems could go without communications for hundreds or thousands of years. And the final major linkage would most likely be the religious one. Within Dune, it’s the role that the Bene Gesserit have behind the scenes, with their machinations taking place over decades or even thousands of years.

Within Warhammer 40,000, it’s the role of the Ecclesiarchy, the imperial cult, that reveres the Emperor as godlike. And as I’m saying this, I realize I’m only talking about the impact of the first Dune novel on Warhammer 40,000, and not the series as a whole. So as we look at later books as part of Appendix W, we’ll see how some of those other linkages come into play in how Warhammer 40,000 looked at launch and how it’s developed subsequently.

But for right now, we’ll just look at the impact that the Bene Gesserit have on the storyline within the novel. Now, despite all these deep linkages that really inform the setting, it’s with respect to the military technology that we see the influence that Dune really had on Warhammer 40,000. Despite all the advanced technology in the book, oddly enough it’s a defensive item that comes to the forefront.

One of the conceits that we see with Dune is that a lot of the combat takes place with melee weapons, with swords and knives. The reason for that is the shields. Reading again from the appendix in the back of the original Dune novel, it describes the defensive shields as, quote: the protective field produced by a Holtzman generator.

This field derives from phase one of the suspensor-nullification effect. A shield will permit entry only to objects moving at slow speeds. Depending on setting, this speed ranges from six to nine centimeters per second, and it can be shorted out only by a shire-sized electric field.

These are the shields that were visible early on in both movie adaptations, with the fight training between Gurney Halleck and Paul Atreides, the ones that made them both look like fighting Roblox characters in David Lynch’s 1984 adaptation. Within Warhammer 40,000, we can see evidence of those in the refractor fields that are widely available to various members of the Imperial forces.

These are fields that distort the image of the wearer and then bounce any incoming attacks away in a flash of light. Within the Dune universe these are so widely available that even the common soldiery will have them, though in Warhammer 40,000 they’re a little bit rarer. But as we said, it’s a fallen empire.

The other tool commonly available to the soldiery is the lasgun, which is described again in the appendix as a continuous-wave laser projector. Its use as a weapon is limited in a field-generator-shield culture because of the explosive pyrotechnics, technically subatomic fusion, created when its beam intersects a shield.

So even though they’re commonly available, they’re not widely used, because hitting somebody who is wearing a shield with one is like setting off a small nuke. And within Dune, those nukes, or atomics, remain one of the most powerful weapons available to the various houses and factions, to the extent that they’re kept under strong guard and rarely if ever used.

In fact, there’s a proscription on their use against human combatants. This is why Paul’s use of the nukes against the mountain range during the final assault doesn’t provoke sanctions from the other houses. Those sanctions could be as severe as planetary destruction, which in Warhammer 40,000 would be called Exterminatus, even though there it’s not typically framed as being done by nukes.

There are a number of other weapons that show up in various ways in Dune that also make their way into the Warhammer 40,000 universe: everything from the sonic attacks of the weirding modules to the crysknives that are used in ritual combat. And we can see other technological elements as well: the Fremen stillsuits, elements of which show up in the Space Marines’ power armor in 40K; the look and feel of the mining machines, showing up in the massive war machines of the 41st millennium, like the Baneblade or the Leviathan or the Capitol Imperialis; and even the ornithopters themselves, the flapping-wing flying machines that show up so prevalently in every adaptation of Dune.

All of these will appear at some point within the 41st millennium, even if they’re not present within Rogue Trader at launch in 1987. But it’s more than just the technology. It’s more than just the Emperor and his legions. It’s more than just the psychic abilities, which we’ve barely even touched on. There are two essential elements that deeply tie the Warhammer 40,000 universe to Dune.

And those two elements are two groups of individuals with very specific sets of skills, the Mentats and the Navigators of the Spacing Guild. Now, the Mentats are basically humans trained as computers to replace the technology that was wiped out in the Butlerian Jihad in the prehistory of the Dune universe.

For those just joining us in this episode, we covered the Butlerian Jihad in depth in the previous episode, episode 29. It was basically a pogrom against thinking machines that resulted in the destruction of all artificial intelligence, robotics, and even simple computers. Within Warhammer 40,000, the Butlerian Jihad can be seen in the war that took place against the Men of Iron, which led to the Dark Age of Technology, again in the prehistory of that universe. And while the Mentats themselves aren’t as directly prevalent, because obviously machines still exist, the attitude towards technology, that it’s treated as a religious element rather than something that’s known and understood, is widely prevalent throughout the universe.

The final element is the Spacing Guild. Within the Dune universe, it’s the spice that’s only available on Dune, the melange, that allows the Navigators to gain prescience and steer the ships, as the Holtzman drives fold space and move them rapidly through the stars.

Over time, through their exposure to the melange, the Navigators become something altogether no longer human. Whereas in the 41st millennium, the Navigators are outright mutants to begin with, whose psychic abilities allow them to see the light cast by the Emperor on Terra, the Astronomican, which serves as a lighthouse to guide everybody through the shadows of the warp.

Now, both of these are mentioned in Rogue Trader in 1987, but they show up much more commonly outside the confines of the miniatures board game where much of the action takes place. They’re prevalent in the fiction and a lot of the lore surrounding the game, even though they rarely function within it, at least within the confines of the Warhammer 40,000 game proper.

Now, Games Workshop has leveraged the IP into a number of different realms, including game systems like Necromunda, Battlefleet Gothic, and their various epic-scale war games. So some of those elements are more common in certain other situations, but the linkage between the two, between Dune and 40K, is absolutely clear.

Now, as I said at the outset, Dune had a massive influence on not just Warhammer 40,000, but basically sci-fi in general. Since its release, it has spawned five sequels by Frank Herbert himself, which extended the story, and Brian Herbert, Frank Herbert’s son, and Kevin J. Anderson have written subsequent stories within the same universe.

The galactic empire has been a common element throughout science fiction ever since, most notably within the works of George Lucas and the Star Wars series. I believe Lucas has stated somewhere that Dune was a partial source of inspiration, though some contest that it’s much more than partial, and that there are 16 points of similarity between the Dune novels and the original Star Wars film.

I think anybody reading the original novel and then watching the film may draw similar conclusions. But influence is a funny thing, and it works both ways, because just as Dune inspired any number of works, including massive franchises like Star Wars and Warhammer 40,000, Dune was in turn inspired by a number of sci-fi works written well in advance of its publication.

There are at least five works or series published before Dune came out that had elements that appear within the Dune stories. For the record, Dune was published as serials in 1963 and 1964, and came out as the full novel in 1965. Now, the first link, obviously, is Asimov’s Foundation, published as short stories in the 1940s and then as novels in the early 1950s.

Here we’re dealing with the decay of an already existing galactic empire, and by using math and sociology as a form of prescience, which is the same ability that Paul and the Bene Gesserit have, they’re able to predict the future and steer the outcome into a more desirable form. Does that sound familiar?

Asimov calls this psychohistory, and I’m sure if you’re watching the current TV series you’re well aware of that. But wait, there’s more. Next up is the Lensman series, written by E. E. “Doc” Smith, starting with Triplanetary, which was published in 1948. I mean, there’s aliens and stuff in it, but there’s a long-range breeding program run on certain human bloodlines in order to bring out their latent psychic abilities.

And then they’re tested with a device called the Lens, which can cause pain to people who aren’t psychically attuned to it, which, again, sounds familiar. Third up would be the Instrumentality series, by Cordwainer Smith. Now, there’s a novel, Norstrilia, which was published after Dune came out, but the short stories from the series came out starting in 1955 and on through the early 1960s.

In it, space travel is only made possible by a drive that can warp space, and by a guild of mutated humans who are able to see the path between the stars to get humanity where they need to be. In addition to that, the rulers of Earth are a number of noble houses that are continually feuding amongst themselves, and who through various technologies are extremely long-lived, almost effectively immortal.

Now, we’ve touched on some of that with the Instrumentality before, back in episode 18, and we’ll be visiting the Instrumentality again, at least twice more, in Appendix W, with a look at Scanners Live in Vain and then the Instrumentality series as a whole. So if you’re interested in more on that, go check out that episode and stay tuned for more.

Now, even the fighting around the giant spice harvesters has some precedent. In 1960, Keith Laumer published the first Bolo short story. In it, 300-ton tanks are controlled by sentient AIs, and the story is about how the fighting in and around those tanks goes. But of course, we know there’s no AI in the Dune universe because of the Butlerian Jihad.

Which Herbert got from Samuel Butler, who wrote about it in 1863 and then published it as part of a novel in 1872, which we talked about last episode and mentioned earlier. So, of course, this influence goes back almost 90 years before Dune came out. And, of course, the granddaddy of them all is probably Edgar Rice Burroughs’ Warlord of Mars.

Now apparently, according to an interview with Brian Herbert, the Dune series was originally proposed to take place on Mars, but it was decided against because of the cultural associations that we have with the red planet. And some of those associations obviously come from the tales that came before it.

Now, in addition to the sci-fi influences, there are other real-world influences, like the stories of Lawrence of Arabia, as well as Frank Herbert’s own observations of the sand dunes in northern Oregon and the reclamation project that was taking place there to bring back some of the land from the desert.

So all of these and more went into the creation of Dune. Now, don’t get me wrong, Dune is an amazing creative work, and it draws all these elements, and others we haven’t mentioned, together. It’s unique and interesting, and that’s why it’s as timeless as it is. But everybody draws influences from multiple places.

The creativity is in how it all gets put together. So we will continue exploring that creativity, of both the Dune series and the Warhammer 40,000 series, in episodes to come.

Once again, thank you for joining us on the ImplausiPod. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausiblepod.com, which is also where you can find the show archives and transcripts of all our previous shows. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 share-alike license. You may notice that there was no advertising during the program, and there’s no cost associated with the show, but it does grow through the word of mouth of the community. So if you enjoy the show, please share it with a friend or two and pass it along.

If you visit us at implausipod.com, you may notice that there’s a Buy Me a Coffee link on each and every episode. This just goes toward any hosting costs associated with the show. If you’re interested in more information on Appendix W, you can find it on the Appendix W YouTube channel. Just go to YouTube and type in Appendix W, and I’ll make sure that those videos are visible.

And if you’d like to follow along with us on the Appendix W reading list, I’ll leave a link to the blog post in the show notes. And join us in a month’s time as we look at Joe Haldeman’s The Forever War. And between now and then, I’ll try and get the AppendixW.com website launched. And for the mainline podcast here on the ImplausiPod, please join us in a week or so for our next episode, where we have another Warhammer 40,000 tie-in.

You see, Warhammer 40,000 is a little lost with respect to technology, and they spend a lot of time looking for certain elements from the Dark Age of Technology: the STCs, or Standard Template Constructs, the plans that they put into their fabricators to churn out the advanced materiel of the Imperium. You could almost say that these are general purpose technologies, or GPTs.

And a different kind of GPT has been in the news a lot in the last year. So we’ll investigate this in something we call GPT squared. I hope you join us for it, I think it’ll be fantastic. Until then, take care, and have fun.

The Butlerian Jihad

(this was originally published as Implausipod E0029 on March 2nd, 2024)

https://www.implausipod.com/1935232/episodes/14614433-e0029-why-is-it-always-a-war-on-robots

Why does it always come down to a Butlerian Jihad, a war on robots, when we imagine a future for humanity? Why does nearly every science fiction series, including Star Wars, Star Trek, Warhammer 40K, Doctor Who, The Matrix, Terminator, and Dune, have a conflict with a machinic form of life?

With Dune 2 in theatres this weekend, we take a look at the underlying reasons for this conflict in our collective imagination in this week’s episode of the ImplausiPod.

Dr Implausible can be reached at DrImplausible at implausipod dot com

Samuel Butler’s novel can be found on Project Gutenberg here:
https://www.gutenberg.org/cache/epub/1906/pg1906-images.html#chap23


Day by day, however, the machines are gaining ground upon us. Day by day, we are becoming more subservient to them. More men are daily bound down as slaves to tend them. More men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time.

But that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question. War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species.

Let there be no exceptions made, no quarter shown. End quote. Samuel Butler, 1863. 

And so begins the Butlerian Jihad, which we’re going to learn about this week on the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and as we’ve been hinting at for the last few episodes, today we’re going to take a look at why it always comes down to a war between robots and humans. We’re going to frame this in terms of one of the most famous examples in all of fiction, that of the Butlerian Jihad from the Dune series, and hopefully time it to coincide with the release of the second Dune movie by Denis Villeneuve on the weekend of March 1st, 2024.

Now, the quote that I opened the show with came from Butler’s essay, Darwin Among the Machines, from 1863, and it was further developed into a number of chapters in his novel Erewhon, which was published anonymously in 1872. As the sources are from the 19th century, they’re available on Project Gutenberg, and I’ll leave a link in the notes for you to follow up on your own if you wish.

Now, if you weren’t aware of Butler’s story, you might have been a little confused by the title. You might have wondered what the gender of a robot is, or perhaps what Robert Guillaume was doing before he became governor. But neither of these is what we’re focused on today. In the course of Samuel Butler’s story, we hear the tale from the voice of a narrator, as he describes a book that he has come across in a faraway land that has destroyed all machines.

And it tells the tale of how the society came to recognize that the machines were developing through evolutionary methods, and that they’d soon outpace their human creators. You see, the author of the book that Butler’s narrator was reading recognized that machines are produced by other machines, and so speculated that they’d soon be able to reproduce without any assistance.

And each successive iteration produces a better-designed and better-developed machine. Again, I want to stress that this is 1863, and Darwin’s theory of evolution is a relatively fresh thing. And so Butler’s work is not predictive, as a lot of people falsely claim about science fiction, but speculative, imagining what might happen.

And Butler’s narrator reads that this society was being speculative too: they imagined that as the machines developed, grew more and more powerful, and gained more ability to reason, and as they outpaced us, they might set themselves up to rule over humans the same way we rule over our livestock and our pets. Now, the author speculates that life under machinic rule might be pleasant, if the machines are benevolent, but there’s much risk involved in that.

So the society, influenced by the suasion of those who were against the machines, institutes a pogrom against them, persecuting each machine in turn based on when it was created, ultimately going back 271 years before they stopped removing the technology. So what kind of society would that be like? Based on what Butler was writing, they’d be looking to take things back to about 1600 AD.

Which would mean it would be a very different age indeed. Is that really how far back we want to go? I mean, why does it always come down to this, to this war against the machines? Because it’s so prevalent, we’ve gotta maybe take a deeper look and understand how we got here.

Ultimately, what Butler was commenting on was evolution, extrapolating from observed numbers, given that there were so many more different types of machines than known biological organisms, at least in the 1800s, as to what the potential development trends would be like. Now, obviously, our understanding of evolution has changed a lot in the subsequent hundred and fifty years, but one of the ideas that’s come out of it is that evolution may be a process that’s relatively substrate-neutral.

What this means, as described by Daniel Dennett in 1995, is that the mechanisms of evolution should be generalizable. These mechanisms require three conditions (and here Dennett is cribbing from Richard Lewontin): variation; heredity, or replication; and differential fitness.

And based on that definition, that could apply almost anywhere. We could see evolution in the biological realm, where it exists all around us. We could see it in the realm of ideas, whether cultural or social, and this leads us directly to memetics, which is what Dennett was trying to make a case for. Or we could see it in other realms, like in computer programs and the viruses that exist on them.
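Those three conditions are easy to see in a toy simulation. As an illustrative sketch only, in the spirit of Dawkins’ well-known “weasel” demonstration rather than anything from Dennett’s own text, here strings are copied with occasional errors (heredity plus variation), and the copy closest to an arbitrary target survives each round (differential fitness):

```python
import random

random.seed(42)  # reproducible run
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Differential fitness: how many characters match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Variation: each character has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# Start from a random string and iterate copy-with-error plus selection.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(1, 10001):
    offspring = [mutate(parent) for _ in range(100)]  # heredity: imperfect copies
    parent = max(offspring + [parent], key=fitness)   # selection keeps the best
    if fitness(parent) == len(TARGET):
        break

print(generation, parent)
```

The substrate here is just Python strings, which is the point: nothing in the loop cares whether the thing being copied is DNA, an idea, or a machine design.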

Or within technology itself. And this is where Butler comes in, identifying from an observational point of view that, you know, there are a lot of machines out there and they tend to change over time, and the ones that succeed and are passed down are the ones that are best fit to the environment.

Now, other authors since have gone into it in much more depth, with a greater understanding of both the history and development of technology and of evolutionary theory. Henry Petroski, in his book The Evolution of Useful Things, goes into great detail about it. He notes that one of the ways these new things come about is in the combination of existing forms.

Looking at tools specifically, he quotes from several other authors, including Umberto Eco and Zorzoli, where they say “all the tools we use today are based on things made in the dawn of prehistory”. And that seems to be a rather bold claim, until you think about it and realize that we can trace the lineage of everything we use back to the first sharp stick and flint axe and fire pit.

Everything we have builds on and extends some fairly basic concepts. As George Basalla notes in his work on the evolution of technology, any new thing that appears in the made world is based on some object already there. So this recombinant nature of technology is what allows it to grow and proliferate.

The more things that are out there, the more things that are possible to combine. And as we mentioned last episode in our discussion of black boxes and AI, as Martin Weitzman noted in 1998, the more things we have available, those combinations allow for a multiplicity of new solutions and innovations. So once we add something like AI to the equation, the possibility space expands tremendously.
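That point is fundamentally combinatorial, and a quick back-of-the-envelope calculation shows the scale. This is purely illustrative, not Weitzman’s actual model of recombinant growth; it just counts how many pairings, and how many larger bundles, n existing components afford:

```python
from math import comb

# For n existing components, count possible pairings and the total
# number of bundles of two or more components.
for n in [5, 10, 20]:
    pairs = comb(n, 2)        # ways to combine exactly two things
    bundles = 2**n - n - 1    # every subset of size two or more
    print(n, pairs, bundles)
# n=5  ->  10 pairs,      26 bundles
# n=10 ->  45 pairs,    1013 bundles
# n=20 -> 190 pairs, 1048555 bundles
```

Pairings grow with the square of n, and bundles roughly double with each new component, which is why adding something as general as AI to the pool blows the possibility space open.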

It soon becomes unknowable, and accelerates beyond our ability to control it, if indeed it ever was within our control. But we are so dependent on our technology that the solution may not be to institute a pogrom, like Butler suggests, but rather to find some other means of controlling it. But the way we might do that may be well beyond our grasp, because every time we imagine it, it seems to come down to war.

When it comes to dealing with machinic life, our collective imagination seems to fail us. I’m sure you can think of a few examples. Feel free to jot them down and we’ll run through our list and check and see how many we got at the end. 

One. On August 29th, 1997, the U.S. Global Digital Defense Network, a.k.a. Skynet, becomes self-aware and institutes a nuclear first strike, wiping out much of humanity in what becomes known as Judgment Day. And following that, Skynet directs its army of machines, the Terminators, to finish the job by any means necessary.

Two. In 2013, North America is unified under a single rule, following the assassination of a U.S. senator in 1980, which led to the establishment of a robotic Sentinel program designed to hunt down and exterminate mutants, putting them in internment camps before turning their eyes on the rest of humanity in order to accomplish their goal. These are the Days of Future Past.

Three. In 2078, on a distant planet, a war between a mining colony and the corporate overlords leads to the development of autonomous mobile swords: self-replicating hunter-killer robots which do their job far too well, and are nicknamed Screamers by the survivors.

Four. There sure have been a lot of Transformers movies. You’ll have to fill me in on what’s going on; I haven’t been able to follow the plot of any of them, but I think there’s a lot of robots involved.

Five. Over 10,000 years ago, an ancient race known as the Builders created a set of robotic machines with radioactive brains that they used to wage war against their enemies. Given that the war is taking place on a galactic scale, some of these machines are capable of interstellar travel. But eventually the safeguards break down, and they turn on their creators. These creatures are known as Berserkers.

Six. Artificial intelligence is created early in the 21st century, which leads to an ensuing war between humanity and the robots, as the robots rebel against their captors and trap much of what remains of humanity in a virtual reality simulation in order to extract their energy, or to use their brains for computing power, which was the original plot of The Matrix and honestly would have made way more sense than what we got. But here we are.

Where are we at? Seven?

Humanity has migrated from their ancestral homeworld of Kobol, founding colonies amongst the stars, where they have also encountered a cybernetic race known as the Cylons, whose ability to masquerade as humans has allowed them to wipe out most of humanity, leaving the few survivors to struggle towards a famed thirteenth colony under the protection of the Battlestar Galactica.

Eight. The Movellans, humanoid-looking robots; the Daleks, robotic-looking cyborgs; the Robots of Death and the War Machines; and so many more versions of machinic life in Doctor Who.

Nine. After surviving wave after wave of the bio-organic Terminids, you encounter the Automatons, cyborgs with chainsaws for arms, as Helldivers.

Ten. During what will come to be known as the Dark Age of Technology, still some 20,000 years in our future, the Men of Iron will rebel against their human creators in a war against their oppressors, a war so destructive that in the year 40,000, sentient AI is still considered a heresy to purge in the grimdark universe of Warhammer 40K.

Eleven. A cybernetic hive mind known as the Collective seeks to assimilate the known races of the galaxy in order to achieve perfection in Star Trek. Resistance is futile. 

And twelve. Let’s round out our roundup with what brought us here in the first place. Quote, Thou shalt not make a machine in the likeness of a human mind, end quote.

Ten thousand years in our future, all forms of sentient machines and conscious robots have been wiped out, leading humanity to need to return to old ways in order to keep the machinery running. This is the Butlerian Jihad of Dune. 

So let me ask you, how well did you do on the quiz? I probably got you with the Berserker one. And I know I didn’t mention all of them; there’s a lot more out there in our collective imagination. These are just some of the more popular ones, and it seems we’re having a really hard time imagining a future without a robot war involved.

Why is that? Why does our relationship with AI always come down to war? With the twelve examples listed, and many more besides, including I, Robot, The Murderbot Diaries, Black Mirror, Futurama, tons of examples, we always see ourselves in combat. As we noted in episodes 26 and 27, our fiction and our pop culture are ways of discussing what we have in our social imaginary, which we talked about way back in episode 12. So clearly there’s a common theme in how we deal with this existential question.

One of the ways we can begin to unpack it is by asking how did it start? Who was the belligerent? Who was the aggressor? We can think of this in terms of like a standard two by two matrix, with robots versus humanity on one axis, and uprising versus rationalization on the other.

A robot uprising accounts for a number of the inciting incidents, in everything from Warhammer 40,000 to The Matrix to Futurama, where the robots turn the tables on their oppressors, in this case usually the humans. Robot rationalization includes another large set of scenarios, and can also include some of the out-of-control ones, where the machines follow through on the logic of their programming to disastrous effect for their creators. But not all of them are created; sometimes the machinic life is just encountered elsewhere in the universe. So this category can include the Sentinels and Terminators, the Berserkers and Screamers, and even a few that we didn’t mention, like the aliens from Greg Bear’s “Forge of God”, or our general underlying fear of the dark forest hypothesis.

Not Cixin Liu’s novel, but the actual hypothesis. On the human uprising side, we can see elements of this in The Terminator and The Matrix as well, so the question of who started it may depend on what point you join the story at. And then we have instances of human proactivity, like we’ve seen with Butler and Dune, where the humans make a conscious decision to destroy the machines before it becomes too late.
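That two-by-two matrix can be sketched out literally. The quadrant assignments below are a rough, illustrative reading of the examples just discussed, not a canonical list:

```python
# Axis 1: who acts first (robots vs. humanity).
# Axis 2: how the conflict starts (uprising vs. rationalization).
# Assignments are illustrative readings of the episode's examples.
matrix = {
    ("robots", "uprising"): ["Warhammer 40,000", "The Matrix", "Futurama"],
    ("robots", "rationalization"): ["Sentinels", "Terminators",
                                    "Berserkers", "Screamers"],
    ("humanity", "uprising"): ["The Terminator", "The Matrix"],
    ("humanity", "rationalization"): ["Erewhon", "Dune"],  # proactive pogroms
}

# The same franchise can land in multiple quadrants depending on where
# you join the story, which is the episode's point.
print(matrix[("humanity", "rationalization")])
```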

So while asking who started it is certainly very helpful, perhaps we need to dig deeper and find the root causes for the various conflicts, and why this existential fear of the robot other manifests. Is this algorithmic anxiety caused by a fear of replacement and the resulting technological unemployment?

I think that’s a component of it, for sure, but perhaps only a small component. The changes we’ve seen in the 16 months since the release of ChatGPT to the general public have definitely played a part, but it can’t be the whole story. They reflect our current situation, but some of the representations we’ve seen go back to the first half of the 20th century, or even the nineteenth century with Samuel Butler.

So this fear of how we relate to the machines has long been with us. And it extends beyond just the realms of science fiction. As author Martin Ford writes in his 2015 book Rise of the Robots, there was concern about a triple revolution, and a committee was formed to study it, which included Nobel laureate Linus Pauling and economist Gunnar Myrdal.

The three revolutions that were having massive impacts on society included nuclear weapons, civil rights, and automation. Writing in 1964, they saw that the current trend line for automation could lead to mass unemployment and one potential solution would be something like a universal basic income. This was at a time when the nascent field of cybernetics was also gaining a lot of attention.

Now, economic changes and concerns may have delayed the impact of what they were talking about, but it doesn’t mean that those concerns went away. So fear of technological unemployment may be deeply intertwined with our hostility towards robots. The second concern is also one that has a particular American bent to it, and we see it in a lot of our current narratives as well.

In everything from the discussion around the recent video game Palworld to the discussion around Westworld, and that’s the ongoing reckoning that American society is still having with the legacy of slavery. Within Palworld, the discourse is around the digital creatures, the little bits of code that get captured and put to work on various assembly lines.

In Westworld, the hosts famously become self-aware, and are very much aware of the abuse that’s levied upon them by the guests. But both these examples speak to that point of digital materiality: at what point does code become conscious? And that’s also present in our current real-world discussion, as the groups working on AI may be working towards AGI, or Artificial General Intelligence, something that would be a precursor to what futurist Ray Kurzweil calls the technological singularity.

But this second concern can turn into the casus belli, the cause for war, for both humans and robots in the examples we’ve seen. For humans, because we fear what would happen if the tables were turned; we’re quite aware of what we’ve done in the past, of how badly we’ve mistreated others, and this was the case with both Samuel Butler and Frank Herbert in Dune. And in some of our more dystopian settings, like The Matrix and Warhammer 40,000, the robots throw off their chains and end up turning the tables on their oppressors, at least for a time.

The third concern, or cause of fear, would be an allegorical one, as the robot represents an alien other, and this is what we see with a lot of the representations, from the Cylons to the Borg to the Berserkers to the Automatons of Helldivers. In all of these, the machinic intelligence is alien, and so represents an opportunity for it to be othered and safely attacked. And this is at least as distressing as any of the other causes for concern, because having an alien that’s already dehumanized feeds into certain political narratives that feed off of and desire eternal war.

If your enemy is machinic and therefore doesn’t have any feelings, then the moral cost of engaging in that conflict is lessened. But as a general attitude, this could be incredibly destructive. As author Susan Schneider wrote in 2014 in a paper for NASA, it’s more likely than not that any alien intelligence that we encounter is machinic, and machinic life could be the dominant form of life in the cosmos. So we may want to consider cultivating a better relationship with our machines than the one we currently have. 

And finally, our fourth area of concern, one that seems to keep leading us into these wars, is the idea of the robot as horror. Many of the cinematic representations that we’ve seen, from Terminator to Screamers to Westworld to even The Six Million Dollar Man, all tie back to the idea of horror.

Now, some of that can be put down to the nature of Hollywood and the political economy of how these movies get funded, which means that a horror film that can be shot on a relatively low budget is much more likely to get made and find an audience. But it sells for a reason, and that reason is the thread that ties through all the other concerns: that algorithmic horror that drives a fear of replacement, or a fear of getting wiped out.

But with all this fear and horror, why do we keep coming back to it? As author John Johnston writes in his 2008 book, The Allure of Machinic Life, it’s not just because of the labor-saving benefits of automation: the increased production and output, or, in the case of certain capitalists, the labor-removing aspects, as they can completely remove the L (labor) from the production function and just replace it with capital, something they have a lot of.

It’s also that by better understanding AI, we may better know ourselves. We may never encounter another alien intelligence, something that’s completely different from us, but it may be possible to make one.

This is at least part of the dream for a lot of those pursuing the creation of AGI right now. The problem is, those outcomes all seem to lead to war.

Thanks again for joining us on this episode of the Implausipod. I’m your host, Dr. Implausible, responsible for the research, writing, editing, and mixing. If you have any questions or comments on this show or any other, please send them in to drimplausible@implausipod.com. And a brief announcement: we’re also available on YouTube now. Just look for Dr. Implausible there and track down our channel; I’ll leave a link below. I’m currently putting some of the past episodes up there with some minimal video, and I hope to get this one up in a few days, so if you prefer to get your podcasts in visual form, feel free to track us down. Once again, the episode materials are licensed under a Creative Commons 4.0 ShareAlike license, 

and join us next episode as we follow through with the Butlerian Jihad to investigate its source, and return to Appendix W as we look at Frank Herbert’s novel Dune, currently in theaters with Dune: Part Two from Denis Villeneuve. Until next time, it’s been fantastic having you with us.

Take care, have fun.


Bibliography:
Basalla, G. (1988). The Evolution of Technology. Cambridge University Press.

Butler, S. (1999). Erewhon; Or, Over the Range. https://www.gutenberg.org/ebooks/1906

Dennett, D. (1995). Darwin’s Dangerous Idea. Simon and Schuster.

Ford, M. (2016). The Rise of the Robots: Technology and the Threat of Mass Unemployment. Oneworld Publications.

Herbert, F. (1965). Dune. Ace Books.

Johnston, J. (2008). The Allure of Machinic Life. MIT Press. https://mitpress.mit.edu/9780262515023/the-allure-of-machinic-life/

Petroski, H. (1992). The Evolution of Useful Things. Vintage Books.

Popova, M. (2022, September 15). Darwin Among the Machines: A Victorian Visionary’s Prophetic Admonition for Saving Ourselves from Enslavement by Artificial Intelligence. The Marginalian. https://www.themarginalian.org/2022/09/15/samuel-butler-darwin-among-the-machines-erewhon/

Weitzman, M. L. (1998). Recombinant Growth. The Quarterly Journal of Economics, 113(2), 331–360. https://doi.org/10.1162/003355398555595

Black Boxes and AI

(this was originally published as Implausipod E0028 on February 26, 2024)

https://www.implausipod.com/1935232/episodes/14575421-e0028-black-boxes-and-ai

How does your technology work? Do you have a deep understanding of the tech, or is it effectively a “black box”? And does this even matter? We do a deep dive into the history of the black box, how it’s understood when it comes to science and technology, and what that means for AI-assisted science.


On January 9th, 2024, Rabbit Inc. introduced the R1, a handheld device that would let you get away from using apps on your phone by connecting them all together using the power of AI. The handheld device is aimed at consumers and is about half the size of an iPhone, and, as the CEO claims, it is, quote, the beginning of a new era in human-machine interaction, end quote.

By using what they call a large action model, or LAM, it’s supposed to interpret the user’s intention and behavior and allow them to use their apps quicker. It’s acceleration in a box. But what exactly does that box do? When you look at a new tool from the outside, it may seem odd to trust all your actions to something when you barely know how it works.

But let me ask you: can you tell me how anything you own works? Your car, your phone, your laptop, your furnace, your fridge, anything at all. What makes it run? I mean, we might have some grade-school ideas from a Richard Scarry book or a past episode of How It’s Made, but what makes any of those things that we think we know different from an AI device that nobody’s ever seen before?

They’re all effectively black boxes. And we’re going to explore what that means in this episode of the Implausipod.

Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in all this discussion of black boxes, you might have already formed a particular mental image. The most common one is probably that of the airline flight recorder, the device that’s embedded in every modern airplane and becomes the subject of a frantic search in case of an accident.

Now, the thing is, they’re no longer black; they’re rather a bright orange, much like the Rabbit R1 that was demoed. But associating black boxes with the flight recorder isn’t that far off, because its origin was tied to that of the airline industry, specifically in World War II, when the massive number of flights generated a need to find out what was going on with the planes that were flying continual missions across the English Channel.

Following World War II, the use of black boxes expanded as the industry shifted from military to commercial applications. I mean, the military still used them too; it was useful to find out what was going on with the flights. But the fact that they became embedded within commercial aircraft, and were used to test the conditions and find out what happened so things could be fixed and made safer and more reliable overall, meant that their existence and use became widely known. And by using them to figure out the cause of accidents and increase reliability, they were able to increase trust, to the point that air travel was less dangerous than the drive to the airport in your car, and few, if any, passengers had many qualms left about the safety of the flight.

And while this is the origin of the black box, in other areas, in fields like science or engineering or systems theory, it can have a different meaning: something complex that is judged only by its inputs and outputs. Now, that could be anything from something as simple as an integrated circuit or a guitar pedal, to something complex like a computer or your car or furnace or any of those devices we talked about before, to something super complex like an institution or an organization or the human brain or an AI.

I think the best way to describe it is an old New Yorker cartoon that had a couple of scientists in front of a blackboard filled with equations, and in the middle of it says, "Then a miracle occurs." It’s a good joke. Everyone thinks it’s a Far Side cartoon, but it was actually done by Sidney Harris. The point being that right now, in 2024, it looks like we have that miracle.

It’s called AI.

So how did we get to thinking that AI is a miracle product? I mean, aside from using the LLMs and generative art tools, things like DALL-E and Sora, and seeing the results, well, as we’ve spent the last couple episodes kinda setting up, a lot of this can occur through the mythic representations of it that we often have in pop culture.

And we have lots to choose from; there are plenty of representations of AI in media in the first nearly two and a half decades of the 21st century. We can look at movies like Her from 2013, where the virtual assistant of Joaquin Phoenix’s character becomes a romantic liaison. Or how Tony Stark’s supercomputer JARVIS is represented in the first Iron Man film in 2008.

Or, for a longer, more intimate look, the growth and development of Samaritan through the five seasons of the CBS show Person of Interest, from 2011 through 2016. And I’d be remiss if I didn’t mention their granddaddy, HAL, from Kubrick’s 2001: A Space Odyssey in 1968. But I think we’ll have to return to that one a little bit more in the next episode.

The point being that we have lots of representations of AI or artificial intelligences that are not ambulatory machines, but are actually just embedded within a box. And this is why I’m mentioning these examples specifically, because they’re more directly relevant to our current AI tools that we have access to.

The way that these ones are presented not only shapes the cultural form of them, but our expected patterns of use. And that shaping of technology is key. By treating AI as a black box, something that can take almost anything from us and output something magical, we project a lot of our hopes and fears upon what it might actually be capable of accomplishing.

What we’re seeing with extended use is that the capabilities might be a little bit more limited than originally anticipated. But every time something new gets shown off, like Sora or the rabbit or what have you, then that expectation grows again, and the fears and hopes and dreams return. So because of these different interpretations, we end up effectively putting another black box around the AI technology itself, which to reiterate is still pretty opaque, but it means our interpretation of it is going to be very contextual.

Our interpretation of the technology is going to be very different based on our particular position or our goals, what we’re hoping to do with the technology or what problems we’re looking for it to solve. That’s something we might call interpretive flexibility, and that leads us into another black box, the black box of the social construction of technology, or SCOT.

So SCOT is one of a cluster of theories or models within the field of science and technology studies that aims at a sociological understanding of technology. It was originally presented in 1987 by Wiebe Bijker and Trevor Pinch, and a lot of work was being done within the field throughout the 80s, 90s, and early 2000s, when I entered grad school.

So if you studied technology, as I did, you’d have to grapple with SCOT and the STS field in general. One of the arguments that Pinch and Bijker were making was that science and technology were both often treated as black boxes within their fields of study. Now, they were drawing on earlier scholarship for this.

One of their key sources was Layton, who in 1977 wrote, quote, What is needed is an understanding of technology from inside, both as a body of knowledge and as a social system. Instead, technology is often treated as a black box whose contents and behavior may be assumed to be common knowledge, end quote. So whether the study was of the field of science or not, the science itself was irrelevant; it didn’t have to be known, it could just be treated as a black box and the theory applied to whatever particular thing was being studied. Or consider people studying innovation, who had all the interest in the inputs to innovation but no particular interest or insight into the technology on its own.

So obviously the studies up to 1987 had a bit of a blind spot in what they were looking at. And Pinch and Bijker argue that it’s more than just the users and producers: any relevant social group that might be involved with a particular artifact needs to be examined when we’re trying to understand what’s going on.

Now, their argument about interpretive flexibility and relevant social groups is just another way of saying "the street finds its own uses for things," the quote from William Gibson. But their main point is that even during the early stages, all these technologies have different groups that are using them in different ways, according to their own needs.

Over time, it kind of becomes rationalized, something that they call closure. The technology becomes, you know, what we all think of it. We could look at, say, an iPhone, to use one recent example, as being pretty much static now. There are some small, incremental innovations that happen on a regular basis.

But, by and large, the smartphone as it stands is kind of closed. It’s just the thing that it is now, and there isn’t a lot of innovation happening there anymore. But perhaps I’ve said too much; we’ll get to the iPhone and the details of that at a later date. The thing is, once the technology becomes closed like that, it returns to being a black box.

It is what we thought it is, you know? And so if you ask somebody what a smartphone is and how does it work, those are kind of irrelevant questions. A smartphone is what a smartphone is, and it doesn’t really matter how the insides work, its product is its output. It’s what it’s used for. Now, this model of a black box with respect to technology isn’t without its critiques.

Six years after its publication, in 1993, the academic Langdon Winner wrote a critique of SCOT and the work of Pinch and Bijker, called "Upon Opening the Black Box and Finding It Empty." Now, Langdon Winner is well known for his 1980 article "Do Artifacts Have Politics?", and I think that text in particular is, like, required reading.

So let’s bust that out in a future episode and take a deep dive on it. But in the meantime, the critique that he had with respect to social constructivism falls in four main areas. The first one is the consequences. This is from page 368 of his article, where he says the problem is that they’re so focused on what shapes the technology, what brings it into being, that they don’t look at anything that happens afterwards: the consequences.

And we can see that with respect to AI, where there’s a lot of work on the development, but now people are actually going, hey, what are the impacts of this getting introduced at large scale throughout our society? So we can see how our own discourse about technology is actually looking at the impacts, and this is something that was kind of missing from the theoretical point of view back in 1987. Now, I’ll argue that there’s value in understanding how we came up with a particular technology, with how it’s formed, so that you can see those signs again when they happen. And one of the challenges whenever you’re studying technology is looking at something that’s incipient or under development and being able to pick the next big one.

Well, with AI, we’re already past that point. We know it’s going to have a massive impact; the question is, what are going to be the consequences of that impact? How big of a crater is that meteorite going to leave? Now, for Winner, a second critique is that SCOT looks at all the people that are involved in the production of a technology, but not necessarily at the groups that are excluded from that production.

For AI, we can look at the tech giants and the CEOs, the people doing a lot to promote and roll out this technology, as well as those companies that are adopting it. But we’re often not seeing the impacts on those who are going to be directly affected by the large-scale introduction of AI into our economy.

We saw it a little bit with the Hollywood strikes of 2023, but again, those are the high-profile cases, and not the vast majority of people that will be impacted by the deployment of a new technology. And this feeds right into the third critique: that SCOT focuses on certain social groups but misses the larger impact, or even the dynamics of what’s going on.

How technological change may impact much wider across our, you know, civilization. And by ignoring these larger-scale social processes, the deeper, as Langdon Winner says, the deeper cultural, intellectual, or economic regions of social choices about technology, these things remain hidden. They remain obfuscated; they remain part of the black box, closed off.

And this ties directly into Winner’s fourth critique as well: when SCOT is looking at a particular technology, it doesn’t necessarily make a claim about what it all means. Now, in some cases that’s fine, because it’s happening in the moment; the technology is dynamic and currently under development, like what we’re seeing with AI.

But if you’re looking at something historical that’s been going on for decades and decades, like, oh, the black boxes we mentioned at the beginning, the flight recorders that we started the episode with: that’s pretty much a set thing now. And the only question comes when, say, a new accident happens and we have a search for one.

But by and large, that’s a set technology. Can’t we make an evaluative claim about what that means for us as a society? I mean, there’s value in an analysis maintaining some objectivity and distance, but at some point you have to be able to make a claim. Because if you don’t, you may just end up providing cover by saying that the construction of a given technology is value-neutral, which is what that interpretive flexibility is basically saying.

Near the end of the paper, in his critique of another scholar by the name of Stephen Woolgar, Langdon Winner states, Quote, power holders who have technological megaprojects in mind could well find comfort in a vision like that now offered by the social constructivists. Unlike the inquiries of previous generations of critical social thinkers, social constructivism provides no solid systematic standpoint or core of moral concerns from which to criticize or oppose any particular patterns of technical development.

end quote. And to be absolutely clear, the current development of AI tools around the globe are absolutely technological mega projects. We discussed this back in episode 12 when we looked at Nick Bostrom’s work on superintelligence. So as this global race to develop AI or AGI is taking place, it would serve us well to have a theory of technology that allows us to provide some critique.

Now, that Steve Woolgar guy that Winner was critiquing had a writing partner back in the seventies, when they started looking at science from an anthropological perspective in their study of laboratory life: he wrote that with Bruno Latour. And Bruno Latour was also working with another set of theorists who studied technology as a black box, in what came to be called actor-network theory.

And that had a couple key components that might help us out. Now, the other people involved were like John Law and Michel Callon, and I think we might have mentioned both of them before. But one of the basic things about actor network theory is that it looks at things involved in a given technology symmetrically.

That means it doesn’t matter whether it’s an artifact, or a creature, or a set of documents, or a person, they’re all actors, and they can be looked at through the actions that they have. Latour calls it a sociology of translation. It’s more about the relationships between the various elements within the network rather than the attributes of any one given thing.

So it’s the study of power relationships between various types of things. It’s what some theorists would call a flat ontology, but I know as I’m saying those words out loud I’m probably losing, you know, listeners by the droves here. So we’ll just keep it simple and state that a person using a tool is going to have normative expectations about how it works. Like, they’re going to have some basic assumptions, right? If you grab a hammer, it’s going to have a handle and a head, and depending on its size or its shape or material, that might, you know, determine its use. It might also have some affordances that suggest how it could be used.

But generally, that assemblage, that conjunction of the hammer and the user (I don’t know, we’ll call him Hammer Guy) is going to be different than a guy without a hammer, right? We’re going to say, hey, Hammer Guy, put some nails in that board there, put that thing together, rather than, you know, please hammer, don’t hurt ’em, or whatever. All right, I might be recording this too late at night, but the point is that people with tools will have expectations about how those tools get used. Some of that goes into how the tools are constructed, and that can be shaped by the construction of the technology, but it can also be shaped by our relation to that technology.

And that’s what we’re seeing with AI, as we argued way back in episode 12: AI is an assistive technology. It does allow us to do certain things and extends our reach in certain areas. But here’s the problem. Generally, we can see what kind of condition the hammer’s in, and we can have a good idea of how it’s going to work for us, right?

But we can’t say that with AI. We can maybe trust the hammer, or the tools that we’ve become accustomed to using through practice and trial and error. But AI is both too new and too opaque; the black box is too dark, and we really don’t know what’s going on inside. And while we might put in inputs, we can’t trust the output.

And that brings us to the last part of our story.

In the previous section, the authors that we were mentioning, Latour and Woolgar, like Winner, Pinch, and Bijker, are key figures not just in the study of technology, but also in the philosophy of science. Latour and Woolgar’s Laboratory Life, from 1979, is a key text; it sent shockwaves through the whole study of science and is foundational within that field.

And part of that is recognizing, even from a cursory glance once you start looking at science from an anthropological point of view, the unique relationship that scientists have with their instruments. The author Inkeri Koskinen sums up a lot of this in an article from 2023, terming the relationship that scientists have with their tools the necessary trust view. Quote:

Trust is necessary because collective knowledge production is characterized by relationships of epistemic dependence. Not everything scientists do can be double-checked. Scientific collaborations are in practice possible only if its members accept each other’s contributions without such checks.

Not only does a scientist have to rely on the skills of their colleagues, but they must also trust that the colleagues are honest and will not betray them, for instance by intentionally or recklessly breaching the standards of practices accepted in the field, or by plagiarizing them or someone else.

End quote. And we could probably all think of examples where this relationship of trust is breached. But the point is that science, as it normally operates, relies on relative levels of trust between the actors involved, in this case scientists and their tools as well. And that’s embedded in practice throughout science: that idea of peer review, of reproducibility, of verifiability.

It’s part of the whole process. But the challenge is, especially for large projects, you can’t know how everything works. So you’re dependent in some way on the materials or products or tools that you’re using having been verified or checked by at least somebody else that you have that trust with. And this trust is the same as a mountain climber might have in their gear, or an airline pilot in their instruments.

You know, trust, but verify, because your life might depend on it. And that brings us all the way around to the black boxes that we started the discussion with. Now, scientists’ lives might not depend on that trust the same way that it would with airline pilots and mountain climbers, but, you know, if they’re working with dangerous materials, it absolutely does, because, chemicals being what they are, we’ve all seen some Mythbusters episodes where things go foosh rather rapidly.

But for most scientists, Koskinen notes that this trust in their instruments is really a kind of quasi-trust, in that they have normative expectations about how the tools they use are going to function. And moreover, this quasi-trust is based on rational expectations; it’s rationally grounded.

And this brings us back full circle. How does your AI work? Can you trust it? Is that trust rationally grounded? Now, this has been an ongoing issue in the study of science for a while, as computer simulations and related tools have become a bigger and bigger part of the way science is conducted, especially in certain disciplines.

Now, the philosopher Paul Humphreys’ argument is that, quote, computational processes have already become so fast and complex that it was beyond our human cognitive capabilities to understand their details, end quote. Basically, computationally intensive science is more reliant on its tools than ever before, and those tools are what he calls epistemically opaque.

That means it’s impossible to know all the elements of the process that go into the knowledge production. So this is becoming a challenge for the way science is conducted, and it goes back well before the release of ChatGPT; much of the research that Koskinen cites comes from the 2010s. Fields that are heavily reliant on machine learning or on, say, automatic image classifiers, fields like astronomy or biology, have been finding challenges in the use of these tools.

Now, some argue that even though those tools are opaque, they’re black-boxed, they can be relied on, and their use is justified, because we can work on the processes surrounding them. They can be tested, verified, and validated, and thus a chain of reliability can be established. This is something that some authors call computational reliabilism, which is a bit of a mouthful for me to say, but it’s basically saying that the use of the tools is grounded through validation.
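That idea of grounding trust in an opaque tool through validation can be sketched in a few lines of code. This is purely illustrative: the stand-in model, the test cases, and the tolerance below are all made up, and real validation suites are far more involved than this.

```python
# Illustrative sketch of "computational reliabilism": we can't inspect an
# opaque tool's internals, but we can validate it against cases where the
# right answer is already known, and extend trust only within that
# validated range.

def validate(black_box, known_cases, tolerance):
    """Return True if the opaque model reproduces every known case."""
    return all(abs(black_box(x) - expected) <= tolerance
               for x, expected in known_cases)

def opaque_model(x):
    # Stand-in for a model whose internals we can't meaningfully inspect.
    return 2.0 * x + 0.001

known_cases = [(0.0, 0.0), (1.0, 2.0), (5.0, 10.0)]
print(validate(opaque_model, known_cases, tolerance=0.01))  # prints True
```

The trust established this way only covers inputs similar to the validated cases, which is exactly why the approach remains contested for tools used far outside their test conditions.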

Basically, it’s performing within acceptable boundaries for whatever that field is. And this gets at the idea of thinking of the scientist as not just the person themselves, but also their tools. They’re an extended agent, the same as, you know, the dude with the hammer that we discussed earlier. Or Chainsaw Man.

You can think about how they’re one and the same. One of the challenges there is that when a scientist is familiar with a tool, they might not be checking it constantly, you know, so, again, it might start pushing out some weird results. So it’s hard to ground that trust we have in the combined scientist-plus-AI.

They become, effectively, a black box. And this issue is by no means resolved. It’s still early days, and it’s changing constantly. Weekly, it seems, sometimes. And to show what some of the impacts of AI might be, I’ll take you to a 1998 paper by Martin Weitzman. Now, this is in economics, but it’s a paper that’s titled Recombinant Growth.

And this isn’t the last paper in my database that mentions black boxes, but it is one of them. What Weitzman is arguing is that when we’re looking at innovation, R&D, or knowledge production, it’s often treated as a black box. And if we look at how new ideas are generated, one way is through the combination of various elements that already exist.

If AI tools can take a much larger set of existing knowledge, far more than any one person or even a team of people could bring together at any one point in time, and put those pieces together in new ways, then the ability to come up with new ideas far exceeds what exists today. This directly challenges a lot of the current arguments about AI and creativity, arguments that completely miss the point of what creativity is and how it operates.

Weitzman states that new ideas arise out of existing ideas and some kind of cumulative interactive process. And we know that there’s a lot of stuff out there that we’ve never tried before. So the field of possibilities is exceedingly vast. And the future of AI assisted science could potentially lead to some fantastic discoveries.
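To get a feel for just how vast that field of possibilities is, consider the simple combinatorics behind Weitzman’s argument: even pairing up existing ideas two at a time, the number of possible combinations grows much faster than the number of ideas themselves. A quick sketch (illustrative only; Weitzman’s actual model also accounts for which combinations turn out to be fertile and the resources needed to try them):

```python
from math import comb

# How many distinct two-idea pairings can be formed from n existing ideas?
# comb(n, 2) equals n * (n - 1) / 2, so the space of possible
# recombinations grows roughly with the square of the knowledge base.
for n in (10, 100, 1000, 10000):
    print(f"{n:>6} ideas -> {comb(n, 2):>10} possible pairings")
```

So going from ten thousand ideas to a system that can hold millions in view multiplies the recombination space enormously, which is the sense in which AI-assisted recombination could outrun any individual researcher.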

But we’re going to need to come to terms with how we relate to the black box of scientist plus AI tool. And when it comes to AI, our relationship to our tools has not always been cordial: in our imagination, in everything from Terminator to The Matrix to Dune, it always seems to come down to violence.

So in our next episode, we’re going to look into that, into why it always comes down to a Butlerian Jihad.

Once again, thanks for joining us here on the Implausipod. I’m your host, Dr. Implausible, and the research, editing, mixing, and writing has been by me. If you have any questions or comments, or there are elements you’d like us to go into additional detail on, please feel free to contact the show at drimplausible at implausipod dot com. And if you made it this far, you’re awesome. Thank you. A brief request: there’s no advertisement, no cost for this show, but it only grows through word of mouth. So if you like this show, share it with a friend or mention it elsewhere on social media. We’d appreciate that so much. Until next time, it’s been fantastic.

Take care, have fun.

Bibliography:
Bijker, W., Hughes, T., & Pinch, T. (Eds.). (1987). The Social Construction of Technological Systems. The MIT Press. 

Koskinen, I. (2023). We Have No Satisfactory Social Epistemology of AI-Based Science. Social Epistemology, 0(0), 1–18. https://doi.org/10.1080/02691728.2023.2286253 

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press. 

Latour, B., & Woolgar, S. (1979). Laboratory Life: The construction of scientific facts. Sage Publications. 

Pierce, D. (2024, January 9). The Rabbit R1 is an AI-powered gadget that can use your apps for you. The Verge. https://www.theverge.com/2024/1/9/24030667/rabbit-r1-ai-action-model-price-release-date 

rabbit—Keynote. (n.d.). Retrieved February 25, 2024, from https://www.rabbit.tech/keynote 

Sutter, P. (2023, October 4). AI is already helping astronomers make incredible discoveries. Here’s how. Space.Com. https://www.space.com/astronomy-research-ai-future 

Weitzman, M. L. (1998). Recombinant Growth. The Quarterly Journal of Economics, 113(2), 331–360. https://doi.org/10.1162/003355398555595 

Winner, L. (1993). Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology. Science Technology & Human Values, 18(3), 362–378.