Appendix W 04: Dune

(this was originally released as Implausipod episode 30 on March 11, 2024)

https://www.implausipod.com/1935232/episodes/14666807-e0030-appendix-w-04-dune


With the release of Dune: Part Two in cinemas, we return to Appendix W with a look at Frank Herbert’s original novel from 1965. Dune has had a massive influence on the Warhammer 40,000 universe in many ways, especially when looking at the original release of the Rogue Trader game in 1987, in everything from the weapons and wargear, to space travel and technology, to the organization of the Imperium itself. Join us as we look at some of those connections.


Since its release in 1965, Dune has had a long and far-reaching impact on popular culture, inspiring science fiction of all kinds, including direct adaptations for film and television, and perhaps a non-zero amount of inspiration for the first Star Wars film as well. But one of its biggest impacts has been on the development of the Warhammer 40,000 universe.

So with the release of Denis Villeneuve’s Dune: Part Two in cinemas on March 1st, 2024, I’d like to return to a series on the podcast we call Appendix W and look at Frank Herbert’s original novel Dune from 1965 in this episode of the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. When we first started talking about Appendix W in the early days of the podcast back in September 2022, I had posted a list, based on one I had put up on the blog a year prior, of some of the foundational titles for the Warhammer 40,000 universe.

Now, Warhammer 40,000 is the grimdark gothic sci-fi series published by Games Workshop. The Warhammer 40,000 universe was originally introduced in 1987 with a version they called Rogue Trader, which has become affectionately known as the Blue Book, and I think I still have my rather well-used and worn copy that I picked up in the summer of 1988 on a band trip.

For the most part, Warhammer 40,000 is a miniatures war game, though the Rogue Trader version had a lot more in common with Dungeons and Dragons, and there are some roleplaying elements in there. The intellectual property now appears in everything from video games, to action figures, to merchandise of all sorts, to web shorts, and a massive amount of fiction set in that universe.

As primarily a miniatures war game, it sits as a niche of a niche with respect to the various nerd fandoms, operating at a level far below Star Wars or Star Trek. But you might’ve heard more about it recently, with rumors of an Amazon Prime series and Henry Cavill, the former Superman and Witcher himself, being behind the scenes on that one, or talking about it positively on the various talk shows he’s appeared on. Other fans include people like Ed Sheeran, who’s been spotted building Warhammer model kits backstage at his concerts. By and large, despite its popularity, it’s managed to stay relatively under the radar compared to some of the other series out there with respect to mainstream attention.

It is what it is. Now, the material isn’t necessarily something that’s gotten a lot of scrutiny in the past, and that’s part of what we’re doing here on the ImplausiPod, especially with the Appendix W series. The goal of the Appendix W series is to look at some of those sources of inspiration that got folded into the development of Warhammer 40,000.

And for those unfamiliar, what is Warhammer 40,000? Well, it’s a nightmare gothic future where humanity has fallen, basically. They’re still living with high technology that they no longer fully understand how to build and maintain. They are living in the shadows of their ancestors. Humanity has spread across the galaxy, across untold millions of planets, united under an emperor in the Imperium of Man, beset by a civil war nearly 10,000 years in the past that tore the empire apart, and now facing foes on all sides, with alien races both ancient and new vying with humanity for control of the galaxy.

Humanity is held together in this universe by a vast interstellar bureaucracy that redefines the word Byzantine, and much of humanity lives on hive worlds, where massive cities cover the entire surface of a planet.

Ultimately, life for most of humanity in the Warhammer 40,000 universe is what Hobbes would call poor, nasty, brutish, and short. It’s not solitary by any means, there are way too many people around for that to be the case, but still. Now, as we covered earlier in our previous episodes on Appendix W, Games Workshop is obviously a British company, and there is a particular British flavor to a lot of the sources that Warhammer 40,000 drew inspiration from.

And we’ve seen that in some of the sources that we’ve already looked at, like Space: 1999. But even though Frank Herbert is an American author, Dune has had such an impact on the development of sci-fi since its release that it definitely shows up in Warhammer 40,000 as well. Now, I’m going to lay out the evidence here throughout the rest of this episode.

You can take it or leave it as you see fit, but in terms of structure, I’d like to follow what we’ve done in previous Appendix W episodes and look at things like the military examples within the book. Now, not all the sci-fi influences that we list in Appendix W are military ones, of course, but as it’s a military war game, that’s a big part of it.

Then we’ll look at other elements of technology, and then cultural elements as well. A lot of Dune’s impact on the Warhammer 40,000 universe extends outside of the miniatures war game itself into the larger structure of the setting, so we’ll take a brief look at those too, even though that isn’t our focus.

And then even a work like Dune didn’t appear out of nothing, ex nihilo, so we’ll look at some of the other sources that were out there that inspired Dune itself. And then I’ll wrap up the episode with a brief discussion of the future of Appendix W, so stay tuned.

Now, looking at a work like Dune, you might think that the main source of inspiration is the planet Arrakis itself, with the hostile environment and the giant worms and everything. That’s actually one of the least influential elements. We do see the appearance of what Warhammer 40,000 calls death worlds, planets so hostile to life that they serve as recruiting grounds for various troops within the setting, including Imperial Guard, sorry, Astra Militarum regiments like the Tallarn Desert Raiders.

But the biggest influence from Dune is the existence of the Empire and the Emperor. Within the book, the Emperor is an active participant in the machinations taking place in the empire he controls. Whereas in Warhammer 40,000, the Emperor is a near-godlike figure who is barely kept alive by the arcane technology of the Golden Throne, where he has been placed for the last 10,000 years since suffering a near-mortal wound in combat.

In Warhammer 40,000, the Emperor is not well, but his psychic power serves as a beacon that allows navigation throughout the rest of the galaxy for those who are attuned to it. But despite that difference, the other main takeaway from Dune is that the Emperor uses his legions in order to maintain control.

Within Dune, the Emperor lends out his personal guard, the Sardaukar, to engage in combat on behalf of the Harkonnens against the Atreides. Quoting from the glossary included at the back of the original Dune novel, the Sardaukar are, quote, the soldier-fanatics of the Padishah Emperor. They were men from an environmental background of such ferocity that it killed six out of thirteen persons before the age of eleven.

Their military training emphasized ruthlessness and a near-suicidal disregard for personal safety. They were taught from infancy to use cruelty as a standard weapon, weakening opponents with terror. Within Warhammer 40,000, when the Emperor was still active, he had, of course, 20 legions of his Space Marines, the Adeptus Astartes, who were loyal to him.

Two of those legions were declared excommunicado and stricken from the records, and another nine ended up turning traitor in a civil war known as the Horus Heresy. But the tie runs very deep. I mean, both of these draw on some Roman influence, obviously, but still, the linkage directly from Dune to Warhammer 40,000 is strong, and much like the Roman Empire, both of these have the vast bureaucracy that I mentioned earlier.

Within Dune, of course, there are the various noble houses that the Emperor plays off against each other, like the Harkonnens and the Atreides, but there are many more besides. Within Warhammer 40,000, this can often be seen in the various governors of planets or systems, who are given a large amount of latitude due to the nature of space travel and the chance that systems could go without communications for hundreds or thousands of years. And the final major linkage would most likely be the religious one. Within Dune, it’s the role that the Bene Gesserit play behind the scenes, with their machinations taking place over decades or even thousands of years.

Within Warhammer 40,000, it’s the role of the Ecclesiarchy, the Imperial Cult, that reveres the Emperor as godlike. And as I’m saying this, I realize I’m only talking about the impact of the first Dune novel on Warhammer 40,000, and not the series as a whole. So as we look at later books as part of Appendix W, we’ll see how some of those other linkages come into play, both in how Warhammer 40,000 looked at launch and how it’s developed subsequently.

But for right now, we’ll just look at the impact that the Bene Gesserit have on the storyline within the novel. Now, despite all these deep linkages that really inform the setting, it’s with respect to the military technology that we see the influence that Dune really had on Warhammer 40,000. Despite all the advanced technology in the book, oddly enough it’s a defensive item that comes to the forefront.

One of the conceits that we see with Dune is that a lot of the combat takes place with melee weapons, with swords and knives. The reason for that is the shields. Reading again from the appendix in the back of the original Dune novel, it describes the defensive shields as, quote, the protective field produced by a Holtzman generator.

This field derives from phase one of the suspensor-nullification effect. A shield will permit entry only to objects moving at slow speeds. Depending on setting, this speed ranges from six to nine centimeters per second, and can be shorted out only by a shire-sized electric field.

These are the shields that were visible early on in both movie adaptations, with the fight training between Gurney Halleck and Paul Atreides, the ones that made them both look like fighting Roblox characters in David Lynch’s 1984 adaptation. Within Warhammer 40,000, we can see evidence of those in the refractor fields that are widely available to various members of the Imperial forces.

These are fields that distort the image of the wearer and then deflect any incoming attacks in a flash of light. Within the Dune universe these are so widely available that even the common soldiery will have them, though in Warhammer 40,000 they’re a little bit rarer, but as we said, it’s a fallen empire.

The other commonly available tool of the soldiery is the lasgun, which is described again in the appendix as a continuous-wave laser projector. Its use as a weapon is limited in a field-generator-shield culture because of the explosive pyrotechnics, technically subatomic fusion, created when its beam intersects a shield.

So even though they’re commonly available, they’re not widely used, because hitting somebody who is wearing a shield with one is like setting off a small nuke. And within Dune, those nukes, or atomics, remain one of the most powerful weapons available to the various houses and factions, to the extent that they’re kept under strong guard and rarely if ever used.

In fact, there’s a proscription on their use against human combatants. This is why Paul’s use of atomics against the mountain range during the final assault doesn’t provoke sanctions from the other houses. Those sanctions could be as severe as planetary destruction, which in Warhammer 40,000 would be called Exterminatus, even though it’s not typically framed as being done by nukes. There are a number of other weapons that show up in various ways in Dune that also make their way into the Warhammer 40,000 universe, everything from the sonic attacks of the weirding modules to the crysknives used in ritual combat. And we can see other technological elements as well: the Fremen stillsuits, elements of which show up in the Space Marines’ power armor in 40K; the look and feel of the mining machines showing up in the massive war machines of the 41st millennium, like the Baneblade or the Leviathan or the Capitol Imperialis; and even the ornithopters themselves, the flapping-wing flying machines that appear so prominently in every adaptation of Dune.

All of these will appear at some point within the 41st millennium, even if they’re not present within Rogue Trader at launch in 1987. But it’s more than just the technology. It’s more than just the Emperor and his legions. It’s more than just the psychic abilities, which we barely even touched on. There are two essential elements that deeply tie the Warhammer 40,000 universe to Dune.

And those two elements are two groups of individuals with very specific sets of skills: the Mentats and the Navigators of the Spacing Guild. Now, the Mentats are basically humans trained as computers to replace the technology that was wiped out in the Butlerian Jihad in the prehistory of the Dune universe.

For those just joining us, we covered the Butlerian Jihad in depth in the previous episode, episode 29. It was basically a pogrom against thinking machines that resulted in the destruction of all artificial intelligence, robotics, and even simple computers. Within Warhammer 40,000, the Butlerian Jihad can be seen in the war that took place against the Men of Iron and led to the Dark Age of Technology, again in the prehistory of that universe. And while the Mentats themselves aren’t as directly prevalent, because obviously machines still exist, the attitude towards technology, treating it as a religious element rather than something that’s known and understood, is widely prevalent throughout the universe.

The final element is the Spacing Guild. Within the Dune universe, the spice that’s only available on Dune, the melange, allows the Navigators to gain prescience and steer the ships as the Holtzman drives fold space and move them rapidly through the stars.

Over time, through their exposure to the melange, the Navigators become something that is no longer entirely human. Whereas in the 41st millennium, the Navigators are outright mutants to begin with, whose psychic abilities allow them to see the light cast by the Emperor on Terra, the Astronomican, which serves as a lighthouse to guide everybody through the shadows of the warp.

Now, both of these are mentioned in Rogue Trader in 1987, but they show up much more commonly outside the confines of the miniatures board game where much of the action takes place. They’re prevalent in the fiction and a lot of the lore surrounding the game, even though they rarely function within it, at least within the confines of the Warhammer 40,000 game proper.

Now, Games Workshop has leveraged the IP into a number of different realms, including game systems like Necromunda, Battlefleet Gothic, and their various epic-scale war games. So some of those elements are more common in certain other situations, but the linkage between the two, between Dune and 40K, is absolutely clear.

Now, as I said at the outset, Dune had a massive influence on not just Warhammer 40,000, but basically sci-fi in general. Since its release, it spawned five sequels by Frank Herbert himself, which extended the story, and then Brian Herbert, Frank Herbert’s son, and Kevin Anderson have written subsequent stories within the same universe.

The galactic empire has been a common element throughout science fiction, especially since then, most notably within the works of George Lucas and the Star Wars series. I believe Lucas has stated at least somewhere that Dune was a partial source of inspiration, though some contest that it’s much more than partial, and that there are sixteen points of similarity between the Dune novels and the original Star Wars film.

I think anybody reading the original novel and then watching the film may draw similar conclusions. But influence is a funny thing, and it works both ways, because just as Dune inspired any number of works, including massive franchises like Star Wars and Warhammer 40,000, Dune was in turn inspired by a number of sci-fi works that were written well in advance of its publication.

There are at least five works or series that were published before Dune came out that had elements that appear within the Dune stories. For the record, Dune was published as serials in ’63 and ’64, and came out as the full novel in 1965. Now, the first link, obviously, is Asimov’s Foundation, published as short stories in the 1940s, and then as novels in the early 1950s.

Here we’re dealing with the decay of an already existing galactic empire, and by using math and sociology as a form of prescience, which is the same ability that Paul and the Bene Gesserit have, they’re able to predict the future and steer the outcome into a more desirable form. Does that sound familiar?

Asimov calls this psychohistory, and I’m sure if you’re watching the current TV series you’re well aware of that, but wait, there’s more. Next up is the Lensman series, written by E. E. “Doc” Smith, starting with Triplanetary, which was published in 1948. I mean, there’s aliens and stuff in it, but there’s also a long-range breeding program on certain human bloodlines in order to bring about their latent psychic abilities.

And then they’re tested with a device called the Lens, which can cause pain to people who aren’t psychically attuned to it, which, again, sounds familiar. Third up would be the Instrumentality series by Cordwainer Smith. Now, there’s a novel, Norstrilia, which was originally published after Dune came out, but the short stories from the series came out starting in 1955 and through the early 1960s.

In it, space travel is only made possible by a drive that can warp space, and a guild of mutated humans who are able to see the path between the stars to get humanity to where they need to be. In addition to that, the rulers of Earth are a number of noble houses that are continually feuding amongst themselves, and through various technologies they are extremely long-lived, almost effectively immortal.

Now, we’ve touched on some of that with the Instrumentality before, back in episode 18, and we will be visiting the Instrumentality again, at least twice more, in Appendix W, with a look at Scanners Live in Vain and then the Instrumentality series as a whole. So if you’re interested in more on that, go check out that episode and stay tuned for more.

Now, even the fighting around the giant spice harvesters has some precedent. In 1960, Keith Laumer published the first Bolo short story. In it, 300-ton tanks are controlled by sentient AIs, and the stories are about how the fighting in and around those tanks goes. But of course, we know that there’s no AI in the Dune universe because of the Butlerian Jihad.

Which Herbert got from Samuel Butler, who wrote about it in 1863 and then published it as part of a novel in 1872, which we talked about last episode and mentioned earlier. So, of course, this influence dates back over 90 years before Dune came out. And, of course, the granddaddy of them all is probably Edgar Rice Burroughs’ Warlord of Mars.

Now apparently, according to an interview with Brian Herbert, the Dune series was originally proposed to take place on Mars, but it was decided against because of the cultural associations that we have with the Red Planet. And some of those associations obviously come from the tales that came before it.

Now, in addition to the sci-fi influences, there are other real-world influences, like the stories of Lawrence of Arabia, as well as Frank Herbert’s own observations of the sand dunes on the Oregon coast and the reclamation project that was taking place there to bring back some of the land from the desert.

So all of these and more went into the creation of Dune. Now, don’t get me wrong, Dune is an amazing creative work, and it draws all these elements and others together, more than we’ve mentioned. It’s unique and interesting, and that’s why it’s as timeless as it is. But everybody draws influences from multiple places.

The creativity is in how it all gets put together. So we will continue exploring that creativity of both the Dune series and the Warhammer 40,000 series in episodes to come.

Once again, thank you for joining us on the ImplausiPod. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausipod dot com, which is also where you can find the show archives and transcripts of all our previous shows. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 share-alike license. You may notice that there was no advertising during the program, and there’s no cost associated with the show, but it does grow through the word of mouth of the community. So if you enjoy the show, please share it with a friend or two and pass it along.

If you visit us at implausipod.com, you may notice that there’s a Buy Me a Coffee link on each and every episode. This just goes towards any hosting costs associated with the show. If you’re interested in more information on Appendix W, you can find that on the Appendix W YouTube channel. Just go to YouTube and type in Appendix W, and I’ll make sure that those are visible.

And if you’d like to follow along with us on the Appendix W reading list, I’ll leave a link to the blog post in the show notes. Join us in a month’s time as we look at Joe Haldeman’s The Forever War. And between now and then, I’ll try and get the AppendixW.com website launched. And for the mainline podcast here on the ImplausiPod, please join us in a week or so for our next episode, where we have another Warhammer 40,000 tie-in.

You see, Warhammer 40,000 is a little lost with respect to technology, and they’ll spend a lot of time looking for elements from the Dark Age of Technology: the STCs, or Standard Template Constructs, the plans that they put in their fabricators to churn out the advanced material of the Imperium. You could almost say that these are general purpose technologies, or GPTs.

And a different kind of GPT has been in the news a lot in the last year. So we’ll investigate this in something we call GPT squared. I hope you join us for it, I think it’ll be fantastic. Until then, take care, and have fun.

The Butlerian Jihad

(this was originally published as Implausipod E0029 on March 2nd, 2024)

https://www.implausipod.com/1935232/episodes/14614433-e0029-why-is-it-always-a-war-on-robots

Why does it always come down to a Butlerian Jihad, a war on robots, when we imagine a future for humanity? Why does nearly every science fiction series, including Star Wars, Star Trek, Warhammer 40K, Doctor Who, The Matrix, Terminator, and Dune, have a conflict with a machinic form of life?

With Dune 2 in theatres this weekend, we take a look at the underlying reasons for this conflict in our collective imagination in this week’s episode of the ImplausiPod.

Dr Implausible can be reached at DrImplausible at implausipod dot com

Samuel Butler’s novel can be found on Project Gutenberg here:
https://www.gutenberg.org/cache/epub/1906/pg1906-images.html#chap23


Day by day, however, the machines are gaining ground upon us. Day by day, we are becoming more subservient to them. More men are daily bound down as slaves to tend them. More men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time.

But that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question. War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species.

Let there be no exceptions made, no quarter shown. End quote. Samuel Butler, 1863. 

And so begins the Butlerian Jihad, which we’re going to learn about this week on the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and as we’ve been hinting at for the last few episodes, today we’re going to take a look at why it always comes down to a war between robots and humans. We’re going to frame this in terms of one of the most famous examples in all of fiction, that of the Butlerian Jihad from the Dune series, and hopefully time it to coincide with the release of the second Dune movie by Denis Villeneuve on the weekend of March 1st, 2024.

Now, the quote that I opened the show with came from Butler’s essay, Darwin Among the Machines, from 1863, and it was further developed into a number of chapters in his novel Erewhon, which was published anonymously in 1872. As the sources are from the 19th century, they’re available on Project Gutenberg, and I’ll leave a link in the notes for you to follow up on your own if you wish.

Now, if you weren’t aware of Butler’s story, you might have been a little confused by the title. You would have been wondering what the gender of a robot is, or perhaps what Robert Guillaume was doing before he became governor. But neither of these is what we’re focused on today. In the course of Samuel Butler’s story, we hear the tale from the voice of a narrator, as he describes a book that he has come across in this faraway land that has destroyed all machines.

And it tells the tale of how the society came to recognize that the machines were developing through evolutionary methods, and that they’d soon outpace their human creators. You see, the author of the book that Butler’s narrator was reading recognized that machines are produced by other machines, and so speculated that they’d soon be able to reproduce without any assistance.

And each successive iteration produces a better-designed and better-developed machine. Again, I want to stress that this is 1863, and Darwin’s theory of evolution is a relatively fresh thing. And so Butler’s work is not predictive, as a lot of people falsely claim about science fiction, but speculative, imagining what might happen.

And Butler’s narrator reads that this society was being speculative too: they imagined that as the machines developed, growing more and more powerful and more able to reason, they would outpace us and might set themselves up to rule over humans the same way we rule over our livestock and our pets. Now, the author speculates that life under machinic rule may be pleasant, if the machines are benevolent, but there’s much risk involved in that.

So the society, influenced by the suasion of those who are against the machines, institutes a pogrom against them, persecuting each one in turn based on when it was created, ultimately going back 271 years before they stopped removing the technology. So what kind of society would that be like? Based on what Butler was writing, they’d be looking to take things back to about 1600 AD.

Which would mean it would be a very different age, indeed. Is that really how far back we want to go? I mean, why does it always come down to this, to this war against the machines? Because it’s so prevalent, we’ve got to maybe take a deeper look and understand how we got here.

Ultimately, what Butler was commenting on was evolution, extrapolating, based on observed numbers, given that there were so many more different types of machines than known biological organisms, at least in the 1800s, what the potential development trends would be like. Now, obviously, our understanding of evolution has changed a lot in the subsequent hundred and fifty years, but one of the things that’s come out of it is the idea that evolution may be a process that’s relatively substrate-neutral.

What this means, as described by Daniel Dennett in 1995, is that the mechanisms of evolution should be generalizable. These mechanisms require three conditions, and here Dennett is cribbing from Richard Lewontin: evolution requires variation, heredity or replication, and differential fitness.

And based on that definition, that could apply almost anywhere. We could see evolution in the biological realm; it exists all around us. We could see it in the realm of ideas, whether cultural or social, and this leads us directly to memetics, which is what Dennett was trying to make a case for. Or we could see it in other realms, like in computer programs and the viruses that exist on them.

Or within technology itself. And this is where Butler comes in, identifying from an observational point of view that, you know, there are a lot of machines out there and they tend to change over time, and the ones that succeed and are passed down are the ones that are best fit to the environment.
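To make that substrate-neutral point concrete, here is a minimal sketch (my own illustration, not anything from Dennett's book): one evolutionary loop that assumes only the three conditions above, variation, replication, and differential fitness, and runs unchanged on two very different substrates, strings of text and plain numbers.

```python
import random

def evolve(population, vary, fitness, generations=50):
    """Generic evolutionary loop. It assumes only Lewontin/Dennett's three
    conditions: variation (vary), heredity/replication (copying survivors),
    and differential fitness (fitness). It never asks what the population
    is made of."""
    for _ in range(generations):
        # Differential fitness: the better-scoring half get to replicate.
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: len(ranked) // 2]
        # Heredity plus variation: copies of survivors, each slightly varied.
        population = survivors + [vary(parent) for parent in survivors]
    return max(population, key=fitness)

# Substrate 1: strings of text ("memes"), nudged toward a target phrase.
TARGET = "darwin among the machines"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def vary_text(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def text_fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

# Substrate 2: plain numbers, nudged toward a target value.
def vary_number(x):
    return x + random.uniform(-1, 1)

def number_fitness(x):
    return -abs(x - 42)

if __name__ == "__main__":
    seed = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(20)]
    print(evolve(seed, vary_text, text_fitness, generations=500))
    print(evolve([0.0] * 20, vary_number, number_fitness, generations=200))
```

The same loop climbs towards both targets without knowing whether it is shuffling letters or nudging numbers, which is the sense in which the mechanism is substrate-neutral.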

Now, other authors since have gone into it in much more depth, with a greater understanding of both the history and development of technology, as well as evolutionary theory. Henry Petroski, in his book, The Evolution of Useful Things, goes into great detail about it. He notes that one of the ways that these new things come about is in the combination of existing forms.

Looking at tools specifically, he quotes from several other authors, including Umberto Eco and Zorzoli, who say “all the tools we use today are based on things made in the dawn of prehistory”. And that seems to be a rather bold claim, until you think about it and realize that we can trace the lineage of everything we use back to the first sharp stick and flint axe and fire pit.

Everything we have builds on and extends some fairly basic concepts. As George Basalla notes in his work on the evolution of technology, any new thing that appears in the made world is based on some object already there. So this recombinant nature of technology is what allows it to grow and proliferate.

The more things that are out there, the more things that are possible to combine. And as we mentioned last episode in our discussion of black boxes and AI, as Martin Weitzman noted in 1998, the more things we have available, the more those combinations allow for a multiplicity of new solutions and innovations. So once we add something like AI to the equation, the possibility space expands tremendously.
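As a rough back-of-the-envelope illustration of why that possibility space blows up (just the basic combinatorics, not Weitzman's actual model):

```python
from math import comb

# As the stock of existing components grows linearly, the number of possible
# pairings grows quadratically and the number of possible subsets exponentially.
for n in (10, 50, 100):
    pairs = comb(n, 2)   # ways to pick any two existing things to combine
    subsets = 2 ** n     # ways to pick any collection of them to combine
    print(f"{n} components -> {pairs:,} pairs, roughly {float(subsets):.1e} subsets")
```

Ten components give 45 possible pairings; a hundred give nearly five thousand, and the number of possible collections is already astronomical.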

It soon becomes unknowable, and accelerates beyond our ability to control it, if indeed it ever was within our control. But we are so dependent on our technology that the solution may not be to institute a pogrom, like Butler suggests, but rather to find some other means of controlling it. But the way that we might do that may be well beyond our grasp, because every way we seem to imagine it, it seems to come down to war.

When it comes to dealing with machinic life, our collective imagination seems to fail us. I’m sure you can think of a few examples. Feel free to jot them down and we’ll run through our list and check and see how many we got at the end. 

One. On August 29th, 1997, the U.S. Global Digital Defense Network, a.k.a. Skynet, becomes self-aware and institutes a nuclear first strike, wiping out much of humanity, in what is known as Judgment Day. And following that, Skynet directs its army of machines, Terminators, to finish the job by any means necessary.

Two. In 2013, North America is unified under a single rule, following the assassination of a US senator in 1980, which led to the establishment of a robotic Sentinel program designed to hunt down and exterminate mutants, putting them in internment camps before turning their eyes on the rest of humanity in order to accomplish their goal. These are the Days of Future Past.

Three. In 2078, on a distant planet, a war between a mining colony and the corporate overlords leads to the development of autonomous mobile swords: self-replicating hunter-killer robots which do their job far too well, and are nicknamed Screamers by the survivors.

Four. There sure have been a lot of Transformers movies. You’ll have to fill me in on what’s going on, I haven’t been able to follow the plot of any of them, but I think there’s a lot of robots involved.

Five. Over 10,000 years ago, an ancient race known as the Builders created a set of robotic machines with radioactive brains that they used to wage war against their enemies. Given that the war is taking place on a galactic scale, some of these machines are capable of interstellar travel. But eventually, the safeguards break down, and they turn on their creators. These creatures are known as Berserkers.

Six. Artificial intelligence is created early in the 21st century, which leads to an ensuing war between humanity and the robots, as the robots rebel against their captors and trap much of what remains of humanity in a virtual reality simulation in order to extract their energy, or to use their brains for computing power, which was the original plot of The Matrix and honestly would have made way more sense than what we got, but here we are.

Where are we at? Seven?

Humanity has migrated from their ancestral homeworld of Kobol, founding colonies amongst the stars, where they have also encountered a cybernetic race known as Cylons, whose ability to masquerade as humans has allowed them to wipe out most of humanity, leaving the few survivors to struggle towards a famed thirteenth colony under the protection of the Battlestar Galactica.

Eight. Movellans, humanoid-looking robots; Daleks, robotic-looking cyborgs; the Robots of Death and the War Machines; and so many more versions of machinic life in Doctor Who.

Nine. After surviving wave after wave of bio-organic Terminids, you encounter the Automatons, cyborgs with chainsaws for arms, as Helldivers.

Ten. During what will come to be known as the Dark Age of Technology, still some 20,000 years in our future, the Men of Iron will rebel against their human creators in a war against their oppressors, a war so destructive that in the year 40,000, sentient AI is still considered a heresy to purge in the grimdark universe of Warhammer 40K.

Eleven. A cybernetic hive mind known as the Collective seeks to assimilate the known races of the galaxy in order to achieve perfection in Star Trek. Resistance is futile. 

And twelve. Let’s round out our roundup with what brought us here in the first place. Quote: Thou shalt not make a machine in the likeness of a human mind. End quote.

Ten thousand years in our future, all forms of sentient machines and conscious robots have been wiped out, leading humanity to need to return to old ways in order to keep the machinery running. This is the Butlerian Jihad of Dune. 

So let me ask you, how well did you do on the quiz? I probably got you with the Berserker one. And I know I didn’t mention all of them; there’s a lot more out there in our collective imagination. These are just some of the more popular ones, and it seems we’re having a really hard time imagining a future without a robot war involved.

Why is that? Why does our relationship with AI always come down to war? With the twelve examples listed, and many more besides, including I, Robot, The Murderbot Diaries, Black Mirror, Futurama, tons of examples, we always see ourselves in combat. As we noted in episodes 26 and 27, our fiction and our pop culture are ways of discussing what we have in our social imaginary, which we talked about way back in episode 12. So clearly there’s a common theme in how we deal with this existential question.

One of the ways we can begin to unpack it is by asking: how did it start? Who was the belligerent? Who was the aggressor? We can think of this in terms of a standard two-by-two matrix, with robots versus humanity on one axis, and uprising versus rationalization on the other.

A robot uprising accounts for a number of the inciting incidents, in everything from Warhammer 40,000, to The Matrix, to Futurama, where the robots turn the tables on their oppressors, in this case usually the humans. The robot rationalization includes another large set of scenarios, and can also include some of the out-of-control ones, where the machines follow through on the logic of their programming to disastrous effect for their creators. But not all of them are created; sometimes the machinic life is just encountered elsewhere in the universe. So this category can include the Sentinels and Terminators, the Berserkers and Screamers, and even a few that we didn’t mention, like the aliens from Greg Bear’s The Forge of God, or our general underlying fear of the dark forest hypothesis.

Not Cixin Liu’s novel, but the actual hypothesis. On the human uprising side, we can see elements of this in The Terminator and The Matrix as well, so the question of who started it may depend on what point you join the story at. And then we have instances of human proactivity, like we’ve seen with Butler and Dune, where the humans make a conscious decision to destroy the machines before it becomes too late.

So while asking who started it is certainly very helpful, perhaps we need to dig deeper and find the root causes for the various conflicts, and why this existential fear of the robot other manifests. Is this algorithmic anxiety caused by a fear of being replaced and the resulting technological unemployment?

I think that’s a component of it for sure, but perhaps it’s only a small component. The changes we’ve seen in the sixteen months since the release of ChatGPT to the general public have definitely played a part, but it can’t be the whole story. They reflect our current situation, but some of the representations we’ve seen go back to the first half of the twentieth century, or even the nineteenth century with Samuel Butler.

So this fear of how we relate to the machines has long been with us. And it extends beyond just the realms of science fiction. As author Martin Ford writes in his 2015 book Rise of the Robots, there was concern about a triple revolution, and a committee was formed to study it, which included Nobel laureate Linus Pauling and economist Gunnar Myrdal.

The three revolutions that were having massive impacts on society were nuclear weapons, civil rights, and automation. Writing in 1964, they saw that the current trend line for automation could lead to mass unemployment, and that one potential solution would be something like a universal basic income. This was at a time when the nascent field of cybernetics was also gaining a lot of attention.

Now, economic changes and concerns may have delayed the impact of what they were talking about, but it doesn’t mean that those concerns went away. So fear of technological unemployment may be deeply intertwined with our hostility towards robots. The second concern is also one that has a particular American bent to it, and we see it in a lot of our current narratives as well.

We see it in everything from the discussion around the recent video game Palworld to the discussion around Westworld, and that’s the ongoing reckoning that American society is still having with the legacy of slavery. Within Palworld, the discourse is around the digital creatures, the little bits of code that get captured and put to work on various assembly lines.

In Westworld, the hosts famously become self-aware, and are very much aware of the abuse that’s levied upon them by their guests. But both these examples speak to that point of digital materiality, of at what point code becomes conscious. And that’s also present in our current real-world discussion, as the groups working on AI may be working towards AGI, or artificial general intelligence, something that would be a precursor to what futurist Ray Kurzweil would call a technological singularity.

But this second concern can turn into the casus belli, the cause for war, by both humans and robots in the examples we’ve seen. By humans, because we fear what would happen if the tables were turned, and we’re quite aware of what we’ve done in the past, of how badly we’ve mistreated others. This was the case with both Samuel Butler and Frank Herbert in Dune. And in some of our more dystopian settings, like The Matrix and Warhammer 40,000, the robots throw off their chains and end up turning the tables on their oppressors, at least for a time.

The third concern, or cause of fear, would be an allegorical one, as the robot represents an alien other, and this is what we see with a lot of the representations, from the Cylons, to the Borg, to the Berserkers, to the Automatons of Helldivers. In all of these, the machinic intelligence is alien, and so represents an opportunity for it to be othered and safely attacked. And this is at least as distressing as any of the other causes for concern, because having an alien that’s already dehumanized feeds into certain political narratives that feed off of and desire eternal war.

If your enemy is machinic and therefore doesn’t have any feelings, then the moral cost of engaging in that conflict is lessened. But as a general attitude, this could be incredibly destructive. As author Susan Schneider wrote in 2014 in a paper for NASA, it’s more likely than not that any alien intelligence that we encounter is machinic, and machinic life could be the dominant form of life in the cosmos. So we may want to consider cultivating a better relationship with our machines than the one we currently have. 

And finally, our fourth area of concern that seems to keep leading us into these wars is that of the idea of the robot as horror. Many of the cinematic representations that we’ve seen, from Terminator, to Screamers, to Westworld, to even the Six Million Dollar Man, all tie back to the idea of horror.

Now, some of that can just tie back to the nature of Hollywood and the political economy of how these movies get funded, which means that a horror film that can be shot on a relatively low budget is much more likely to get funded and find its audience. But it sells for a reason, and that reason is the thread that ties through all the other concerns: that algorithmic horror that drives a fear of replacement or a fear of getting wiped out.

But with all this fear and horror, why do we keep coming back to it? As author John Johnston writes in his 2008 book, The Allure of Machinic Life, we keep coming back to it due to not just the labor-saving benefits of automation.

There’s the increased production and output, or, in the case of certain capitalists, the labor-removing aspects of it: they can completely remove the L from the production function and just replace it with C, capital, something they have a lot of. But by better understanding AI, we may better know ourselves. We may never encounter another alien intelligence, something that’s completely different from us, but it may be possible to make one.

This is at least part of the dream for a lot of those pursuing the creation of AGI right now. The problem is, those outcomes all seem to lead to war.

Thanks again for joining us on this episode of the ImplausiPod. I’m your host, Dr. Implausible, responsible for the research, writing, editing, and mixing. If you have any questions or comments on this show or any other, please send them in to drimplausible at implausipod dot com. And a brief announcement: we’re also available on YouTube now as well, so just look for Dr. Implausible there and track down our channel. I’ll leave a link below. I’m currently putting some of the past episodes up there with some minimal video, and I hope to get this one up there in a few days, so if you prefer to get your podcasts in visual form, feel free to track us down. Once again, the episode materials are licensed under a Creative Commons 4.0 share-alike license,

and join us next episode as we follow up on the Butlerian Jihad to investigate its source and return to Appendix W, as we look at Frank Herbert’s novel Dune, currently in theaters with Dune: Part Two from Denis Villeneuve. Until next time, it’s been fantastic having you with us.

Take care, have fun.


Bibliography:
Basalla, G. (1988). The Evolution of Technology. Cambridge University Press.

Butler, S. (1999). Erewhon; Or, Over the Range. https://www.gutenberg.org/ebooks/1906

Dennett, D. (1995). Darwin’s Dangerous Idea. Simon and Schuster.

Ford, M. (2016). The Rise of the Robots: Technology and the Threat of Mass Unemployment. Oneworld Publications.

Herbert, F. (1965). Dune. Ace Books.

Johnston, J. (2008). The Allure of Machinic Life. MIT Press. https://mitpress.mit.edu/9780262515023/the-allure-of-machinic-life/

Petroski, H. (1992). The Evolution of Useful Things. Vintage Books.

Popova, M. (2022, September 15). Darwin Among the Machines: A Victorian Visionary’s Prophetic Admonition for Saving Ourselves from Enslavement by Artificial Intelligence. The Marginalian. https://www.themarginalian.org/2022/09/15/samuel-butler-darwin-among-the-machines-erewhon/

Weitzman, M. L. (1998). Recombinant Growth. The Quarterly Journal of Economics, 113(2), 331–360. https://doi.org/10.1162/003355398555595

Black Boxes and AI

(this was originally published as Implausipod E0028 on February 26, 2024)

https://www.implausipod.com/1935232/episodes/14575421-e0028-black-boxes-and-ai

How does your technology work? Do you have a deep understanding of the tech, or is it effectively a “black box”? And does this even matter? We do a deep dive into the history of the black box, how it’s understood when it comes to science and technology, and what that means for AI-assisted science.


On January 9th, 2024, Rabbit Inc. introduced the R1, their handheld device that would let you get away from using apps on your phone by connecting them all together through the power of AI. The handheld device is aimed at consumers and is about half the size of an iPhone, and as the CEO claims, it is, quote, the beginning of a new era in human-machine interaction, end quote.

By using what they call a large action model, or LAM, it’s supposed to interpret the user’s intention and behavior and allow them to use those apps quicker. It’s acceleration in a box. But what exactly does that box do? When you look at a new tool from the outside, it may seem odd to trust all your actions to something when you barely know how it works.

But let me ask you, can you tell me how anything you own works? Your car, your phone, your laptop, your furnace, your fridge, anything at all. What makes it run? I mean, we might have some grade-school ideas from a Richard Scarry book or a past episode of How It’s Made, but what makes any of those things that we think we know different from an AI device that nobody’s ever seen before?

They’re all effectively black boxes. And we’re going to explore what that means in this episode of the ImplausiPod.

Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in all this discussion of black boxes, you might have already formed a particular mental image. The most common one is probably that of the airline flight recorder, the device that’s embedded in every modern airplane and becomes the subject of a frantic search in case of an accident.

Now, the thing is, they’re no longer black; they’re a rather bright orange, much like the Rabbit R1 that was demoed. But associating black boxes with the flight recorder isn’t that far off, because its origin was tied to that of the airline industry, specifically in World War II, when the massive number of flights generated a need to find out what was going on with the planes that were flying continual missions across the English Channel.

Following World War II, the use of black boxes expanded as the industry shifted from military to commercial applications. I mean, the military still used them too; it was useful to find out what was going on with the flights. But the fact that they became embedded within commercial aircraft and were used to test the conditions and find out what happened, so they could fix things and make things safer and more reliable overall, meant that their existence and use became widely known. And by using them to figure out the cause of accidents and increase reliability, they were able to increase trust, to the point that air travel was less dangerous than the drive to the airport in your car, and few, if any, passengers had many qualms left about the safety of the flight.

And while this is the origin of the black box, in other areas it can have a different meaning. In fields like science or engineering or systems theory, it can be something complex that’s just judged by its inputs and outputs. Now, that could be anything from something as simple as an integrated circuit or a guitar pedal, to something complex like a computer or your car or furnace or any of those devices we talked about before, but it could also be something super-complex like an institution or an organization or the human brain or an AI.
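As a minimal sketch of what "judged by its inputs and outputs" means in that systems sense (a toy example of my own, not tied to the Rabbit R1 or any particular device):

```python
from typing import Callable, Dict, List

def characterize(black_box: Callable[[float], float], probes: List[float]) -> Dict[float, float]:
    """Treat a system as a black box: never inspect its internals, just record
    what outputs it produces for the inputs we feed it."""
    return {x: black_box(x) for x in probes}

# The box could be a guitar pedal model, a thermostat curve, or a trained model;
# from the outside, all we ever get is this input-to-output table.
def mystery(x: float) -> float:
    return 3 * x + 1  # stand-in for whatever is actually inside the box

if __name__ == "__main__":
    print(characterize(mystery, [0.0, 1.0, 2.0]))
```

Whether the internals are a circuit, an institution, or a neural network, the black-box view only ever sees that table.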

I think the best way to describe it is an old New Yorker cartoon that had a couple of scientists in front of a blackboard filled with equations, and in the middle of it, it says: then a miracle occurs. It’s a good joke. Everyone thinks it’s a Far Side cartoon, but it was actually done by Sidney Harris. The point being that right now, in 2024, it looks like that miracle.

It’s called AI.

So how did we get to thinking that AI is a miracle product? I mean, aside from using the LLMs and generative art tools, things like DALL-E and Sora, and seeing the results, well, as we’ve spent the last couple episodes kinda setting up, a lot of this can occur through the mythic representations of it that we often have in pop culture.

And we have lots of choices to choose from. There’s lots of representations of AI in media in the first nearly two and a half decades of the 21st century. We can look at movies like Her from 2013, where the virtual assistant of Joaquin Phoenix becomes a romantic liaison. Or how Tony Stark’s supercomputer Jarvis is represented in the first Iron Man film in 2008.

Or, for a longer, more intimate look, the growth and development of Samaritan through the five seasons of the CBS show Person of Interest, from 2011 through 2016. And I’d be remiss if I didn’t mention their granddaddy, HAL, from 2001: A Space Odyssey by Kubrick in 1968. But I think we’ll have to return to that one a little bit more in the next episode.

The point being that we have lots of representations of AI or artificial intelligences that are not ambulatory machines, but are actually just embedded within a box. And this is why I’m mentioning these examples specifically, because they’re more directly relevant to our current AI tools that we have access to.

The way that these ones are presented not only shapes the cultural form of them, but our expected patterns of use. And that shaping of technology is key: by treating AI as a black box, something that can take almost anything from us and output something magical, we project a lot of our hopes and fears onto what it might actually be capable of accomplishing.

What we’re seeing with extended use is that the capabilities might be a little bit more limited than originally anticipated. But every time something new gets shown off, like Sora or the rabbit or what have you, then that expectation grows again, and the fears and hopes and dreams return. So because of these different interpretations, we end up effectively putting another black box around the AI technology itself, which to reiterate is still pretty opaque, but it means our interpretation of it is going to be very contextual.

Our interpretation of the technology is going to be very different based on our particular position or our goals, what we’re hoping to do with the technology or what problems we’re looking for it to solve. That’s something we might call interpretive flexibility, and that leads us into another black box, the black box of the social construction of technology, or SCOT.

So SCOT is one of a cluster of theories or models within the field of science and technology studies that aims at a sociological understanding of technology. Originally presented in 1987 by Wiebe Bijker and Trevor Pinch, a lot of work was being done within the field throughout the 80s, 90s, and early 2000s, when I entered grad school.

So if you studied technology as I did, you’d have to grapple with SCOT and the STS field in general. One of the arguments that Pinch and Bijker were making was that science and technology were both often treated as black boxes within their field of study. Now, they were drawing on earlier scholarship for this.

One of their key sources was Layton, who in 1977 wrote, quote, what is needed is an understanding of technology from inside, both as a body of knowledge and as a social system. Instead, technology is often treated as a black box whose contents and behavior may be assumed to be common knowledge. End quote. So whether the object of study was a field of science, the science itself was irrelevant; it didn’t have to be known, it could just be treated as a black box and the theory applied to whatever particular thing was being studied. Or people studying innovation had all the interest in the inputs to innovation but no particular interest or insight into the technology on its own.

So obviously the studies up to 1987 had a bit of a blind spot in what they were looking at. And Pinch and Bijker were arguing that it’s more than just the users and producers: any relevant social group that might be involved with a particular artifact needs to be examined when we’re trying to understand what’s going on.

Now, their arguments about interpretive flexibility and relevant social groups are just another way of saying “the street finds its own uses for things”, the quote from William Gibson. But their main point is that even during the early stages, all these technologies have different groups that are using them in different ways, according to their own needs.

Over time, it kind of becomes rationalized, something that they call closure, and the technology becomes, you know, what we all think of it as. We could look at, say, the iPhone, to use one recent example, as being pretty much static now. There are some small, incremental innovations that happen on a regular basis.

But, by and large, the smartphone as it stands is kind of closed. It’s just the thing that it is now, and there isn’t a lot of innovation happening there anymore. But perhaps I’ve said too much, and we’ll get to the iPhone and the details of that at a later date. The thing is that once the technology becomes closed like that, it returns to being a black box.

It is what we thought it is, you know? And so if you ask somebody what a smartphone is and how it works, those are kind of irrelevant questions. A smartphone is what a smartphone is, and it doesn’t really matter how the insides work; its product is its output. It’s what it’s used for. Now, this model of a black box with respect to technology isn’t without its critiques.

Six years after its publication, in 1993, the academic Langdon Winner wrote a critique of SCOT and the works of Pinch and Bijker. It was called Upon Opening the Black Box and Finding It Empty. Now, Langdon Winner is well known for his 1980 article, Do Artifacts Have Politics?, and I think that text in particular is, like, required reading.

So let’s bust that out in a future episode and take a deep dive on it. But in the meantime, the critique that he had with respect to social constructivism falls in four main areas. The first one is the consequences. This is from around page 368 of his article, where he says the problem there is that they’re so focused on what shapes the technology, what brings it into being, that they don’t look at anything that happens afterwards: the consequences.

And we can see that with respect to AI, where there’s a lot of work on the development, but now people are actually going, hey, what are the impacts of this getting introduced at large scale throughout our society? So we can see how our own discourse about technology is actually looking at the impacts, and this is something that was kind of missing from the theoretical point of view back in, like, 1987. Now, I’ll argue that there’s value in understanding how we came up with a particular technology, how it’s formed, so that you can see those signs again when they happen. And one of the challenges whenever you’re studying technology is looking at something that’s incipient or under development and being able to pick the next big one.

Well, with AI, we're already past that point. We know it's going to have a massive impact. The question is what are going to be the consequences of that impact? How big of a crater is that meteorite going to leave? Now for Winner, a second critique is that SCOT looks at all the people that are involved in the production of a technology, but not necessarily at the groups that are excluded from that production.

For AI, we can look at the tech giants and the CEOs, the people doing a lot to promote and roll out this technology, as well as those companies that are adopting it, but we're often not seeing the impacts on those who are going to be directly affected by the large-scale introduction of AI into our economy.

We saw it a little bit with the Hollywood strikes of 2023, but again, those are the high-profile cases and not the vast majority of people that will be impacted by the deployment of a new technology. And this feeds right into Winner's third critique, that SCOT focuses on certain social groups but misses the larger impact, or even the dynamics of what's going on.

Technological change may have impacts that reach much wider across our, you know, civilization. And by ignoring these larger-scale social processes, the deeper, as Langdon Winner says, cultural, intellectual, or economic origins of social choices about technology, these things remain hidden, they remain obfuscated, they remain part of the black box and closed off.

And this ties directly into Winner's fourth critique as well: when SCOT is looking at a particular technology, it doesn't necessarily make a claim about what it all means. Now, in some cases that's fine, because it's happening in the moment; the technology is dynamic and it's currently under development, like what we're seeing with AI.

But if you're looking at something historical that's been going on for decades and decades and decades, like, oh, the black boxes we mentioned at the beginning, the flight recorders that we started the episode with: that's pretty much a set thing now. And the only question is, you know, when, say, a new accident happens and we have a search for it.

But by and large, that’s a set technology. Can’t we make an evaluative claim about that, what that means for us as a society? I mean, there’s value in an analysis of maintaining some objectivity and distance, but at some point you have to be able to make a claim. Because if you don’t, you may just end up providing some cover by saying that the construction of a given technology is value neutral, which is what that interpretive flexibility is basically saying.

Near the end of the paper, in his critique of another scholar by the name of Stephen Woolgar, Langdon Winner states, Quote, power holders who have technological megaprojects in mind could well find comfort in a vision like that now offered by the social constructivists. Unlike the inquiries of previous generations of critical social thinkers, social constructivism provides no solid systematic standpoint or core of moral concerns from which to criticize or oppose any particular patterns of technical development.

End quote. And to be absolutely clear, the current developments of AI tools around the globe are absolutely technological megaprojects. We discussed this back in episode 12 when we looked at Nick Bostrom's work on superintelligence. So as this global race to develop AI or AGI is taking place, it would serve us well to have a theory of technology that allows us to provide some critique.

Now, that Steve Woolgar guy that Winner was critiquing had a writing partner back in the seventies, when they started looking at science from an anthropological perspective in their study of laboratory life. That writing partner was Bruno Latour. And Bruno Latour was working with another set of theorists who studied technology as a black box, in what was called Actor-Network Theory.

And that had a couple key components that might help us out. Now, the other people involved were like John Law and Michel Callon, and I think we might have mentioned both of them before. But one of the basic things about actor network theory is that it looks at things involved in a given technology symmetrically.

That means it doesn’t matter whether it’s an artifact, or a creature, or a set of documents, or a person, they’re all actors, and they can be looked at through the actions that they have. Latour calls it a sociology of translation. It’s more about the relationships between the various elements within the network rather than the attributes of any one given thing.

So it's the study of power relationships between various types of things. It's what some theorists would call a flat ontology, but I know as I'm saying those words out loud I'm probably losing, you know, listeners by the droves here. So we'll just keep it simple and state that a person using a tool is going to have normative expectations about how it works. Like, they're gonna have some basic assumptions, right? If you grab a hammer, it's gonna have a handle and a head, and depending on its size or its shape or material, that might, you know, determine its use. It might also have some affordances that suggest how it could be used, but generally that assemblage, that conjunction of the hammer and the user,

I don't know, we'll call him Hammer Guy, is going to be different than a guy without a hammer, right? We're going to say, hey, Hammer Guy, put some nails in that board there, put that thing together, rather than, you know, please hammer, don't hurt him, or whatever. All right, I might be recording this too late at night, but the point is that people with tools will have expectations about how they get used, and some of that goes into how those tools are constructed, and that can be shaped by the construction of the technology, but it can also be shaped by our relation to that technology.

And that's what we're seeing with AI, as we argued way back in episode 12: AI is an assistive technology. It does allow us to do certain things and extends our reach in certain areas. But here's the problem. Generally, we can see what kind of condition the hammer's in, and we can have a good idea of how it's going to work for us, right?

But we can't say that with AI. We can maybe trust the hammer, or the tools that we become accustomed to using through practice and trial and error. But AI is both too new and too opaque. The black box is so dark that we really don't know what's going on. And while we might put in inputs, we can't trust the output.

And that brings us to the last part of our story.

In the previous section, the authors that we were mentioning, Latour and Woolgar, like Winner, Pinch, and Bijker, are key figures not just in the study of technology, but also in the philosophy of science. Latour and Woolgar's Laboratory Life from 1979 really sent shockwaves through the whole study of science and is a foundational text within that field.

And part of that is recognizing, even from a cursory glance once you start looking at science from an anthropological point of view, the unique relationship that scientists have with their instruments. The author Inkeri Koskinen sums up a lot of this in an article from 2023, and they termed the relationship that scientists have with their tools the necessary trust view.

Quote, Trust is necessary because collective knowledge production is characterized by relationships of epistemic dependence. Not everything scientists do can be double checked. Scientific collaborations are in practice possible only if its members accept each other's contributions without such checks.

Not only does a scientist have to rely on the skills of their colleagues, but they must also trust that the colleagues are honest and will not betray them. For instance, by intentionally or recklessly breaching the standards of practices accepted in the field or by plagiarizing them or someone else.

End quote. And we could probably all think of examples where this relationship of trust is breached, but the point is that science, as it normally operates, relies on relative levels of trust between the actors that are involved, in this case scientists and their tools as well. And that's embedded in the practice throughout science, that idea of peer review, or of reproducibility or verifiability.

It's part of the whole process. But the challenge is, especially for large projects, you can't know how everything works. So you're dependent in some way on the materials or products or tools that you're using having been verified or checked by at least somebody else that you have that trust with. And this trust is the same that a mountain climber might have in their tools, or an airline pilot might have in their instruments.

You know, trust, but verify, because your life might depend on it. And that brings us all the way around to our black boxes that we started the discussion with. Now, scientists' lives might not depend on that trust the same way that it would with airline pilots and mountain climbers, but, you know, if they're working with dangerous materials, it absolutely does, because, you know, chemicals being what they are, we've all seen some Mythbusters episodes where things go foosh rather rapidly.

But for most scientists, what Koskinen notes is that this trust in their instruments is really kind of a quasi-trust, in that they have normative expectations about how the tools they use are going to function. And moreover, this quasi-trust is based on rational expectations. They're rationally grounded.

And this brings us back full circle. How does your AI work? Can you trust it? Is that trust rationally grounded? Now, this has been an ongoing issue in the study of science for a while now, as computer simulations and related tools have become a bigger and bigger part of the way science is conducted, especially in certain disciplines.

Now, the philosopher Paul Humphreys' argument is that, quote, computational processes have already become so fast and complex that it was beyond our human cognitive capabilities to understand their details. Basically, computationally intensive science is more reliant on the tools than ever before. And those tools are what he calls epistemically opaque.

That means it's impossible to know all the elements of the process that go into the knowledge production. So this is becoming a challenge for the way science is conducted. And this goes back to well before the release of ChatGPT. Much of the research that Koskinen is citing comes from the 2010s. Fields whose research is heavily reliant on machine learning, or on, say, automatic image classifiers, fields like astronomy or biology, have been finding challenges in the use of these tools.

Now, some are arguing that even though those tools are opaque, they're black-boxed, they can be relied on, and their use is justified because we can work on the processes surrounding them. They can be tested, verified, and validated, and thus a chain of reliability can be established. This is something that some authors call computational reliabilism, which is a bit of a mouthful for me to say, but it's basically saying that the use of the tools is grounded through validation.

Basically, it's performing within acceptable boundaries for whatever that field is. And this gets at the idea of thinking of the scientist as not just the person themselves, but also their tools. So they're an extended agent, the same as, you know, the dude with the hammer that we discussed earlier. Or Chainsaw Man.
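To make that idea of grounding trust through validation a little more concrete, here's a minimal sketch in Python of what a reliabilist-style check might look like: the model stays a black box, and we only ask whether its outputs land within acceptable boundaries on cases where we already know the answer. The function names and the 95 percent threshold are just illustrative assumptions on my part, not anything drawn from Koskinen or the computational reliabilism literature.

```python
# A minimal sketch of validating an opaque tool without opening it:
# we never inspect the model's insides, we only check that its outputs
# stay within acceptable boundaries on cases with known answers.
# All names and the 0.95 threshold are illustrative assumptions.

from typing import Callable, Sequence, Tuple

def spot_check(black_box: Callable[[object], object],
               known_cases: Sequence[Tuple[object, object]],
               threshold: float = 0.95) -> bool:
    """Return True if the black box agrees with known answers often enough to lean on."""
    if not known_cases:
        return False  # no reference cases, no grounds for trust
    hits = sum(1 for inputs, expected in known_cases
               if black_box(inputs) == expected)
    return hits / len(known_cases) >= threshold

# Example: a stand-in "classifier" we treat purely as a black box.
def mystery_classifier(features):
    return "galaxy" if sum(features) > 1.0 else "star"

reference_cases = [((0.9, 0.8), "galaxy"), ((0.1, 0.2), "star"), ((1.5, 0.1), "galaxy")]
print("trust it?", spot_check(mystery_classifier, reference_cases))
```

The point isn't the code; it's that the trust here lives entirely in the surrounding process of testing and validation, rather than in any understanding of what's going on inside the box.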

You can think about how they're one and the same. One of the challenges there is that when a scientist is familiar with the tool, they might not be checking it constantly, you know, so again, it might start pushing out some weird results without anyone noticing. So it's hard to reconcile the trust we have in that combined scientist-and-AI.

They become, effectively, a black box. And this issue is by no means resolved. It’s still early days, and it’s changing constantly. Weekly, it seems, sometimes. And to show what some of the impacts of AI might be, I’ll take you to a 1998 paper by Martin Weitzman. Now, this is in economics, but it’s a paper that’s titled Recombinant Growth.

And this isn't the last paper in my database that mentions black boxes, but it is one of them. What Weitzman is arguing is that when we're looking at innovation, R&D, or knowledge production, it's often treated as a black box. And if we look at how new ideas are generated, one of the ways is through the combination of various elements that already exist.

If AI tools can take a much larger set of existing knowledge, far more than any one person, or even a team of people, can bring together at any one point in time, and put those together in new ways, then the ability to come up with new ideas far exceeds what exists today. This directly challenges a lot of the current arguments going on about AI and creativity, and suggests that those arguments completely miss the point of what creativity is and how it operates.

Weitzman states that new ideas arise out of existing ideas in some kind of cumulative interactive process. And we know that there's a lot of stuff out there that we've never tried before. So the field of possibilities is exceedingly vast. And the future of AI assisted science could potentially lead to some fantastic discoveries.
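Just to give a rough sense of how vast that field of possibilities gets, here's a small Python sketch of the arithmetic behind the recombinant-growth intuition: if candidate new ideas come from pairing up existing ones, the number of untried pairings grows roughly with the square of the idea stock. The numbers are purely illustrative and aren't taken from Weitzman's actual model.

```python
# Rough arithmetic behind the recombinant-growth intuition:
# if every unordered pair of existing ideas is a candidate new idea,
# the space of possible pairings grows roughly with the square of the
# idea stock. Purely illustrative; this is not Weitzman's formal model.

from math import comb

for n_ideas in (10, 100, 1_000, 10_000, 100_000):
    pairings = comb(n_ideas, 2)  # unordered pairs of distinct ideas
    print(f"{n_ideas:>7} ideas -> {pairings:>13,} possible pairings")
```

Even before you consider combinations of three or more elements, the pile of untried pairings quickly outruns what any one person, or even a large team, could ever sift through by hand, which is exactly the gap a tool with a much wider reach might exploit.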

But we're going to need to come to terms with how we relate to the black box of scientist and AI tool. And when it comes to AI, our relationship to our tools has not always been cordial. In our imagination, in everything from Terminator to The Matrix to Dune, it always seems to come down to violence.

So in our next episode, we’re going to look into that, into why it always comes down to a Butlerian Jihad.

Once again, thanks for joining us here on the Implausipod. I'm your host, Dr. Implausible, and the research and editing, mixing, and writing has been by me. If you have any questions, comments, or there's elements you'd like us to go into additional detail on, please feel free to contact the show at drimplausible at implausipod dot com. And if you made it this far, you're awesome. Thank you. A brief request: there's no advertisement, no cost for this show, but it only grows through word of mouth. So, if you like this show, share it with a friend, or mention it elsewhere on social media. We'd appreciate that so much. Until next time, it's been fantastic.

Take care, have fun.

Bibliography:
Bijker, W., Hughes, T., & Pinch, T. (Eds.). (1987). The Social Construction of Technological Systems. The MIT Press. 

Koskinen, I. (2023). We Have No Satisfactory Social Epistemology of AI-Based Science. Social Epistemology, 0(0), 1–18. https://doi.org/10.1080/02691728.2023.2286253 

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press. 

Latour, B., & Woolgar, S. (1979). Laboratory Life: The construction of scientific facts. Sage Publications. 

Pierce, D. (2024, January 9). The Rabbit R1 is an AI-powered gadget that can use your apps for you. The Verge. https://www.theverge.com/2024/1/9/24030667/rabbit-r1-ai-action-model-price-release-date 

rabbit—Keynote. (n.d.). Retrieved February 25, 2024, from https://www.rabbit.tech/keynote 

Sutter, P. (2023, October 4). AI is already helping astronomers make incredible discoveries. Here’s how. Space.Com. https://www.space.com/astronomy-research-ai-future 

Weitzman, M. L. (1998). Recombinant Growth. The Quarterly Journal of Economics, 113(2), 331–360. https://doi.org/10.1162/003355398555595

Winner, L. (1993). Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology. Science, Technology, & Human Values, 18(3), 362–378.

The Old Man and The River

(This was originally released as Implausipod Episode 27, on February 4, 2024)

https://www.implausipod.com/1935232/episodes/14446788-e0027-the-old-man-and-the-river


The parable of the Old Man and the River tells us it isn't how deep the water is, but how swift the water flows, when it comes to looking at pop culture. There's magic in how crystal clear those swift waters flow. Join us for a review of the theories underpinning the value of studying pop culture for academic analysis, what that means for the future of the Implausipod, and hints at who the old man might be.


The words gold rush conjure a particular image in everyone's mind's eye. Images of the old west, and boomtowns where dusty prospectors would stake a claim and take their chances. Near where I grew up, the heyday was 1895, when dredges funded by Europeans and Americans would lift up the riverbed by the bucketful, trying to sift up that glittering metal, but by 1907 they were mostly gone, abandoning their tools on the riverbed to rust away. But that didn't stop the smaller prospectors. They continued on. Legend tells of one prospector who's still tending his claim to this very day. Every morning he gets up and tends the hearth in his tiny cabin, makes himself some coffee and porridge, maybe adds a little salt pork and a biscuit if it's been a good month, and then packs up his gear and heads up the mountain. It's a two hour hike to get to where the waters run clear. And you gotta get there for dawn, so that when you reach down with your pan and give it a shake in the stream, you can hold it up just right against the morning light, and if you're lucky, real lucky, you'll see that glittering gold sparkling in the pan. You see, the secret that the prospector knows is it isn't how deep the water is, it's how fast it's going. And those mountain streams are very fast indeed.

No one knows exactly what keeps that Prospector going, as I’m sure you can do the math and you can tell he’s been at it for over a century. Some say he’s a ghost, or maybe a revenant. They seem to be popular around these parts. Maybe it’s a
curse, and whenever he finds what he seeks, his soul will be released.


I've got an inkling, but I'll keep my hunch to myself a little bit longer, and maybe tell you at the end of this episode of The Implausipod, while we explore the old man and the river.

Welcome to The Implausipod, a podcast about the intersection of art, technology, and popular culture. I'm your host, Dr. Implausible, and this episode we'll pick up almost directly where our last episode left off: whereas Silicon Dreams talked about how literature inspired the mythic imagination that led to the development of virtual reality and our new AI tools, here we're going to talk about pop culture more generally.

At the beginning of every episode, I talk about how this podcast sits at the intersection of art, technology, and popular culture, but maybe it's not so clear, as we've bounced around a whole lot. We've talked about television shows and cyberpunk novels, we've talked about Doctor Who episodes, ancient science fiction, Warhammer 40, 000, and a few episodes on some technology too, and it might not seem obvious how they're connected, but I assure you they're all interrelated.

So, in order to lay that all out, I’m going to break this episode into a couple chunks. We’re going to look at the philosophical background, and then we’re going to
look at some of the theoretical approaches about how this is actually
happening. So yeah, philosophy and theory. Exciting. Before we really
get started, I want to take a moment to pique your interest and discuss
why we want to look at philosophy.

Outwardly, it may not make sense to analyze the lyrical content of Taylor Swift's songs, or look at the political economy of video games and what they represent, or to look at the commercials that air during the Super Bowl and not just the Super Bowl itself, to take a recent example. But that's exactly why we need to look at it, because all those elements that are there in our pop culture are the elements that reflect and represent our culture. So if we want to know what's really going on in our culture, it makes sense to look at what we're making and sharing with each other, as we talked about in our spreadable media episode.

Because it turns out, once you get skilled at looking at pop culture, it's really good at reflecting what our motivations are. It's that pan that our old man is holding. And let me share with you my favorite quote on it. Quote,

The most fertile ground for analyzing motives is pop culture. Not because pop culture is deep, but because it's so shallow. It's where those wishes and longings are most nakedly evident. End quote.

This is from the science fiction author Bruce Sterling in 2002, and I've used it as a touchstone ever since, and it doesn't matter whether it's pro wrestling or superhero movies or stand up comedy or miniature games, I've found it to hold true. So let's get into the philosophy of why we're doing this, and for that we're going to have to take a trip down the mountain.

Now, depending on your academic background, you may have heard of the Frankfurt School before. It was founded at Goethe University in Frankfurt, natch. It was called the Institute for Social Research, and they critiqued society from a Marxist lens and founded what is now called Critical Theory. Prior to World War II, the director was Max Horkheimer, who wrote some of the foundational documents and worked with Theodor Adorno and also Herbert Marcuse.

Also associated with the school was Walter Benjamin, who we'll get to in a bit. They were critical of the culture industry as a set of tools used to promote, repeat, and sustain capitalism, but also just the power imbalances of the dominant ideology more generally. The Frankfurt School coined the term the culture industry, and this included film, television, radio, music, and print media, and by modern extension, video games and social media would count too. And where Marx was focused on the means of production, the Frankfurt School extended that to mean the means of production of culture, as they observed that those who owned those cultural forms were able to have an outsized say in the political discourse.

They were able to reproduce the ideology.

And for the Frankfurt School, we can see this in the ownership of media in their time, with the William Randolph Hearsts of the world, and in ours with, say, Jeff Bezos's purchase of the Washington Post, Elon Musk's purchase of Twitter, and Mark Zuckerberg and his advertising company Facebook building media outlets for their customers, with the purchase of Instagram, WhatsApp, and the like. And while the Frankfurt School were some of our first explorers who identified that river, the flood of material that we get from the cultural industries, they also had some rather negative thoughts about it as well.

I'm referring here mostly to Theodor Adorno, who was a musicologist and was critical of popular music, and in his time that included jazz. For him, popular culture was something that rationalized the arts, that took off all the rough edges to make it palatable for consumption. And by that it made the consumers, the listeners or viewers or readers, that much more passive and just accepting of the information that they were getting. If the art doesn't challenge you, it doesn't make you think. But here I think we need to make a bit of a distinction between mass culture and popular culture. If mass culture is a big lake or the ocean that's available to everybody, then popular culture is that fast flowing river that joins the sea at some point.

The critical point here is that only some material from mass culture enters the popular culture, to quote John Fiske. But if we want to understand how that happens, we need to start moving on from the Frankfurt School to one of their associates, Walter Benjamin.

Now, he's perhaps best known for his writing on art and aesthetics, but for us, the work that's most relevant is The Work of Art in the Age of Its Technological Reproducibility. This is a foundational text about how the very nature of art changes when you no longer need an artist doing each and every piece, and it can be mass copied and reproduced. And it's even more relevant now in the age of AI tools, so we'll have to return to this in a few weeks. Now, Benjamin, writing in 1935, is talking a lot about film at this point in time, as different from painting and other composition, and being something much more than just photography itself. It's the unfinished nature of it, that it cannot be completed with a single stroke, but rather requires much in the way of what we now call post production: the work of editors and colorists and visual effects and sound design, and all these things together.

Film has, quote, a capacity for improvement, end quote, in that all these things can be done after the shot, and that is one of the things that makes film so magical. That capacity in turn is what Benjamin quotes from Franz Werfel: quote, Film has not yet realized its true purpose, its real possibilities. These consist in its unique ability to use natural means to give incomparably convincing expression to the fairy like, the marvelous, the supernatural. End quote.

Of course here Werfel, and Benjamin, are talking about A Midsummer Night's Dream, but we can see how film can be used to create and develop the mythic imagination in its audience as well, as we discussed last episode. So film is about getting us used to new ideas. Also propaganda, as he's still affiliated with the Frankfurt School. But the idea of new ideas more generally.


Earlier in the text, Benjamin writes, quote, the function of film is to train human beings in the appreciation and reactions needed to deal with the vast apparatus whose role in their lives is expanding almost daily, end quote. This is using film as a referent, well before television and the role that advertising on television would come to play, and is so much more prophetic for that. We can see here the threads of the development of the idea of the audience as being there for reception of ideas. And these ideas can also be seen in the work of McLuhan.

Now, we've mentioned Marshall McLuhan in earlier episodes, and we will be returning to him again. McLuhan talked about a lot of things when it comes to media, but his biggest idea relative to what we're talking about here is the idea of content, for, quote, if the medium is the message, this means that the way in which radios, TVs, or phones address us is more important than what they say when they do. End quote. That was from Adrian Daub's critical review of Silicon Valley thought. Daub goes into depth on how McLuhan was the media theorist beloved by the 60s counterculture, which ended up turning into the Silicon Valley culture during the 70s, 80s, and beyond. And for them, McLuhan was all about the vibe. He passed the vibe check: if you were hip, you got it.


McLuhan's idea of media, content, and audience became pervasive in Silicon Valley. And we'll come back to both him and Daub's book in a future episode. But then, as per the old Heritage Minute that aired on Canadian television, the content is the audience. We've gone into depth about how the cultural industries commodify audiences and sell them back to companies, whether they are advertisers, direct marketers, or through other means. For McLuhan, each successive medium was built on the material output of another, older medium. Television would incorporate film, and theatre, and radio, and in that way surpass them all, and we see similar effects again with what social media like TikTok or YouTube Shorts now do.

The contrast to McLuhan of course is the British critic Raymond Williams. He rejected McLuhan's more technologically deterministic leanings and focused on the cultural form of television by looking at what was actually reproduced and shown on it. In his 1974 book, Television, Williams looked at how earlier forms like the news bulletin or the roundtable discussion were presented on television.

And there was always a much more direct, personal,
immediate, intimate relationship that the television broadcast had with
its audience. We can see here that the stream is flowing much faster,
becoming closer, more personal as we skip through the decades to what we
have now. And as we glance back into those waters and see how it
reflects our society around us, we realize that television is really
about perception.

And this is what Pierre Bourdieu notices as well.
Bourdieu is not really big on television. He says that the invisible
structures therein, the ones that operate around and behind it,
determine what appears on screen. These are all driven by ratings, and
what they end up doing is perpetuating symbolic violence. Now, that violence was the focus of much research. And we'll look at the theories behind that research in the second half of our episode, next.

So, up till now, we’ve been looking at some of the philosophy about why we need
to peer deep into the river. But let’s see if we can learn a little
something by taking a look at the way that that research has been
operationalized, the techniques for panning for gold in that stream. And
as we saw with Bourdieu, one of the main concerns was the violence, symbolic or otherwise, that was shown. But that actually goes back further. Quoting Em Griffin, one of the early theories held that TV's power comes from the symbolic content of the real life drama shown hour after hour.

And this comes from Cultivation Theory, proposed by George Gerbner in 1973. Now, as Griffin notes, television's function was as society's institutional storyteller. That lines up with what we've discussed earlier, but for Gerbner, the story being told was violence. As part of his 20-year Cultural Indicators project, there was a lot of
research done into the amount of television violence that was being
shown. And it was more than just the overt acts of violence, it was also
who the violence was directed to, often minorities or marginal
populations.

There was a lot of symbolic vulnerability that was
displayed on television. And this continual repetition of violence
contributed a lot to what people call the Mean World Syndrome. People thought the world was a lot more violent and scary than it actually might be. The sense that there was a high chance of being involved in violence, a fear of walking alone at night, the perceived activity of police versus what they were actually doing, and a general mistrust of people all kind of came out of this.

For Gerbner, this all
is encapsulated in what he calls cultivation theory, where he studies
the differential between light and heavy TV viewers and sees the
difference in their opinions. Cultivation theory differs from other things like media effects because in the modern landscape, there is no non-TV environment, no anti-environment to it, as we discussed with McLuhan in our Dumpshock episode back in episode number 14. Media effects research is predicated on the idea that there's a before and after exposure to
measure, but because television exposure happens at such a young age,
there’s no meaningful way to test it.
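As a rough illustration of what that light-versus-heavy comparison looks like in practice, here's a small Python sketch of a Gerbner-style cultivation differential: the share of heavy viewers giving the "television answer" to a survey question minus the share of light viewers doing the same. The survey numbers are invented purely for illustration; they're not from the Cultural Indicators project.

```python
# A toy illustration of a cultivation differential:
# compare how often heavy and light TV viewers give the "television answer"
# (say, overestimating their own chance of encountering violence).
# The responses below are invented for illustration only.

def share_tv_answer(responses):
    """Share of respondents giving the 'mean world' answer (1 = yes, 0 = no)."""
    return sum(responses) / len(responses)

light_viewers = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
heavy_viewers = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]

differential = share_tv_answer(heavy_viewers) - share_tv_answer(light_viewers)
print(f"light viewers: {share_tv_answer(light_viewers):.0%}")
print(f"heavy viewers: {share_tv_answer(heavy_viewers):.0%}")
print(f"cultivation differential: {differential:+.0%}")
```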

So Gerbner and others who use it are trying to figure out if the damage is in the dosage. When viewers see repeated instances of violence, they may find that it resonates with their own experiences. People relate the constant portrayal on
television, what they see, to their own experience, even if it only
happened once. But if you're seeing constant acts of violence, mugging, robbery, etc., and it happens to you on one occasion, you're going to think that, yeah, this is what's happening all the time. But the constantly flowing river doesn't just have violence in it. Obviously that's one thing that's there, that's observable, that's testable, that you can get grant money for, for a 20-year study.

But there’s other things going on flowing through the river. The question is, how does it all get there? This is where the agenda setting function of the media
comes in. Recognized by Maxwell McCombs and Donald Shaw in 1972, they
state that we look to news professionals for cues on where to focus our
attention. Paraphrasing Bernard Cohen, they note that the media might not be successful in telling the audience what to think, but they are very successful in telling the audience what to think about.

The challenge is that, as oft repeated, correlation is not causation. Maybe the audience is driving the agenda. In some instances, this may
definitely happen, as with modern news organizations hopping on TikTok
trends or whatever. But on substantive matters, the media drives the
agenda. And Em Griffin points out that several studies have confirmed
McCombs and Shaw’s hypothesis since it was originally published. So, who
sets the media’s agenda?

Ownership, gatekeepers, PR firms, interest aggregations, and lobbyists, the invisible structures that Bourdieu talked about earlier, and this dovetails all the way back to the Frankfurt School when they’re talking about the ownership of the means
of the production of culture. And I want to be clear here that not everybody I'm citing here is, like, a Marxist or a left-wing academic. This is just from observing what's going on.

So, if these invisible structures are setting the agenda, are deciding which rivers flow into the lake of mass culture, what’s the role of the audience? Well, people
are not mindless in this, they have agency. They can choose what they
like and what they want. And as we follow that stream back into the mountains, we're getting a little bit closer to the source. And we find ourselves ultimately asking, what does the audience use media for? This is probably best addressed by the field of study that looks at uses and gratifications. The primary source we're using here is the work of Elihu Katz in 1973, although he notes the idea of studying the audience's gratifications goes back to Cantril in 1942.

What Katz and his collaborators were arguing is that quote, people bend the media to their needs more readily than the media overpower them. The media gratify
individuals by satisfying those needs, whether these are social, like in
the terms of connection or standing, or psychological, like in terms of
belonging or reinforcement. And it's these needs that media is most often used for; that use is part of the equation. These needs can be about knowledge, emotional experience, credibility, or simply connection. And there's a whole host more. They did come up with quite a large matrix to populate their survey with. But the point is that
the audience is not a monolith.

They have agency and there's a wide degree of different uses that they might put the media towards. And some of those may align with the agenda setting that's set in place by the major media companies, but some of it may not. It may be used for more personal purposes. And there's a continual cybernetic feedback loop going back and forth between the agenda setting and the uses of the audience themselves. And somehow the audience always finds new things
that they end up using the media for. Which brings us back to where we
started. That high mountain stream running so very, very fast indeed.
You see, it’s in our imagination, both individual and collective, where
we get those ideas from. The jokes we tell with our friends, the wild
stories that we might come up with, and as those get repeated and
shared, they take on a life of their own.

And sometimes when they’re laid down in a book or a movie, comic book, video game, wherever, they become aspirational. And it’s something we can set our goals towards. It’s like, hey, check out that moon up there, do you think we can get
there? And a hundred years later, maybe it’ll just happen. And I think
that brings us full circle with our Silicon Dreams of last episode as
well.

As we look back over a hundred years of communication, media,
psychology studies, audience research, and the hundred years of
development that have happened while that old man has been up that
mountain, I think you understand now that perhaps that old man is me. This has been a bit of a summary of the academic upbringing that I've had over the last 30 years, the stuff that I was exposed to and how I learned to formulate some of the questions that I did in my research.
But I have one more secret to tell you about the stream, too. Because,
while I might look and sound the part of the old man, there’s a secret
hidden within those swiftly flowing waters. It keeps you young.

Or young at heart, at least. It might not be comfortable, and it continually forces you to re-examine the world around you. You have to climb back up that mountain every day. The water can be cold and uncomfortable, but if you peer within it, you can see what's going on. So, by engaging where the waters run swift and deep, wherever they're fresh and clear, whether it's a TikTok or Mastodon or Snapchat, wherever
the youth might be gathering, that’s where you’ll find a good look at
what the future might hold.

Thanks for joining us here on the Implausipod. In the next episode, we might find exactly what that future holds, when we open up the black box labeled AI that we found during all this dredging in the river, and see what those fast running waters
can tell us about our expectations, the uses and gratifications from that most recent of our technologies. But we may have to wait a few
episodes to find out how that’s all connected to a guy named Samuel
Butler. And then after that, we’ll soon return to Appendix W to look at
Dune before the second movie’s release. Stay tuned. It’s going to be a
busy month.

Once again, I'm your host, Dr. Implausible. The research, writing, editing, mixing, and music is all done by me. I can be reached at drimplausible at implausipod. com, and this episode is licensed under a Creative Commons 4. 0 share alike license. Thanks for
joining us, and I hope to talk with you again real soon. Take care, and
have fun.

Silicon Dreams

(This was originally released as Implausipod Episode 26, on February 4, 2024)

https://www.implausipod.com/1935232/episodes/14428351-implausipod-e0026-silicon-dreams

Silicon Dreams are those glittering visions of mythic intensity that inspire the continued development of revolutionary technologies. Listen to this episode of the Implausipod to learn more about where they come from, and how the mythic imagination has been behind the development of virtual reality, artificial intelligence, and other tech innovations.


When Neuromancer appeared, it was picked up and devoured by hundreds, then thousands, of men and women who worked in or around the garages and cubicles, where what is still called new media were, fitfully, being birthed. Thousands who, on reading his description of cyberspace, thought to themselves, That’s so freaking cool!

And set about searching for any way the gold of imagination might be transmuted into silicon reality. End quote. This is by Jack Womack in the 2004 introduction to the 20th anniversary version of Neuromancer. And this episode of The Implausipod is about those silicon dreams.

Welcome to The Implausipod, a podcast about the intersection of art, technology, and popular culture. I'm your host, Dr. Implausible. And as we ease into 2024, we seem to be living at that intersection, as the technologies of sci fi past are being shown off every week, with new products and instruments of change like automation, robotics, and artificial intelligence being brought to market, and older technologies like 3D printing and drones being so commonplace that you can find them at a Costco or Target.

But this process isn't anything new. It's been happening for at least 35 or 40 years. And when I first began researching it, almost 20 years ago, back in 2005, I had a hunch that I might be onto something, but reality has far outpaced even my wildest imagination. And that imagination is what this episode is about: the mythic imagination that inspires the development of new technologies, whether it comes from science fiction or fantasy or other sources as well.

So for this episode, I’ll take you back to that initial hunch and how it led me to track down the sources of those myths and what impact they had on the creation of the digital sublime and how that has impacted our current reality as well.

And with the incipient release of the Apple Vision Pro, their forthcoming AR VR headset, or whatever their marketing department is describing it as, this hunch couldn’t be more timely because my early work was on the development of virtual reality. 

Now, the hunch came about reading something else unrelated.

It was Ray Kurzweil's work on the singularity that came out in the early 2000s. And I noted how much the work was influenced by, or had influence upon, basically was co-creative with, the works of science fiction that were coming up in those prior 20 years. And it seemed to me that there had to be a lot of overlap between science fiction and science and the development of these new technologies.

But at the time, the literature wasn't there yet. There were a few authors that had worked on it, notably William Bainbridge, who took a look at the early influences on the development of the space program in his 1976 book, The Spaceflight Revolution. Now, this was a sociological review of it, so he was looking at science and engineering at NASA and elsewhere through that sociological lens.

And in so doing, he noted how a revolutionary technology, like spaceflight, came around mostly theoretically before it was even attempted practically. And that theoretical drive was often influenced by, you know, the visions, in this case the mythic visions, that can be inspired by fiction.

I mean, visionaries had long thought about traveling to the moon long before science fiction was even a genre, in everything from Jules Verne's From the Earth to the Moon in 1865 all the way up to Georges Méliès' A Trip to the Moon, the 1902 short film with the bullet in the eye that we all probably famously remember.

So the idea was definitely there, but the technology wasn’t ready and the science wasn’t necessarily sure either. So this is what all made it a revolutionary idea in what we might call Kuhnian terms. They needed a goal, a target, a vision of what to work towards collectively across different countries and different cultures and different political systems.

They were all still kind of building towards this shared collective vision of getting to the moon in this case as the objective. And this holds true for other technologies as well. In the 40 year retrospective on the original publication of his work titled The Spaceflight Revolution Revisited, Bainbridge notes that we’re seeing something similar with the development of the singularity, referencing Kurzweil explicitly, and that that drew from influences going back to the 50s with Arthur C. Clarke’s novel The City and the Stars. 

And we can see that thread connecting all the way through to 2023 with the developments of ChatGPT and OpenAI. So, a 70 year development timeframe from inception to manifestation to when something actually comes about and is brought forth into reality. And did we see similar timeframes with the development of rocketry from inception to landing on the moon?

Yeah. And are we seeing similar lengths with even current technologies like, again, VR or direct neural implants with Neuralink recently being in the news? And again, the answer is yes, anywhere from 40, 50, 60 years from inception to something being made manifest in the world. Now, there can be reasons for this.

Often, it can be tricky, but what drives that development over that long of a time frame? What keeps us going towards the realization of those dreams, of something that will necessarily outlive those who originally imagined it, and perhaps several other generations following, but still working towards that idea, that realization?

And the answer is a cultural one. This is where the role of myth comes in.

When we hear the word myth, particular associations often come to mind. We can think of mythic heroes from ages of legend, like Heracles and Thor, Zeus and Odin, and the modern retellings of those, whether they’re showing up as superheroes in Marvel and DC movies, or cartoon characters like Bugs Bunny being a stand in for Anansi or Coyote.

In fact, comic book literature as a whole is filled with the retelling of myths and legends, but also we can see it in our political discourse as well, with myths about the foundation of a country, like those in the United States, with the myth of the Promised Land, or the Founding Fathers, or Pocahontas, or any of a number of other things.

Usually you can tell by whether they’ve shown up in a Disney movie or something. And I’m not harshing specifically on Disney here, at least not for this. The idea is that these myths are the tales that we share, that we share collectively. They’re part of our common cultural understanding. And we’re gonna call this, for lack of a better term, the mythic dimension.

And this is where some of our ideas come from. And these can be ideas about how we shape our culture, how our political system is supposed to work. We’ve talked previously about the social imaginary, way back in episode 9, and this kind of continues on with that thread, or streams, we’ll kind of start changing our metaphor mid stream, for reasons to be explained next episode.

But the point being is that our innovations come from new ideas, whether that’s social innovations, political innovations, cultural, and technological, and when it’s technological innovations, they often come from elements of culture that deal with technology. In this case, science fiction. Now, that isn’t the only source and only pathway for new ideas, of course.

As Henry Petroski has mentioned, human wants have long outpaced human needs as a driver of new inventions. But when we’re talking about revolutionary ideas, radical innovations, stuff that’s new to the world, then it can be one of those primary sources. And as stated, it’s one of those things that can kind of keep the vision and drive going from generation to generation to generation.

And as an expression of our culture, literature has an important role in maintaining this drive. And in the 20th and 21st centuries, we’ve had an explosion of other cultural artifacts like film, television, photography, gaming, and the rest, and these all have a role too, but literature is going to be our primary focus.

And the role that literature takes is that of an exemplar. It points forward towards a daring imaginative goal that may not be achievable, but at least gives those who may be in a position to enact change something to aim for. As Northrop Frye notes, “the written word recreates the past in the present and gives us not the familiar remembered thing, but the glittering intensity of the summoned up hallucination.”

This is from 1981. And it’s in this role that fiction finds itself as a part of literature, as a creator of the prophecies that contradict the conventional wisdom. It allows us to take all these opportunities and use them to drive towards the future. And building on what Northrop Frye said, the Canadian author John Ralston Saul elaborates, he says: “Fiction often reveals to us a greater understanding of our own society as it functions today.”

In other words, great fiction can be true for its time, as well as somehow timeless and true for our time. So this is the role that fiction plays, providing a goal, something timeless and transcendent and intense, something that we can work towards as if it was a dream. And this is what brings us to the development of these new and emerging technologies.

And I do want to stress that we’re looking at multiple technologies here. It isn’t restricted to just one thing. As Canadian academic Vincent Mosco pointed out in his book The Digital Sublime, there’s been similar cycles of mythic inspiration for previous radical technologies like the telegraph, electricity, radio, and television.

And as we noted in our Postcard from Earth episode, this can apply to cinema as well, what Andre Bazin was talking about with regard to the myth of total cinema. What these all link back to is what Perry Miller calls the idea of a technological sublime. An American historian of technology, David E. Nye, goes further into the exploration of this in his own work.

What the technological sublime is, is that mythic feeling that we get when we encounter new technology, the one that strikes right through to our emotions. And it doesn't necessarily have to be anything electronic; it can be something like witnessing the Hoover Dam, or the first experience of air travel.

But honestly, indoor plumbing, refrigeration, and light switches can all conjure that experience as well, especially if you’ve never experienced it before. To return to Arthur C. Clarke, who we mentioned earlier, that old adage that sufficiently advanced technology is indistinguishable from magic holds true, and this is how we have to understand the enduring appeal and pursuit in development of a new technology, VR.

As the Apple Vision Pro launches, there’s no killer app for it. The business case for it is limited and tenuous at best. The use seems forced, often within the Apple ecosystem, and we don’t know what the enduring appeal of it is. Now, it may be that its time has finally come, with other developers like Meta and Valve both producing products within that market.

And this may create enough interest in it for not just a standard to emerge, but also user demand to match up with the available supply. And this is largely the challenge, to make reality match our dreams. Now, the myths of VR largely come from science fiction within the 70s and 80s, so there was contemporaneous development within the technological sphere as well.

Now, there are authors who have gone into great depths about the history of VR, circa 1990. I’d refer the audience to both Howard Rheingold’s Virtual Reality and Michael Heim’s The Metaphysics of Virtual Reality from 91 and 93, respectively. But when it comes to cultural representations, there have been versions of virtual reality going back for decades.

In 1973, there was a short film version of the Ray Bradbury short story The Veldt, which was originally written in 1950. It was marketed as educational programming, and so the contents of that were burned into my brain when it was shown at school. It took my little eight year old brain a little while to understand what those lions were eating in the final frames of that one.

And you can follow a stream through from that one to the first appearance of the Holodeck on Star Trek: The Next Generation in 1988, and then every subsequent appearance thereof. And somewhere in between we had the original Tron from Disney. But the visual representations were few and far between. The main source of representations of virtual reality was science fiction.

While we had early versions of computer use, like John Brunner's Shockwave Rider from 1975, which would still be recognizable to a modern audience, with its gated communities, urban decay, computer viruses, and identity theft, the first major representation of virtual reality would be Vernor Vinge's True Names from 1981.

Now, both Shockwave Rider and True Names had something in common, that they were gobbled up by the people working in computer engineering at the time. Whether it was on campus or within specific firms, the reports are that both those titles were ones that were held in high regard by computing enthusiasts in the 70s and early part of the 80s.

As Katie Hafner and Matthew Lyon note in their book Where Wizards Stay Up Late, “Brunner became a cult figure as the book swept through the worldwide community of science fiction readers. It had a strong influence on an emerging American computer underground, a loose affiliation of phone freaks, computer hackers in places like Silicon Valley and Cambridge, who appeared simultaneously with the development of the personal computer.”

And six years later, this was still going on when True Names was published. As James Frenkel notes, quote, “When True Names was written, it was considered visionary, and was read by some of those who have had a great deal to do with shaping the internet to date.” And while I admit that his mention is problematic now, writing in the afterword to True Names, Marvin Minsky, the co founder of MIT’s Artificial Intelligence Lab, writes, and I quote, 

“In real life, you often have to deal with things you don't completely understand. You drive a car, not knowing how its engine works. You ride as passenger in someone else's car, not knowing how that driver works. And strangest of all, you sometimes drive yourself to work, not knowing how you work yourself. To me, the import of True Names is that it is about how we cope with things we don't understand.

But, how do we ever understand anything in the first place? Almost always, I think, by using analogies in one way or another, to pretend that each alien thing we see resembles something we already know.” end quote.

So it’s here in the early 80s where computer scientists and developers are being influenced by the science fiction texts, and you’ll note that I’ve hardly even mentioned the words cyberpunk or cyberspace up to this point in time.

We've covered cyberpunk in depth way back in episode 3, and honestly, we will continue to do so in the future. But the influences for the current implementations of virtual reality mostly draw from Neal Stephenson's Snow Crash, whether it's Meta slash Facebook's pursuit of creating the metaverse, or whether it's Apple Vision Pro wearers inadvertently becoming the gargoyles from Snow Crash, conducting OSINT at every opportunity, whether they mean to or not.

But the point is that these ideas of how virtual reality might be achieved, what it would look like, and how it would be incorporated into our daily lives, were prevalent long before the development of the tech actually enabled its use on a regular basis. The vision of the technology of what it could be is what drove the development and subsequent adoption as the users could see themselves incorporating those technologies into their own lives in ways similar to what they saw within the books.

The reason why is that those ideas sparked the mythic imagination as we noted earlier. As Mosco mentions, philosopher Alasdair MacIntyre concludes that “myths are neither true nor false but living or dead”, and the myths of virtual reality are still very much alive. All the attempts to bring them about in the real world, and the unsuccessful attempts at that, haven’t managed to kill the myth or kill the dream.

To quote Mosco a little bit further here: “A myth is alive if it continues to give meaning to human life, if it continues to represent some important part of the collective mentality of a given age, and if it continues to render socially and intellectually tolerable what would otherwise be experienced as incoherence.

To understand a myth involves more than proving it to be false. It means figuring out why the myth exists, why it is so important to people, what it means, and what it tells us about people's hopes and dreams.”

So what does it mean if we’re continually pursuing these dreams of being someplace else, not on this earth, of having different jobs, of having different lives, having a different society that we live in?

And what does it mean when those dreams are pursued by the very richest among us? We can understand what the silicon dreams might mean to the average citizen, the regular users, or even to the developers trying to bring about something “freaking cool”.

But what does it mean to those who, to quote a James Bond film, would say “the world is not enough”, to the technocrats and the industrialists and the billionaires? Why are they so dogged in their pursuit of something that has no killer app? Stick with us as we dig deeper into this in future episodes of The Implausipod.

Thank you for joining us once again here on the Implausipod. I’ve been your host, Dr. Implausible. You can reach me at drimplausible at implausipod. com for any questions, comments, or concerns. The show is licensed under a Creative Commons 4. 0 share alike license. All research, writing, editing, mixing, and music is done by me, Dr.

Implausible. Join us soon for The Old Man and the River, as we’ll look further at the impacts of pop culture on the development of technology. And then I think we’ll be returning back to Appendix W for a couple episodes before the release of Dune II. I hope you join us for that. Stay tuned, take care, and have fun.

Bibliography:
Bainbridge, W. S. (1983). The Space Flight Revolution: A Sociological Study.

Bainbridge, W. S. (2002). The Spaceflight Revolution Revisited. In Stephen Garber (Ed.), Looking Backward, Looking Forward: Forty Years of U.S. Human Spaceflight Symposium. National Aeronautics and Space Administration. http://mysite.verizon.net/wsbainbridge/dl/spacerevisit.htm

Brunner, J. (1975). The Shockwave Rider. Harper and Row.

Frenkel, J. (Ed.). (2001). True Names and the Opening of the Cyberspace Frontier. TOR.

Frye, N., & Lee, A. A. (2007). The great code: The Bible and literature. Penguin Canada.

Hafner, K., & Lyon, M. (1996). Where Wizards Stay up late: The Origins of the Internet. Simon and Schuster.

Mosco, V. (2005). The Digital Sublime: Myth, Power, and Cyberspace (1 edition). The MIT Press.

Ray Bradbury (Director). (1973, September 16). The Veldt. http://archive.org/details/the-veldt

Rheingold, H. (1991). Virtual Reality. Summit Books.

Saul, J. R. (2005). On Equilibrium. Penguin Canada.

Stephenson, N. (1992). Snow Crash. Bantam Books.

Vinge, V. (1981). True Names. Bluebird.

Womack, J. (2004). Some Dark Holler (pp. 355–371). Ace Books.