Terminus Est

(this was originally published as Implausipod Episode 43 on February 5th, 2025)

Terminus Est (as seen on the cover of The Shadow of the Torturer (Wolfe, 1980))

https://www.implausipod.com/1935232/episodes/16530739-e0043-appendix-w-99-terminus-est

In the grim darkness of the 41st millennium, some things come to an end. Join us as we look at the impact of the Appendix W on real world events through a look at one of the most iconic blades in fiction: Severian’s Terminus Est from Gene Wolfe’s 1980 novel The Shadow of the Torturer.  But much like the blade, there is much, much more hidden below the surface of this episode.


In the grim darkness of the 41st millennium, some things come to an end. So too with Appendix W, as we have reached the final episode, where we take a look back at what has come before. Since the launch of this podcast, real world events have disturbingly breached through from the chaos of the warp into this reality.

We will look at the root causes of why, in this Appendix W episode of The Implausipod. Welcome to The Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in this special Appendix W episode, I wanted to get to the end point of what Appendix W is all about, because since we started it, I’ve always known where the end point was going to be.

There’s a line I remember from my childhood, from the theme from Mahogany. Not the original song by Diana Ross, but a cover out of Europe. Do you know where you’re going to? When it came to Appendix W, the answer was an emphatic yes. I had a good idea at the outset where this would lead since the initial post back in 2021.

This comes with the benefit of hindsight and experience, where one can develop a good idea of the feasibility of a project at the point of inception. However, while you may have a destination in mind when you start a project, the place you wind up at may be wildly different, or at least the path may be more circuitous than expected.

So if I didn’t discover anything new along the way, it would have been a fine project, but I would have been a little disappointed. And we did uncover some new things, and that’s been fantastic. Of course, anyone familiar with that rather famous song knows the next verse starts with, did you get what you’re hoping for?

And the answer to that is, not quite. So in this penultimate episode of season one, and I say penultimate with the biggest bunny ears possible, we’ll get into the whys, wherefores, and what we learned along the way. The original endpoints of this project can be seen in some of the sections that we started with.

The descriptions of technology, the methods of travel, the aliens encountered, all overarching aesthetic elements by which we classify something as sci fi. And while we were off hunting for the origins of things, we began to weigh how much these tales had directly influenced the descendant they had so heavily inspired.

That inspiration can be seen directly in how some of those aesthetic elements were portrayed by their modern descendant, Warhammer 40,000. But there’s more to it than just the aesthetic dimension, as the beliefs and ideologies of those authors were embedded in the fiction they wrote as well.

Sometimes explicit, as seen in Starship Troopers or The Forever War. Sometimes more tacit or obfuscated. These beliefs were those of the post war era, in tales written by men who often served or came of age during World War II. Their science fiction reflects that era. We see large militaries and bureaucracies, hierarchies and authoritarianism.

Of the belief in the rightness of one’s cause, of being on the winning side. Sometimes this is questioned, as in Dune, and sometimes it is exaggerated to the point of satire, as in Judge Dredd. But regardless, they were common enough that the tropes and stereotypes began to be repeated. I’m looking at you.

So, part of our original goal with Appendix W was to see how the impact of these ideologies can be traced as well. That line that follows through fiction throughout the decades. The continuous feedback loops between fiction and the real world. And this is still one of the goals. But the real world has funny ways of moving faster than you might like, and real world events are starting to see the manifestation of these ideologies in ways that weren’t thought possible.

While real world events were perhaps the main reason that Appendix W wasn’t quite what I was hoping for, those real world events also offer us an opportunity to frame and focus our story, and to understand why we’ve come to the end.

Terminus Est

Why Terminus Est? Well, in Latin it quite literally means, it’s the end.

But it means something rather different in the context of science fiction and Warhammer 40k. In sci fi, it is one of the great swords of fiction, in a pantheon of named blades along with Stormbringer and Dragnipur and many others. Terminus Est was the sword of the executioner Severian in Gene Wolfe’s The Book of the New Sun.

We mentioned it in passing when we talked about that book back in episode 24 of Appendix W. You can see an image of it from the cover of the paperback edition of the book in the thumbnail for this episode of the show. It is from this iconic presentation that all of its other manifestations flow, from Castlevania and Path of Exile, to the manga of Blade Dance, to all of the other ridiculously oversized two handed swords and daiclaves that show up in anime, D&D, and Exalted, to an appearance in Warhammer 40,000 itself as the name of the flagship of the Death Guard we’ve covered before.

The aesthetics of Gene Wolfe’s work in the Book of the New Sun, the imagery and use of language, are redolent throughout the lore of 40k. That idea of a fallen humanity long in the future, dealing with technology that they no longer understand, is seen throughout the work. Perhaps we can best show this in how Terminus Est is introduced to the readers on page 106 of the Timescape edition from 1980.

Quote, the sword herself. I shall not bore you with a catalogue of her virtues and beauties. You would have to see her and hold her to judge her justly. Her bitter blade was an ell in length, straight and square pointed, as such swords should be. Man-edge and woman-edge could part a hair to within a span of the guard, which was of thick silver with a carven head at either end. Her grip was onyx bound with silver bands, two spans long and terminated with an opal. Art had been lavished upon her. But it is the function of art to render attractive and significant those things that without it would not be so, and so art had nothing to give her. The words Terminus Est had been engraved upon her blade in curious and beautiful letters, and I had learned enough of ancient languages since leaving the Atrium of Time to know that they meant, This is the line of division. End quote.

But Terminus Est is an unusual blade, and she holds some secrets within her.

Quote, There’s a channel in the spine of her blade, and in it runs a river of hydrargyrum, a metal heavier than iron, though it flows like water. Thus the balance is shifted towards the hands when the blade is high, but to the tip when it falls. End quote. So, light to raise, weighty to descend, as we hear so often throughout the series.

And, if this is to be the end, then there is no more fitting artifact to focus on for this episode. So let’s take a moment to look back at Appendix W through the lens of the Executioner’s Blade.

While we’ve covered an incredible amount in the previous 98 episodes of the series, I’d like to mention some of the highlights for me. Of course, whenever channels look at the influences on 40k, there is a focus on the obvious ones: Dune, Starship Troopers, and Judge Dredd. And we did touch on all of those.

But for me, the delight was in finding and uncovering those hidden little gems that found their way into the lore. Star Trek isn’t generally mentioned as a direct influence on Warhammer 40,000 in the way that those other titles are, mostly due to the more utopian view of the future that that series held, though the 40k orks have a lot of parallels to the Klingons.

It was the revelation of the origins of the Terran Empire that surprised me the most: that alternate universe version of Star Trek, first seen in the episode Mirror, Mirror, where Spock famously wore a goatee, so you knew he was one of the baddies. The agonizers and punishments that have become staples of both the Imperium and the Dark Eldar in the Warhammer 40,000 universe showing up there first was a nice touch, and I’m glad we spent several episodes going through our deep dive on the original series.

These small influences showed up again in our very first episode, where we saw the enslavers from the Rogue Trader rulebook appear as they did on screen in the Space 1999 episode titled Dragon’s Domain. This is sci fi with a more British feel than Star Trek, and this difference could be seen when we looked at Blake’s 7 back in episode 17.

Yeah, I know it would have worked out better if I had planned that one ahead, but I enjoyed our further look at the Instrumentality in Episode 7 instead. That same Instrumentality played a huge part in our review, as we spent three episodes on it throughout the series. The amount of influence that Cordwainer Smith’s writing had on Warhammer 40,000 is perhaps understated, and he indirectly impacted Dune as well, but his work gave birth to so much of the day to day of the Imperium: the warp, the Mechanicum, and the relationship they have to technology.

It was a real pleasure to share that with you. Of course, Smith’s work was a very American, West Coast view of sci fi, as was Herbert’s, and Gene Wolfe’s too, who we looked at as we reviewed each of the four books of the Book of the New Sun, and here again in this episode with the Blade, Terminus Est. All three of these series, the Instrumentality, Dune, and the New Sun, touched on the themes of the Earth in the distant future, of the dying Earth genre, though we only spent a little bit of time on Jack Vance’s work of the same name.

Deep Time appeared repeatedly, as seen in the Foundation series we did back in episode 50, though I’ll admit it was hard to separate the books from the TV adaptation on Apple. And here we can see some of the commonalities of the authors of the early influential science fiction, as Asimov, Heinlein, Smith, and Vance all worked for the U.S. military in various capacities during World War II. We’ll pick up on this thread in a moment.

Of course, even though much of the sci fi of the quote unquote Golden Age was written by Americans following their experience in the war, there was no shortage of British influence as well. We mostly skipped over the rather obvious Tolkien influences, opting for just a quick episode there discussing how those contributions to the fantasy genre as a whole found their way to 40k through the influence of Games Workshop’s fantasy series, the original Warhammer.

This is where the works of Michael Moorcock showed up as well, back in episode 10, when we looked at Stormbringer, the sword with a trapped demon within it that inspired the whole mythology of daemon weapons within Warhammer. For me personally, the biggest revelations came from my first exposure to much of the British media that I had only rarely glimpsed growing up.

As Canadians, we tended to get overlapping coverage of both British and U.S. culture, but it was very selective, and there was some stuff I really hadn’t seen at all. So whether it was Doctor Who, or Blake’s 7, or the various comic series included as part of 2000 AD, discovering how those filtered into Warhammer 40,000 was fascinating, and I’m glad I got to share those with you in the multiple episodes we did.

I’m also happy we brought in some outside experts for a look at the Gundam series, with an interview with veteran modelers and fans of the franchise. The Gundam influence on Warhammer 40,000 didn’t really start showing up until later, with the release of the Tau Empire, but big stompy robots were there from the beginning.

But no exploration of sci fi influences would be complete without looking at the impact of Hollywood. Perennial franchises like Star Wars, Aliens, and Terminator all showed up in various ways, and I’m glad we got to those franchises eventually. But as we mentioned in those episodes, they are widely popular and well known, so I’m also happy we waited as long as we did before taking a look at them, as the little details of the earlier, smaller titles would have been eclipsed by the giants of the genre.

However, it is in the films that we can most easily see the differences in the sci fi ideologies that are represented within the series.

And what are the ideologies that we see? Well, as with most popular culture, what we see is a reflection of our own society. Which is why we see militarism, corporatism, hierarchies, and a focus on the commodities and trade in many of the stories. Some aspects of our society seem inescapable, what Mark Fisher calls capitalist realism, where it is easier to imagine a far future than a coherent end to capitalism.

Which is why, even in the far future of the Dune universe, filled with religion and medievalism, we have a monopolistic corporation like CHOAM controlling the economy behind the scenes. But the underlying ideology and our relation to it can change over time, and while this might not be stated explicitly, we can see it in the changing visual representations of pop culture.

Within sci fi cinema and television, we can see certain eras that are most clearly identified by their aesthetic. We start in the 60s, the clean era, where shows like Star Trek, the original series, and Stanley Kubrick’s 2001 both drew inspiration from the space programs of the time. The clean lines and shiny panels everywhere, with hardly a mote of dust to be seen.

A show like Space 1999 serves as a transition piece, as the space station becomes more worn down over time, reflecting the diminishing resources of the station and the economic malaise and uncertainty of the time, bringing us to the era of grit and grime, exemplified by late 70s pieces of sci fi like Star Wars and Doctor Who.

And as the 70s drew to a close, that grit turned into grease and grime, to the greasy productions of films like Alien and The Ice Pirates, with steam filling the atmosphere and hiding the sets, and condensation and grease liberally applied across the surfaces. The grit was still there, of course. The recently deceased director David Lynch’s adaptation of Dune and the frenetically paced post apocalyptic Road Warrior still had much dirt and dust, but the bright future of the 60s had definitely drifted over to the dark side.

So too in the fiction. While we noted that the foundational elements of 40k consisted of a blend of British, American, and occasionally Japanese or European sci fi and fantasy, there was a strong showing by American writers of sci fi that focused on deep history and the dying earth: Asimov’s Foundation, Smith’s Instrumentality, Vance’s Dying Earth, and Herbert’s Dune, if we were to lay them out roughly chronologically.

But this underlying ideology has connections to U. S. military policy. As noted by Chris Hables Gray, not only has science fiction predicted many of the recent changes in war, there is a strong argument that it has influenced them to some extent. Military science fiction and military policy coexist in the same discourse system to a surprising degree, and we have sci fi as policy.

And for Gray and others, this can be seen again and again. Gray notes how H. Bruce Franklin looks at the way superweapons occupy space within the American collective imagination, that space we talked about back in episode 26, Silicon Dreams. There, we were introduced to the idea of the collective imaginary with respect to virtual reality and artificial intelligence, but we find it again here too in terms of superweapons and mechanized warfare, which even Thomas Edison was talking about as early as 1915.

While the earlier sci fi had militaristic themes, as those early authors like Heinlein drew on their military backgrounds, showing us vast navies, hierarchical organizations, authoritarian systems, and war amongst the stars, this shifted in the 70s and 80s with the rise of the subgenre of mil sci fi. We covered some of it, from the hover tanks of David Drake’s Hammer’s Slammers, to the eternal wars between Man and Kzin in Larry Niven’s Known Space universe, to the Janissaries universe of Jerry Pournelle.

Jerry Pournelle, who passed in 2017, was a Korean War vet who worked in the aerospace industry and entered academia, earning degrees in psychology and political science. While we didn’t cover much of his work directly, save for our discussion of orbital bombardments in the episode on satellite warfare and the origins of the Exterminatus in Warhammer 40k, he did collaborate with a number of other authors we looked at and was a prolific writer in the field.

However, he may be more influential on the field for his academic writing rather than his sci fi. Specifically, 1970’s The Strategy of Technology, co-authored with Stefan Possony, where they argued for the demonstration of technological superiority as part of a country’s doctrine. And this was seen in the American pursuit of stealth technology, and in Reagan’s SDI program, the Strategic Defense Initiative, known as Star Wars.

It could be argued that these are all elements of what Mary Kaldor calls the Baroque Arsenal, and we can see that baroque style seeping through in the arcane elements of forgotten technology in Terminus Est and Wolfe’s Book of the New Sun, in Dune, and in Warhammer 40,000 itself. I bring up Jerry Pournelle because his political views were embedded within his work, and he recognized and acknowledged this.

He self-described as being, quote, somewhere to the right of Genghis Khan, but his conservatism tended more to the isolationist view, what is now described as paleoconservatism, which was opposed to the Roosevelt New Deal and has been supplanted by neoconservatism in the US. And like many of his sci fi colleagues, he worked as a consultant, an advisor, or a futurist for various organizations during the Cold War.

And this is part of our rationale for ending. It leads us into why we’re wrapping up this chapter of the Appendix W, or speedrunning to the end at least. Since we started this project, the world has gotten darker, and those dark elements of our entertainment are escaping the turbulence of the warp and manifesting in our reality.

With Khornate imagery and iconography adopted by troops fighting on the front lines of the Russo-Ukrainian war, and sayings such as Blood for the Blood God being bandied about everywhere from internet commentary to the pro wrestling forums, the brutality of the Warhammer 40,000 universe is seeping into our public discussion, stripped of the irony and satire attached to it in the in-universe materials, where every text is issued by an unreliable narrator.

The audience still realizes that, right? That it’s satire? Sometimes I question this, as dank memes portraying certain public figures as the god emperor of mankind are posted in earnest on the internet, or, if posted with an ironic wink by the commenter, are perhaps taken up and spread less ironically by the followers and algorithms that lift them up to virality.

Spreadable media of the most infectious kind. Papa Nurgle would be proud. 

And of course, there’s the cosplay, which has grown in recent years to become an industry unto itself, but which has also seen growth in the fandom of the adversaries in the various sci fi universes that we enjoy. While many cosplay conventions have adopted explicit rules against historically fascist or racist imagery, they are much more lenient when it comes to allegorical representations, and as we’ve mentioned throughout this episode and series, sci fi is rife with allegory.

Elements that were clearly presented as allegorical in the original fictions were shaded in with grey during the intervening years and have been embraced by the fandoms at different points. Elements of clear satire, Starship Troopers and Judge Dredd most specifically, were taken at face value. And so the critique they presented of the police state or the militarization of fascism gets subsumed by the larger sci fi trappings of the settings.

These fandoms have become groups unto themselves, with groups like the 501st, a now international troop of cosplayers that wear stormtrooper armor and march around conventions and other events. The group represents the baddies in Star Wars, wearing armor and helmets designed to look like skeletons and skulls, baddies who were originally patterned off of the Americans in Vietnam.

The rebels, of which Luke and Leia were a part, were the Viet Cong, according to an interview George Lucas gave with director James Cameron in 2018. And the 501st is not alone among the groups of bad guys that find representation within the cosplay community. But the issue is that fashionable cosplay becomes fashionable dress rehearsal, and from there it seeps into everyday life.

So too with Warhammer 40,000. The grim darkness of the 41st millennium finds no shortage of representations of evil: from the grinding military machine of the Imperial Army, the Astra Militarum, with its Commissars and the World War I German-inspired Death Korps of Krieg, to the transhuman space marines, the Adeptus Astartes, who draw inspiration from the armored soldiers of Starship Troopers and The Forever War and from the Sardaukar of Dune.

We see this continue in the Judge Dredd inspired Adeptus Arbites, the space cops that police the regular population, and the Inquisitors that purge heresy with the fervor of the now-expected Spanish Inquisition. Games Workshop has repeatedly stated that their work is satire, but how much weight do those statements carry, especially compared to the evidence of all the other material published for their universe?

In a statement made on their website in 2021, Games Workshop stated, “The Imperium of Man stands as a cautionary tale of what could happen should the very worst of humanity’s lust for power and extreme, unyielding xenophobia set in. Like so many aspects of Warhammer 40,000, the Imperium of Man is satirical.

For clarity, satire is the use of humor, irony, or exaggeration, displaying people’s vices or a system’s flaws for scorn, derision, and ridicule. Something doesn’t have to be wacky or laugh out loud funny to be satire. The derision is in the setting’s amplification of a tyrannical, genocidal regime turned up to eleven.

The Imperium is not an aspirational state outside of the in universe perspectives of those who are slaves to its systems. It’s a monstrous civilization, and its monstrousness is plain for all to see. That said, certain real world hate groups and adherents of historical ideologies better left in the past sometimes seek to claim intellectual properties for their own enjoyment, and to co opt them for their own agendas.”

This statement was issued as a response to someone wearing full Nazi regalia to a tournament in Spain in 2021. But it’s indicative of the larger issue, and I think we need to look forward for solutions. Games Workshop may disavow the use of their material by hate groups and claim that it is satire, but it’s not clear that some groups are getting the message; or rather, the preponderance of darkness within the universe provides cover for those who would use it for nefarious ends.

The issue is that you run the risk of being that kind of bar, the one that, by tolerating a few of them, becomes known as theirs. Now, it’s not that I think that Warhammer 40k is irredeemable, it’s just that the Grim and Dark is just that, Grim and Dark, and that sometimes the best way to combat the dank memes is to know where they come from, to detoxify them. And I know some of the audience loves the dank, and thinks the dankness is their ally, but you merely adopted the dank.

I was born in it, molded by it, I didn’t see Mr. Rogers until I was already a man, and by then it was nothing to me but blinding. But I digress.

Warhammer 40,000: Rogue Trader was originally published in 1987, and it collected its inspirations, wove them together, and wore them on its sleeve, adding more fabric to the quilt as time went on. Over the early editions those influences became so incorporated into the design that the sources are forgotten, and this is what we are highlighting here, especially with the more obscure titles.

But eventually, 40k grew to be enough of an influence in its own right that it was influencing the culture that it had previously assimilated. In 2025, it’s something that needs to be stressed, that the media environment that 40k was released into was vastly different than the one that existed even 10 years later, as the 20th century drew to a close.

Some of the concurrent and subsequent influences of Warhammer 40,000 can be seen in other media titles, titles like Aliens, which was released in 1986, or Star Trek: The Next Generation, originally starting in 1987, and its subsequent introduction of the Borg as an antagonist in episodes like Q Who in May of 1989 and the two-parter The Best of Both Worlds in June and September of 1990.

Big sci fi movies like Independence Day came out in 1996, the Starship Troopers movie was released in 1997, the video game StarCraft came out on March 31st of 1998, Terminator 2 was released in 1991, and the Star Wars prequels began coming out in 1999, and all of these had subsequent influences on Warhammer 40,000.

As we go forward with the Appendix W, and we will be going forward, we will be looking at the interplay that took place during the early 1990s, a fallow period in sci fi which allowed, or forced perhaps, Warhammer 40,000 to build on its own mythology and become the cultural icon and brand that it turned into. Why are we doing this?

Well, as I stated, partly it’s a speedrun in order to catch us up to the present, as current events have forced the timeline along, and we don’t want to be looking at stuff that’s so hopelessly dated that it has no impact or anything to say about what’s going on currently in our world. And from this point forward, episode 99, we’ll be looking both backwards and forwards at the various titles that influence and shape what’s going on.

This will be shaped a little bit by whatever gives me joy in the moment, but I’ll do my best to announce in advance whatever it is I’m working on so that you, the listener, can follow along. I don’t know if many podcasts have tried something like this before, or if some have but have scrapped it because it’s a bad idea, but we’ll give it a shot, because it gives me a little bit of joy to do so, and that joy is critically important.

As you may have noted, since it’s been over ten months since we last published an Appendix W episode, I’ve been struggling a little bit with that joy, with that creativity, and this has taken place over the holidays and continued into the new year as well, with the seemingly unending flood of bad news.

As you can tell by the existence of this podcast, we managed to get things moving a bit, but the first step was turning off the fire hose and following through with some constructive actions for your own media diet and mental health. The second step was to keep creating. I mentioned my struggle in passing to a friend, and I was pointed towards an interview that Heather Cox Richardson had done with the National Press Club.

The relevant bit is around the 57 minute mark in the clip, and I’ll link to it in the show notes. The gist of her advice is to behave with joy as a means of resistance, to do the things that matter to you and that you can bring to the people around you. We can meet the moment as scholars by being honest and by doing the best scholarly work we can, and in doing so we contribute back to humanity.

And the Appendix W and the podcast at large are both scholarly works; it’s stuff I studied in grad school, and I want to continue bringing that knowledge and information back to a larger public. Even though contributing back to humanity seems like a lot to ask from a blog and media channel that mostly focuses on the intersection of sci fi and technology, it is what we’re doing. Maybe our project is a little bit wider in scope than we initially thought. But the big takeaway, at least for me, is that moment of reflection: I like what we’re doing here, and I enjoy doing the podcast, the blog, the newsletter, and YouTube, which I hope to publish more on in 2025, and the various other bits that we have going on here.

So, after a brief period of stasis, we’ll get back to the things that bring us joy and find the joy in sharing them with you as well. So let’s pick up that long, finely honed blade of Terminus Est one last time. Though not to wield it, but to return it to its scabbard and look toward the future.

Thank you for joining us on this special Appendix W episode of the ImplausiPod. We’ll return next episode with the start of our series on cyberspace and examine some of what is being built around us, and what this is all about. After that, we’ll be looking at the first season of Andor, and we may have just a few other surprises to throw your way.

In the meantime, I’m your host, Dr. Implausible. You can reach me at doctorimplausible@implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 share alike license. You may have also noted that there was no advertising during the program, and there’s no cost associated with the show, but it does grow from word of mouth of the community, so if you enjoy the show, please share it with a friend or two and pass it along. There’s also a Buy Me a Coffee link on each show at implausipod.com, which will go to any hosting costs associated with the show.

Over on the blog, we’ve started up a monthly newsletter. There will likely be some overlap with future podcast episodes, and newsletter subscribers can get a hint of what’s to come ahead of time, so consider signing up and I’ll leave a link in the show notes.

Until next time, take care and have fun.

Bibliography

Gray, C. H. (n.d.). “There Will Be War!”: Future War Fantasies and Militaristic Science Fiction in the 1980s. Retrieved September 3, 2023, from https://www.depauw.edu/sfs/backissues/64/gray.htm

Kaldor, M. (1981). The Baroque Arsenal. Hill & Wang Pub.

https://www.amc.com/blogs/george-lucas-reveals-how-star-wars-was-influenced-by-the-vietnam-war–1005548

https://fanexpohq.com/fanexpovancouver/costume-policy

https://www.warhammer-community.com/en-gb/articles/1Xpzeld6/the-imperium-is-driven-by-hate-warhammer-is-not

Heather Cox Richardson interview: https://www.youtube.com/watch?v=QDX0hxyYcJw

Dr Implausible’s Book Club

“Read a book!” This is more than just the catchphrase of Handy, the supervillain puppet and partner of the Human Ton in The Tick animated series (1994). It’s also one of the more effective ways to spread knowledge. And while there may be an anxious pressure in the first month of 2025, a sense that reading is a distraction or ineffective, there’s no time like the present.

“Read a book!” (Handy, 1994)

While TikTok is seeing a nice resurgence in learning with the #HillmanUniversity and #TikTokUniversity programs, here we’ll just focus on going through some critical books, one at a time. This is an expanding and evergreen project, so we’ve created a page for it over in the pages section: Dr Implausible’s Book Club. We’re also mirroring the content over on the indie version of the blog here.

This one is focused on academic content, but there are a couple of concurrent and overlapping genre-specific themes that we’ll dip in and out of too. We’ve introduced both of those on the podcast, in the early days, with the Cyberpunk 101 episode and the Introduction to Appendix W (which we mentioned here way back in… 2021? Whoa). We sorta-kinda did the Appendix W as its own thing, and that may still continue, but we’ll try and keep everything contained here as well, in case you don’t feel like following three separate things. For those only interested in a specific element, the companions will help narrow that focus.

We’ll start with Technology Matters: Questions to Live With by David E. Nye (2006). This was a text used as supplementary reading for one of the classes I taught in the past, a “sociology and ethics for engineers” type of class in the STS vein. It’s approachable and written for a non-technical audience, which makes it especially worthwhile. As Nye mentions in the preface, these are big questions, and such big questions defy simple answers (or at least ones that are easily testable), and as such we have to come at them with some empathy. Or at least, that’s my take.

Technology Matters (Nye, 2006)

We’ll start with the basics, and check back in over the next week or so, and then publish a full post (on at least one of the platforms). Trying hard not to overcommit at the outset though. Let’s see how it goes…

Incipient Diaspora

(this was originally published as Implausipod Episode 42 on January 17th, 2025)

https://www.implausipod.com/1935232/episodes/16453686-e0042-incipient-diaspora

What happens when a change is on the horizon, one that will force you to move but is outside your control? When a community knows it will be disrupted, it may be facing an Incipient Diaspora. For the US denizens of the TikTok app, facing a ban in that country on January 19, 2025, we can observe how they reacted and prepared, and what lessons can be learned from the ongoing situation.


A famous poet once wrote that the waiting is the hardest part. Sometimes the antici-pation can be wonderful, sometimes it can be terrible. But as we wait, that sound of inevitability, that rush of air in the distance signaling the approach of the sublime, sometimes all we can do is our best to get through the storm.

As we start 2025, we can see multiple storms on the horizon, some closer than others, and communities are handling this differently. One of the worlds we’ve been looking at is deep within cyberspace, and the netizens of TikTok are facing the looming dissolution of their world. Everyone is making plans on what to do next as they pass through that singularity, leaving messages about how to find one another on the other side.

We talked about this a little bit back in June of last year in TikTok Tribulations, but the trouble with tribulations is that they don’t just go away. When faced with an incipient diaspora, what do you do? Is it about the waiting or is it about the recovery? We’ll talk about both in this episode of the Implausipod.

But before we begin, a brief note. After we started recording this episode in late December 2024, the Eaton and Pacific Palisades wildfires devastated communities in Los Angeles, California, destroying thousands of homes and displacing many thousands more. Our hearts go out to those affected, our thanks to the firefighters and others involved in the recovery, and we urge you to contribute to a charitable organization that can assist with helping the survivors.

This episode is about loss and displacement, but it is not a commentary on the specific events of the 2025 L. A. wildfires. Thank you. 

Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. In the last weeks of 2024, it was clear that there was a change in the air. The tone of the content made by various posters on TikTok started to change. A lot of people started making posts about other places where they could make content, or, for the more casual poster, about where everyone was going.

There were more than a few lurkers asking where the party was going to be; it had some real Steve Buscemi with a skateboard saying hello fellow kids energy. It was the collective realization that, absent any act of deus ex machina, by January 19th TikTok would go away, with legislation in the United States poised to ban the company from operating within those borders.

Of course, TikTok has a global audience, so various Brits, Australians, Canadians, and people from other countries behaved as if they were unaffected, because largely they were, but the net impact of the American audience and participants realizing that things were about to change shifted the tone of the discourse on the app as a whole.

It became a moment of incipient diaspora. As an observer, I’d like to capture a snapshot of what that moment was like as it was going on. It began shortly before Christmas 2024, as I saw people with more time on their hands, with their kids off from school, or university students home for the holidays, starting to realize that the time left with the app was short.

That there was under a month left to go. Some forward thinking people were starting to make posts asking what was going to happen in the new year, as the holiday festivities wrapped up and those who had vacations slipped into that weird, liminal, timeless zone between Christmas and New Year’s, where everyone is sleepy from gorging on turkey dinner, leftover wine and cheese, and enjoying their holiday gifts.

The trend continued, with more people starting to ask questions, and by the time New Year’s rolled around, everybody realized that time was drawing short. People began posting lists of links to their other social media accounts, other places where they could be found. This was not unusual in and of itself, as it happened fairly regularly with content creators who derived their income from posting in various places, and who would often try to drive traffic to places that they had monetized, or where they were able to capitalize off the audience. For a lot of creators, places like YouTube and Instagram were much better suited for that. So that wasn’t that noteworthy, but by January 7th, this practice had spread to the smaller creators, too.

These were the creators who hadn’t necessarily monetized their content, but who wanted to remain in contact with the friends they had made and the communities they had become a part of while on the app. In early January, this still included the places that were the most wide ranging and popular, places like Facebook, Instagram, and X or Twitter.

Though the last one wasn’t quite as prominent, as there were more mentions of Blue Sky, with the migration that had already begun there following the U.S. election in November 2024. However, this was soon to change, as by the end of that week, the U.S. Supreme Court would hear arguments requesting a stay of the ban.

Politically minded posters and legal scholars noticed the upcoming case and started commenting on what they thought would happen, and this spread from there to all corners of the app, with many posters expressing concern about what the outcome might be. There was an additional group of commenters who put down the epidemiologist certificates they’d been using for the last few years, dusted off their internet law degrees, and stepped outside of the Motel 6 they’d stayed at the previous night to offer their opinions about what was going on.

But perhaps I’m being too harsh. What I’m suggesting is that a lot of people were commenting on the outcome of the case, but many of them were adding noise rather than signal to the conversation. Regardless, by the day the case of TikTok versus Merrick Garland was going to be heard, January 10th, 2025, everybody’s attention was focused on it.

The high degree of uncertainty about what the outcome of that case might be led to two notable things happening. The first was that everybody started making contingency plans, posting about other apps that they were on, places that they could be found, or profiles that they had made, and the second was that they started taking a deeper look at why the ban was taking place at all.

The argument that the app was a national security risk drew some scrutiny, and a lot of people started looking at the lobbying efforts of TikTok’s biggest competitor: once again, Meta, or Facebook. Now, Meta the company, the practices that it engages in, and the commodification of the audience are something we’ve commented on many times on this podcast before.

We discussed the audience commodity way back in Episode 8 in July of 2023, and we touched on it a little bit more in Episode 15, entitled Embrace, Extend, Extinguish, and of course the TikTok Tribulations episode from June of last year. We’ve also commented on this in the blog and the newsletter, so let’s just say it’s an ongoing topic of discussion.

If you’d like to hear more about it, I’d encourage you to check out some of those past shows in the archives on implausipod.com. But back to the topic at hand. With TikTok users realizing that Meta and Mark Zuckerberg were among the larger reasons that the ban was actually going forward, there was a collective pushback against moving to Meta-owned properties like Facebook, and Instagram especially, as they were seen as the more direct competitors to TikTok.

There was also a pushback against moving to X, as people saw Musk as equally complicit in the ban, due to his recent role with the US government. And this manifested in posters explicitly calling those platforms out and looking for direct alternatives to TikTok that weren’t owned by those companies.

This pushback was exacerbated by an announcement that Meta made on January 7th that they would no longer be using third party fact checkers, and by an appearance by Mark Zuckerberg on the Joe Rogan podcast. Again, there’s a lot going on, and it’s all happening roughly contemporaneously. Following the initial arguments in front of the U.S. Supreme Court, the users became much more active in finding alternative places. They began mobilizing, began contacting their various political representatives, and in their search for alternatives, they came up with an unlikely option: the app known as Xiaohongshu, or RedNote (Little Red Note), an app that was pitched as a Chinese version of TikTok but was actually more akin to a Chinese version of Pinterest, an app that was Chinese owned, operating in mainland China, and whose discourse took place largely in Mandarin.

Within two days, the TikTok userbase had collectively made this the most popular app in the App Store, and showed that they would rather learn a foreign language and deal with a directly foreign-owned app than deal with a Meta product again. The pettiness and spite of the American TikTok userbase apparently knows no bounds.

Much like Ricardo Montalban in Star Trek II: The Wrath of Khan stating, From hell’s heart, I stab at thee; for hate’s sake, I spit my last breath at thee, the TikTok userbase were deciding to go out in epic fashion and take Meta down with them. And this brings us forward to now, January 17th, 2025, two days before the ban.

The diaspora is in full swing, and still nobody has an idea of what’s going on. It leads us to a question. Is the incipient diaspora about the waiting, or is it about the recovery?

While as of the morning of January 17th the U.S. Supreme Court has yet to make a statement on their decision, and both U.S. administrations, outgoing and incoming, have somewhat punted on making a final determination, lending much uncertainty even two days before the ban, there’s a lot that we can learn from the observations we’ve made about the reactions of the residents of TikTok.

The first observation speaks directly to that uncertainty. There’s a line from William Goldman, the creator of The Princess Bride: nobody knows anything. Now, Goldman was referring to Hollywood, and to the fact that nobody can really tell, when it comes to creative pursuits, what is going to take off, what will be a hit and what won’t.

But it applies in this situation as well, because January 19th is somewhat of a singularity. No one can tell for certain what’s going to happen after that point. In early to mid January, there were posters stating with absolute certainty and confidence what would happen, but they had no special knowledge about what was going on.

In those times of uncertainty, the best approach is to put on one’s critical thinking hat. Because the truth is that nobody knows, and even the best can only make an informed decision based on past events and can’t say for certain what’s going to happen. However, in an era of uncertainty, there will be those courting clout and influence that seek to provide answers to a questioning audience, even where no answers exist.

In an era of uncertainty, all you can do is make backups, plan for contingencies, establish lines of communication, and try your best to ensure that you can see people on the other side. And that speaks to the second point, that there are identifiable actions that can be done. Even in an era of uncertainty.

The mantra of the three S’s, Save, Share, and Spread, goes a long way in ensuring that those challenges can be met. The first one is that you save your information. You save your peeps. You get a list of everyone you need to keep track of, everyone you need to contact, and that makes it easier to get in touch with them afterward.

You know who the real ones are, and you ensure that those are available. And this is good disaster prep in general. Have that documentation available, and have backup copies too. The second is that users need to share their info: have a copy of a list of the places where they can be found, a contact card, and share that widely with the people they want to be able to track them down.

It doesn’t have to be overly complicated, it just has to be a list of contacts on a card. For an older audience that may dimly remember the era before mobile phones, this is the list of places that people can track you down at. You know, if I’m not at the arcade, I’m at the rec center. If I’m not at the rec center, I’m at your mom’s house.

You know where to find me, right? And the third task is to spread that information. If you see a mutual acquaintance that has that contact card, you keep a copy and share it to other acquaintances so it’s more widely available. If there’s multiple copies of something around, then it’s more likely to survive and be able to be passed on.

Users are in the process of developing a network of resilience, and that’s what they need in order to manage the uncertainty of this era. This is because the place that they’re looking to land might not even exist yet, or it might be just an app that’s in beta someplace, and not really readily available.

Users might not know where everybody’s going to be, but the idea is you create that network and you become that lighthouse that can guide the other users back to the community when you find one. And the third observation follows from that, and that is that the perfect is the enemy of the good. When we’re talking about third spaces, both real and virtual, sometimes it’s better to take something that exists and meets some of your needs than to hold out for a perfect option that doesn’t exist or may never exist. You can’t let something not being your optimum choice deter you from using what’s available. When it comes to third spaces, both real and virtual, you need to look at what you’re trying to do.

Now, some of this builds on what Ray Oldenburg was talking about in The Great Good Place when he was originally discussing what third places are. When it comes to third spaces, you can’t let the perfect be the enemy of the good, and the good that you’re trying to do is to build community. When you’re trying to build community, you can use the tools that are available to you.

In the late summer of 2024, there was a discussion of third places taking place online, both in blogs and on TikTok and other sites, and there was a lot of headcanon and misconception about what third places are and what counts. There were statements like: a third place can’t be a business, or it can’t have people working there, and if it does, then it doesn’t count. And frankly, this is nonsense.

It might not be optimal, but it can still count as a third place. Remember, a third place is just someplace that isn’t work or home, but a place where you can relax and spend some time. Some of the original examples of things like third spaces were things like barbershops or bars or coffee shops or pool halls, and these are all businesses, but they still count.

So it doesn’t matter whether it’s a McDonald’s or a Rotten Ronnie’s, a Mickey D’s, a Raunchy Ron’s, or a Macca’s. Those can all count as third spaces. You can go there every morning, grab a cup of coffee, sit around with your friends or acquaintances or people from the community or even just people passing through, and that might be the best part, as you’re exposed to news from elsewhere and you can have a discussion.

This is how community is built. It might not be perfect, because it’s corporate, and policy changes might change how things are going: they take out the seats, or the price of coffee changes, or whatever, and this could reshape the environment and make it less conducive to having that community and discussion.

And this can happen with the change of ownership of smaller businesses as well, whether it’s a barbershop or a pool hall or whatever. But it is something that can be used while community is being built up. This is something we talked about in our earlier episode on recursive public. So if you want to go back and check that in the archives again, I encourage you to have a look.

But this is something that we need to get over, the idea that our virtual spaces have to be perfect from the get go, without recognizing that the previous ones were built up over time and acquired their characteristics as the users interacted with them. So again, the rule is: if you find a place that’s suitable, you work to build that up, and you become a lighthouse to your community and bring them in with you.

You start where you are, you use what you have, and you do what you can. And I’m not just saying this from my own experience as someone who spent 18 months doing fieldwork at third spaces looking at how communities form and interact. I mean, I am that person, but I’m not just saying that. The point is that a community has to be built, and it takes the effort of the individuals involved in it to come together and build and shape that community into something that works for them.

And then the fourth big takeaway from the observations is that users can make informed decisions and that their choices do matter. This became most obvious as the tide started to shift against using Meta and its related products like Instagram and Facebook as an alternative to TikTok. There’s a phrase that goes around that our audience may be aware of, that there is no ethical consumption under capitalism.

That in that system, someone somewhere is getting the short end of the stick. And while that’s true, there’s often an element or undercurrent of resignation, of engineered helplessness, designed to get somebody asking: if every choice I make is wrong, if there is no ethical choice, then what does my choice matter?

But as I said earlier, that choice is critical, because for users and for creators who are consumers of platforms, the choice of which platform to use really matters. Between Meta’s January 7th announcement that they’d no longer be using third party fact checkers, and an earlier announcement that they’d be using AI agents within the stream, so that your audience may no longer even be an audience, one begins to wonder why anyone would use those products at all.

A user or creator would have to ask themselves, does continuing to use this product legitimize those practices? This is a question that a number of users and creators started asking themselves when it came to X slash Twitter, and that led to the mass migration to Blue Sky as they finally realized that their presence, especially that of the journalists and academics, legitimized Twitter as a platform.

I say finally, as it seemed like a patently obvious outcome with the change in ownership in 2022, and I’d be standing here like John McClane shouting out the window, yelling, Welcome to the party, pal, but we all come to these things in our own time. The point is, once you make that realization, you need to take action.

Long term, who’s to say whether Blue Sky was the right choice, but right now it seems to be a safer choice, even though it might just be a big pot of honey that will one day become commodified, once the resource has been sufficiently built out, and another wave of migration will take place. But such is the way of life on the internet.

The last comment we’ll make is about the root causes of the ban. As we noted earlier, there was a lot of speculation about what those causes were, but most of it just boils down to two words, and those two words are market power. Market power is the ability of a firm to set the price of its good above its marginal cost.
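
As an aside for anyone who wants the textbook framing (this is standard microeconomics rather than anything specific to this episode or to the book we’ll get to in a moment), economists often quantify market power with the Lerner index:

L = (P - MC) / P

where P is the price charged and MC is the marginal cost. Under perfect competition, price gets pushed down toward marginal cost and L sits near zero; the closer L gets to one, the bigger the markup a firm can sustain, and that markup is exactly what all the lobbying is meant to protect.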

And in this case, it’s helpful to remember what the product of a social media company is. They sell audiences to advertisers. This includes you and me and everybody else, and everything that’s done on those platforms, which is then packaged up and sold off to advertisers looking for those specific demographics.

In order to maintain that market power, you need to be able to manipulate either the supply or the demand. And for social media companies and other high tech firms, that works a little bit differently, because an innovation can come along and disrupt the market that they’ve gathered. For example, it doesn’t matter if you’re the best film camera company in the world if everybody shifts to digital cameras and nobody’s taking pictures on film anymore.

So for firms that obtain that monopoly position that allows them to exert market power, they’ll often do a lot to retain that market power and maintain the ability to charge what they want. And I say monopoly, but it’s usually only one or two firms within any given high tech segment. Think about Microsoft versus Apple on the desktop, or Android versus iOS on your smartphone. Regardless of whether it’s a monopoly or a duopoly, they don’t want competition. It messes with their vibe. And their vibe is the ability to extract exorbitant profits. Now, I’m drawing this from Mordecai Kurz’s The Market Power of Technology, published in 2023.

Kurz is a professor emeritus of economics at Stanford, and he’s been doing this for a long time. The book is pretty dense and technical, but it’s been written with an eye to a lay audience, and there are sections of it that are very readable and include some real solutions as well. We reviewed it in a newsletter a few months back, and as I said, it was written in 2023, but what we’re seeing with the TikTok ban reads like a case study.

It’s like chapter and verse of the observations that Mordecai Kurz made in his book about market power and how it’s exerted in high tech firms. This is why something like TikTok, whose technologies presented a threat to the dominance that Meta had on its social media properties, was something that had to be dealt with from a lobbying perspective.

And I say technologies here because it’s an assemblage of technologies. It isn’t just the algorithm, which seems to draw a lot of the interest, but it’s also the app and the associated tools, the way it functions, the way it’s designed to allow users to create. All these things come together to provide a compelling alternative to the products that Meta offers.

And it is in much the same way that all these observations come together to give us a picture of what happens during the incipient diaspora, the root causes as well as some of the effects that take place. As we asked earlier, when we look at an incipient diaspora, is it about the waiting or the recovery?

And in this case, what happens next?

Thank you for joining us for this episode of the Implausipod. We’re happy to start 2025 with you, and we’ve got some new episodes coming out to you soon. We’ve been preparing them for a while, so I’ve been looking forward to sharing them with you. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausipod.com, and as mentioned, you can also find the show archives and all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 share alike license. No AI is used in the production of this show, though I think there’s a machine learning algorithm in the transcription software that I use.

As stated earlier, we do make allowances for accessibility. You may have also noted that there was no advertising during the program, and there’s no cost associated with the show. But it does grow through word of mouth in the community, so if you enjoy the show, please share it with a friend or two, and pass it along.

There’s also a Buy Me A Coffee link on each show at implausipod.com, which goes toward any hosting costs associated with the show. Until next time, take care, and have fun.

The California Ideology

(this was originally published as Implausipod Episode 39 on December 7th, 2024)

What do you think of when you hear the word California? What do you think its “ideology” might be? If you work in or on high technology, that California ideology may be shaping the way that you work, the projects that you work on, and the business models that high technology pursues.

What does it all mean? The thinking that is driving the pursuit of certain developments in technology, such as robotics and artificial intelligence, and the rise of accelerationism, needs to be understood by looking at the underlying philosophies. Join us as we dig deep to find out what’s going on.


Let’s start with a question. What do you think of when you hear the word California? What’s the picture that comes into your head? If you had to hazard a guess, what would something called the California Ideology be? Take a moment and settle on your answer. We’re going to have a look during this episode of The ImplausiPod.

Welcome to The ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And what is the California Ideology? Let’s see. Well, if you pictured a mix of hippies and high tech, of new wave and new money, you’d be pretty close. But the California ideology is something that didn’t start in the 2020s or even the 2000s.

We have to go back even earlier. It’s something that came about in the 60s and 70s, that mix of new mysticism and new technology that was coming through, funded in part by a whole lot of U. S. Cold War defense spending. Writing in 2001, Mark Tribe described it as, quote, a deadly cocktail of naïve optimism, techno utopianism, and new libertarian politics popularized by Wired magazine, end quote.

And from the tone you can sense that there was a point of criticism there. Because the Californian ideology was being defined by European academics, media theorists, and thinkers, who might not have had a technological edge, but definitely had the upper hand when it came to theory. Mark Tribe wrote that definition in 2001, in the introduction to a book by one of those European thinkers, Russian émigré artist and media theorist Lev Manovich.

A few years earlier, in the mid 90s, Manovich had published a piece on Mark Tribe’s Rhizome mailing list. This is back before blogs were even a thing; we might call it a web ring or a web forum now. In that piece, called On Totalitarian Interactivity, which in 2024 reads like it was written by a time traveler in the way it absolutely nails our current situation, Manovich compared the two opposing schools of new media philosophy, the Eastern and the Western, and he was critical of both, having seen both of them first hand.

For Manovich, the belief in the power and potential of a new technology is drawn from the experiences of the user, and with that we wholeheartedly agree. Those beliefs are going to shape a lot of the way things try and get used, which we’ve talked about a lot before here. But those beliefs are also going to shape the types of things that people try to make.

The technologies that engineers will try and work on, that companies will try and bring to market, that governments will try and fund research in, and that users will eventually adopt. Or not. And this is why it all boils back down to ideology. As Manovich said, quote, Western media artists usually take technology absolutely seriously and despair when it does not work, end quote.

And the solution for the Western artists is often more technology. Manovich goes on further and states, quote, A Western artist sees the internet as a perfect tool to break down all hierarchies and bring the art to the people. Parentheses, while in reality more often than not using it as a super media to promote his or her name, end parenthesis, end quote.

And in 1996, if someone was going to try and describe influencer culture on social media, I think he kind of nailed it. Like I said, time traveler. But both these quotes kind of hint at what the California ideology is. Manovich would go on further to write a book in 2001 called The Language of New Media, which went much more in depth on some of the topics we’re discussing here, and we’ll return to that at a later point in time.

To really understand the Californian ideology, we need to look at where it originally came from. And the best place to do that is to look at the paper that originally identified it. A 1995 essay by Richard Barbrook and Andy Cameron. And buckle up, this one might take a bit.

The Californian Ideology was originally published by the authors in 1995 in a British magazine titled Mute. It was a mix of online and print versions, so I can’t tell exactly which format the original came out in, and there’s been a couple different versions that have been published since. It’s still accessible online, so I’ll put the link in the notes.

You can go to the metamute.org website if you want to see their archives as well. The essay is typical of a lot of those mid 90s works on the internet, as everything’s starting to come on board, and people are really just feeling their way around it and trying to figure it out. Here, the authors describe the internet as hypermedia, drawing on very McLuhan-esque terminology in order to situate it. But we can see where they’re going with it, and looking back with nearly 30 years of hindsight, it’s clear what they’re talking about. There’s very much a leftist, anti-capitalist view to much of their work, and we can see that in some of the terminology they use, even in the opening paragraph.

Quote: Once again, capitalism’s relentless drive to diversify and intensify the creative powers of human labor is on the verge of qualitatively transforming the way in which we work, play and live together. By integrating different technologies around common protocols, something is being created which is more than the sum of its parts.

When the ability to produce and receive unlimited amounts of information in any form is combined with the reach of the global telephone networks, existing forms of work and leisure can be fundamentally transformed. End quote.

And they go on further to say that anyone who can offer a simple explanation of what’s going on will be listened to, and this has come about through a quote, “Loose alliance of writers, hackers, capitalists, and artists from the west coast of the USA”.

And what those people have come up with is the Californian ideology, which is quote, A heterogeneous orthodoxy for the coming information age. The Californian ideology is this blend of hippies and high tech. It’s, as they say, an amalgamation of opposites, combining a freewheeling spirit and an entrepreneurial zeal where everyone will be both hip and rich.

And because it’s optimistic and positive and allows space for everybody, kind of like Clay Shirky said, it allows computer nerds, slackers, capitalists, social activists, academics, futuristic bureaucrats, and opportunistic politicians, to say the least, to buy in, to get traction, to be seen as forward thinking if they hop on the early wave of this new technology.

And Barbrook and Cameron characterize this as an extropian cult, one that also sees buy-in from various European artists and academics as well. In order to really understand the Californian ideology, Barbrook and Cameron go deep into the rise of the virtual class, who are, according to Arthur Kroker and Michael Weinstein in their book Data Trash, the techno-intelligentsia of cognitive scientists, engineers, computer scientists, video game developers, and all the other communications specialists.

This echoes a lot of what Daniel Bell was talking about in 1973 in The Coming of Post-Industrial Society, and here, 20 years later, they’re starting to actually see it become reality. And in what all of these authors were talking about, we can see the roots of what rose to become the gig economy, as they were already discussing it happening to the virtual class in the 1990s.

It’s important to remember that the gig economy did not first come for the taxi drivers; it came for the tech workers, and then they thought it was good enough for everybody else. But this is in part because the digital class, the virtual class, was incredibly myopic. They were a very privileged part of the labor force, and the benefits that they accrued did not necessarily apply to the population at large.

Barbrook and Cameron note that “the Californian ideology therefore simultaneously reflects the disciplines of market economics and the freedoms of hippie artisanship. This bizarre hybrid is only made possible through a nearly universal belief in technological determinism.” End quote. And this new technology allowed for the possibilities of the social liberalism that the hippies were looking for.

Along with that came the economic liberalism, or really the libertarianism, that the new right was looking for. And what both of them were looking for, as a way to legitimize what they were talking about, was a link back to the founding fathers of the United States democracy. Quoting from Barbrook and Cameron again, “Above all, they are passionate advocates of what appears to be an impeccably libertarian form of politics.

They want information technologies to be used to create a new Jeffersonian democracy, where all individuals would be able to express themselves freely within cyberspace.” And while that sounds like a great idea, looking back to the roots of American democracy, that’s not without its problems. Because Jeffersonian democracy, the one popularized by the American founding father Thomas Jefferson, had very particular ideas of who counted when it came to that democracy.

Quote, their utopian vision of California depends on a willful blindness towards the other, much less positive, features of life on the west coast: racism, poverty, and environmental degradation. End quote.

What the authors are saying is that there’s a deep history of exploitation that goes hand in hand with the development of that ideology. And that in order to bring it about, you have to hide or ignore some of the realities of that history. 

At the core of the Californian ideology, there’s a lot of ambiguity as it’s bridging that gap between the left and the right, but the best way to understand it is probably to realize that it’s trying to have its cake and eat it too. It’s a hybrid faith that’s trying to cater to both the new left and the new right at the same time, and realize the utopian visions of both.

And regardless of whether it’s drawn from the left or the right, the Californian ideology is a capitalist ideology. As I said earlier, this was written in the mid 90s in the early days when people were figuring out what the internet would become, but for Barbrook and Cameron, they note that hypermedia, what they call the internet, would be a key component of the next stage of capitalism.

On the new left, the authors see the proponents of the virtual community with people like Howard Rheingold, where the internet could allow for the rise of a high tech gift economy based on the voluntary exchange of information and ideas and knowledge. On the new right, they note how there’s an embrace of laissez-faire ideology, where tech culture publications like Wired would just uncritically reproduce works by Newt Gingrich, for example, buying into McLuhan-esque technological determinism and thinking that electronic telecommunications will give rise to an electronic marketplace.

For the authors writing in 1995, they weren’t sure what this would lead to. Quote, What is unknown is the social and cultural impact of allowing people to produce and exchange almost unlimited quantities of information on a global scale. End quote. And looking at the state of the internet 30 years later, we see the merger of both of those ideas of an electronic marketplace and a virtual community with the free exchange of ideas.

But that often can be deeply contested and there’s a lot of friction involved. The California ideology promises that each member of the virtual class can become a successful high tech entrepreneur, much like the way that many Americans consider themselves temporarily embarrassed millionaires, and that these people are quote, “Resourceful entrepreneurs who are the only people cool and courageous enough to take risks.”

The Californian ideology proposes a world where, quote, “visionary engineers are inventing the tools needed to create a free market within cyberspace, such as encryption, digital money, and verification procedures,” end quote. And if this sounds like it was ripped out of the pitch deck for any recently proposed crypto venture of the last five years, then I want to remind you, again, this is 1995 written by people that were critical of what was happening.

One of the things Barbrook and Cameron note about the Californian ideology is how much it ignores its own history of the government funding that went into the development of the technology, especially on the West Coast, and the rise of the mixed economy there. Much of this is covered by researcher Tung-Hui Hu in their book, A Prehistory of the Cloud, published in 2016, where they note how much of the infrastructure of the internet mirrors the physical surroundings, especially on the West Coast.

And my own take is that these particular visions of cyberspace were removed from the physical realm where it was thought that everything was formless and weightless and that anybody could be anything. We see the creation tales from many elder myths made manifest once again in the mythic visions of cyberspace and the new cyber religion, so it follows.

We talked about these mythic visions back in episode 26 titled Silicon Dreams, so I encourage you to go check that one out if you’d like. What those mythic visions were really good at was inspiring the DIY culture that really developed some of the innovative ideas that were extant within the burgeoning computer scene.

And while this includes technological developments, like the early personal computers that were developed in garages across California, it also includes social elements, like new agers, surfing, skateboarding, LGBTQ liberation, health food, yoga, pop music, and a whole bunch else besides. The fact that you didn’t necessarily need to be a tech innovator helped get buy-in from a lot more groups with respect to the California ideology, and the tech was definitely helped a whole lot by government spending.

And the contribution by all these groups, the community, the DIYers, the popular culture, and the government at large, is something that often gets ignored by the entrepreneurs and other supposed tech visionaries. As the authors state, all technological progress is cumulative. It depends on the results of a collective historical process and must be counted, at least in part, as a collective achievement.

But this idea of collective achievement goes against much of their narrative. But that narrative draws on many sources of inspiration, and given that we’re dealing with high technology, at least one of those is science fiction. Now, sci fi, whether it’s cyberpunk or otherwise, often has a very libertarian ethos.

The authors note how the utopian visions of the future on the right side of Californian ideology often echoed the predictions of Isaac Asimov, Robert Heinlein, and other sci fi writers, quote, whose future worlds were always filled with space traders, super slick salesmen, genius scientists, pirate captains and other rugged individualists, end quote.

This is the trail that led back to the Jeffersonian democracy and the Founding Fathers. In the 80s and 90s, that same character would show up, a hacker, a quote, lone individual fighting for survival within the virtual world of information. End quote. And this is where the California of that present connected with the California of the past, the ideology of the gold rush, of the self sufficient individual living out on the frontier.

It never really went away, it just became part and parcel of the underlying ideology of cyberspace, of the internet, of high technology, of California. And that ideology is what tech calls thinking.

What Tech Calls Thinking is a book published in 2020 by Adrian Daub, a professor of comparative literature at Stanford. And what he shows us is that despite being 25 years later, we’re still seeing a lot of the same old thinkers show up. Even though Silicon Valley itself has gone through some major changes since 1995, as the only players of note from back then are Microsoft and Apple, as Google was just in its infancy, and Amazon, Facebook, and the rest of social media didn’t exist at all, and the owners of some of those companies are now famous enough to be recognizable by only their last name.

We can call it the Madonna Zone, or maybe even the Cher Zone, though these guys aren’t about sharing. They have names like Bezos, and Musk, and Zuckerberg, and I guess we could add Altman to that list now, too. In Altman’s recent essay, The Intelligence Age, he outlines some of the philosophy driving his quest towards AGI.

But, regardless of the name or the company that they founded or own, not always the same thing, we need to point that out, these tech oligarchs express a strikingly similar ideology. We covered a little bit of that almost a year ago when we looked at the Techno-Optimist Manifesto published by Marc Andreessen, formerly of Netscape, but Daub covers it sufficiently well.

In each of the seven chapters of the book, Daub covers one of the ideas that’s central to the philosophy behind Silicon Valley, usually characterized by a single author, perhaps two. These writers and philosophers include some familiar names like Marshall McLuhan, Ayn Rand, Aldous Huxley, René Girard, Joseph Schumpeter, and Cass Phillips.

And if we’ve heard a bunch of those names already, it’s not by accident. Like I said, there’s a lot of consilience and overlap. In the course of my own studies in grad school, I covered a few of these names in depth, though I’ll admit not all, but what I see here overlaps a lot of what I’ve studied elsewhere.

The overarching aim of Daub’s work is to get behind the media’s focus on the tech industry’s thought leaders, the public intellectuals that get written up so often in media pieces, and trace the ideas and where they’ve come from. And the key point of inception for Daub is Stanford. This is the inflection point, or quilting point, where everything comes together.

This makes some sense for Daub. It was where he was located and viewing his surroundings. And there are other universities involved as well. When one thinks of big tech schools, MIT surely comes to mind, too, but for a Californian perspective, we need to look at Stanford. And the university is important, because a lot of tech’s ideas are quote, university adjacent, or quote, academic.

Big Tech seeks the legitimation of its ideas via proximity to higher learning, as the people involved have often dropped out or not completed their education. Dropping out is the focus of Chapter One, as it allows founders to buy into the pre-existing narrative, one that’s pre-packaged and ready for them, and makes for easier work for the journalists covering the field.

There’s a visibility to being associated with the college, but only briefly. Don’t overstay your welcome if you want to be treated as a visionary. As Daub points out, what this means is that the education of these founders is often incomplete, missing the context that would come with more advanced study, or even with a general studies survey course, usually. I’ll admit to having been blessed with a couple great profs back in the day myself, but dropping out allows one to fit the role of a maverick, able to reject elite institutions and not constrained by conventional thinking, to really allow one to engage in the creative destruction that comes from disrupting the market.

And that Schumpeterian creative destruction features heavily, comprising much of Chapter 6. Joseph Schumpeter was an Austrian economist who worked at Harvard starting in the 1930s, and he coined the term as part of his observations of the nature of the business cycle. Much of what he was talking about was the instability of capitalism and the inevitability of socialism, but this was done through the lens of the role of the entrepreneur in the process of innovation, of bringing something new to market. Quote, The fundamental impulse that sets and keeps the capitalist engine in motion comes from the new consumer goods, the new methods of production or transportation, the new markets, the new forms of industrial organization that capitalist enterprise creates. End quote. This is from Schumpeter, which Daub quotes at length in his work.

This shaking up is what keeps it afloat. If it wasn’t for the shakeup, the instability in the system would get too much, and it all falls apart. As Daub notes, quote, The concept of creative destruction sublimates the concept of revolution. End quote. Things continually get disrupted, and the only constant seems to be change.

Of course, the title of chapter six is disruption, that underlying ethos that impels so much change within Silicon Valley. Disruption is one of those totalizing terms that gets leveraged by Silicon Valley to suggest that this is the only way that change or innovation can happen. As Daub notes, quote, Disruption plays to our impatience with structures and situations that seem to coast on habit and inertia, and it plays to the press’s excitement about underdogs, rebels, and outsiders.

It’s that personal narrative that we talked about a few minutes ago that allows these multi billionaire founders to consider themselves still the plucky underdog from their favorite movies when they were young. And it allows them to deal with the cognitive dissonance of realizing that perhaps they’re on the other side.

Because once you’ve got a couple billion dollars behind you, you are the establishment, no matter how you might frame yourself. Narratives about disruption are ultimately narratives about change, but only in a certain constrained direction. As Daub notes, disruption is newness for people who are scared of genuine newness, revolution for people who don’t stand to gain anything from revolution.

And that idea that Silicon Valley is introducing something that’s genuinely new really needs to be looked at with a hard, critical eye. Daub notes, one ought to be skeptical of unsubstantiated claims of something being totally new and not following the hitherto established rules of business, of politics, of common sense.

The amount of stuff that’s actually new or a radical innovation is incredibly tiny. For an example, one needs to look no further than a single episode of the show Connections, hosted by the British historian of science and technology, James Burke, where he traces the multiple contingencies and coincidences that have led through the path of history to our modern inventions and technologies.

And if we apply this kind of historiographic analysis with a critical eye to nearly anything that’s claimed to be disruptive, we can see the path through history that led up to that point. Genuine newness is very, very rare. And even the claims the tech industry makes, the ones Daub quotes, that they’re making fundamental transformations of how capitalism functions, can be looked at with a skeptical eye.

Because as Schumpeter was writing 100 years ago, and Marx decades before that, that’s just how capitalism has always worked. Disruption is just faster and more far-reaching, and as we suggested, it’s totalizing. As Daub quotes, disruption seems to suggest that the rapids are all there is and can be. And we’ve talked about those rapids before, back in episode 27, The Old Man and the River, back in February.

But the speed is the thing. Quote, Disruption seems to lean in the direction of more capitalism, end quote. And this is not by accident. The disruptors want to go faster, and that theory of move fast and break things has a historical antecedent nearly a hundred years ago. That theory is accelerationism, and we need to talk about it.

Accelerationism is an ideology, or set of philosophies, that crosses party lines. It kind of exists on both the left and the right, and what it calls for is the radical acceleration of everything that’s going on. An intensification of the capitalization of everything in order to get to some perceived next level of human growth or achievement.

There’s this idea that we’re not going fast enough, that the checks and balances that we put on society are holding us back from reaching that. And if we just go faster, harder, we’ll have enough technology or AI or whatever that’ll help solve those problems. And we can deal with it in whatever imagined future state where we have the technology.

And it should be noted that there’s left wing groups that believe in this accelerationism as well, who believe that if you allow capitalism to put the pedal to the metal, it’ll eventually go off the rails and then you can rebuild out of the ashes of whatever’s left. You know, once we get through that cool Mad Max stage and actually get around to rebuilding society.

But as you can tell from my tone, it’s an incredibly bad idea. First off, there’s this assumption that whoever is pushing the pedal to the metal, whoever has their hand on the throttle, will be there at the end to reap the rewards, once we get there. You know, that they’ll be among the survivors. And second, an incredibly large number of people will get hurt in the process of going faster and harder.

It’s just incredibly irresponsible, and there’s no guarantee that we get there either. It’s an assumption that they make that, hey, if we strap a rocket to our back like Wile E. Coyote, we’ll get to where we’re going faster. But it’s not necessarily borne out. It’s all in theory. We talked about it on one of our episodes of the podcast about a year ago, episode 17, called Not a Techno Optimist.

So, my apologies for re-covering some old ground, but it’s worth mentioning again. Go check it out in the archives if you’d like. There’s more to talk about when it comes to accelerationism, but we’re going to have to get into that in a few episodes from now. The main thing is this idea of being a disruptor.

It isn’t a thing of science fiction, which inspires so much of Silicon Valley. It’s fantasy. Daub also talks about the continued role of Ayn Rand and her influence on the libertarian elements that are so prevalent in technology. I think the best quote summarizing Ayn Rand can be attributed to John Rogers.

Quote, there are two novels that can transform a bookish 14-year-old kid’s life: The Lord of the Rings and Atlas Shrugged. One is a childish daydream that can lead to an emotionally stunted, socially crippled adulthood, in which large chunks of the day are spent inventing ways to make real life more like a fantasy novel.

The other is a book about orcs. End quote. Of course, maybe not skipping that English lit class in the college you dropped out of would help give a little context for understanding Rand. However, we’re not here to chase that particular rabbit. The big takeaway from Daub’s work is a look at the tech industry’s philosophical roots and its focus on money.

As he notes, quote, the tech industry we know today is what happens when certain received notions meet with a massive amount of cash with nowhere else to go, end quote. Absent an idea of what to do with all that money, tech looked around for legitimation. And, as Daub notes, quote, the ideas that tech calls thinking were developed and refined in the making of money, end quote.

This is accomplished via a blend of state intervention and capitalist entrepreneurship that leverages DIY culture, relying on it for essential contributions by innovators and early adopters, to be sure. And much of tech has resulted in the development of, quote, mass markets for private companies to sell existing information commodities, end quote, things like films and music and television.

Stuff that we would normally call art has been transformed by the shift from representation to manipulation that occurs within the digital realm, according to Manovich. Further, he notes that Western artists appear to break down hierarchies as part of the process of building a personal brand for themselves, and coming out of the influencer decade of the 20 teens where catchphrases like the brand is you get tossed around, this seems self evident.

It’s a commodification of the self. But we’ll have to wait for a later date to do a deeper dive into this process of becoming which drives influencer culture. We’ll let you know when that episode is ready to go. 

By contrast, for Manovich, the Eastern artists, quote, recognize that the nature of technology is that it does not work, it will always break down. It will never work as it is supposed to. 

For the outside observer, we can see how this makes sense, where the failures of one technology provide the opportunity for the sale of another technology to solve the problems of the first one. And one thing tech likes is another sale, because tech is ultimately a capitalist enterprise.

And it is this focus on capitalism which underlies the Californian ideology as a whole. The connection point between Daub and the work 25 years previous is that those ideas never went away. The tech industry in 2020 is pretty much still the same industry that Barbrook and Cameron identified back then.

Witness that quote about the crypto pitch deck we made earlier. The big difference is that there is more of it, the increased focus on the money. We’re just further along in late stage capitalism. We’re not so far along that we’ve reached the sci fi aspirations driving some of them forward, as mentioned earlier, but those aspirations exist in both works too.

Barbrook and Cameron note that there is a drive for the emergence of the post-human that we can see in N. Katherine Hayles’ work, as well as in various cyberpunk authors such as William Gibson and others. Post-humanism is, after all, a quote, biotechnological manifestation of the social privileges of the virtual class, end quote.

This is why there is such a strong connection to the accelerationists mentioned earlier. The remaining virtual class are aging and looking to live longer. There is a fear of death motivating much of the virtual class, characterizing them as extropian, that sect of transhumanists seeking to extend their lifespans to the extent that they may one day live indefinitely.

They seek to advance technology faster, as that dark specter inexorably catches up with them. The third point in common between what Tech calls thinking and the Californian ideology, two works separated by 25 years, a continent, and an ocean, is the critique of the underlying ideology of the virtual class itself.

There’s other names for it floating around, of course, calling them Tech Bros, or TESCREAL, or whatever, but like Manovich pointed out earlier, it’s all of the same thread of Western critiques of Tech. And seeing as we mentioned Lev Manovich, let’s return to a bit of what he had to say on totalitarian interactivity.

There, from his position as a quote, post communist subject, he saw the internet as a communal apartment of the Stalin era where everybody spies on everybody else, or as a giant garbage site for the information society, with everybody dumping their used products of intellectual labor and nobody cleaning up.

And in the moment, as we witness a mass migration from Twitter to BlueSky, with some people deleting their posts and accounts, and others not, just fleeing, those statements ring poignantly true. We are witnessing the migration of much of the virtual class in real time, as platforms shift and become unstable, and new platforms are found.

There’s a degree of insulation that comes with this, as if moving platforms is somehow enough of an action to take. There’s a blending of beliefs going on here. As Barbrook and Cameron note, quote, many members of the virtual class want to be seduced by the libertarian rhetoric and technological enthusiasm of the new right, end quote, a term that describes the Newt Gingrich-era Republicans in the U.S. in the mid 1990s.

That belief and enthusiasm affords them the opportunity to continue living much as they had previously. Not all internet users are so lucky. There are clear divides. Redlining by telephone companies creates a very real gap in accessibility to the information superhighway.

This was written around the same time that the U.S. Department of Commerce was warning of the digital divide in 1995, a term which would soon be picked up and championed elsewhere by those advocating for more widespread internet adoption. We can see why.

The scholar Tung-Hui Hu traces this very real phenomenon of physical geography’s effect in shaping the rather ephemeral nature of cyberspace in their book, A Prehistory of the Cloud.

For those members outside the virtual class, the prospects are much more bleak. Quoting from Barbrook and Cameron, the deprived only participate in the information age by providing cheap, non-unionized labor for the unhealthy chip factories of Silicon Valley. End quote. Fifteen years later, this could still describe Foxconn making iPhones for Apple, or the warehouses at Amazon, or drivers for Uber.

The trend toward the gig economy had a long arc that started well before the smartphone era. The digital artisans were, quote, living within a contract culture, end quote, and were gigged long before others, well paid in a manner that decentralized collective action. To quote the authors again, although they enjoy cultural freedoms won by the hippies, most of them, that is the virtual class, are no longer actively involved in the struggle to build ecotopia.

End quote. The true believers of the new left involved in the building of cyberculture took their stock options and left the suburbs behind. This cybernetic libertarianism was very much in the “whatever, I’ve got mine” mindset, never imagining that one day those cyber leopards might eat their faces. And this follows from the ideals of the Jeffersonian democracy that drives the Californian ideology.

In a section titled, Cyborg Masters and Robot Slaves, Barbrook and Cameron note that the fear of the rebellious underclass has now corrupted the most fundamental tenet of the Californian ideology, its belief in the emancipatory potential of the new information technologies. However, as they note, those technologies of freedom are turning into machines of dominance.

The crux of the Californian ideology is in Barbrook and Cameron’s description of the racial divide in California. “If human slaves are ultimately unreliable, then mechanical ones will have to be invented. The search for the holy grail of artificial intelligence reveals this desire for the golem. A strong and loyal slave whose skin is the color of earth and whose innards are made of sand.”

As we discussed back in episode 17, there is a utopian vision here, and Barbrook and Cameron note how these techno utopians, quote, imagine that it is possible to obtain slave like labor from inanimate machines. However, slave labor cannot be obtained without somebody being enslaved, end quote. And this can be seen in very recent history, too.

Anyone wondering about the results of the voting for Proposition 6 in California during the recent national election in the United States in November 2024, for any future listeners, will find their answer here.

Proposition 6 was a proposed amendment to California’s constitution that would bar slavery in any form and repeal involuntary servitude as punishment for a crime.

In it, Californians voted 53.3 percent against.

The Californian ideology has a dark history, one that still has a hand in shaping the future.

Thank you for joining us for this episode of the Implausipod. I’m your host Dr. Implausible. Join us for the next few episodes as we continue our journey into exploring what the Californian ideology has left us, as we look into those Californian roads and car culture, and then at what that utopic vision of the world would look like as we delve into the world model that we hinted at when we talked about Sam Altman’s Intelligence Age essay.

I hope we can explore these before the end of 2024 and then we’ll see what 2025 has in store. 

You can reach me at drimplausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 share-alike license.

You may have also noted that there was no advertising during the program, and there’s no cost associated with the show. But it does grow through word of mouth in the community, so if you enjoy the show, please share it with a friend or two, and pass it along. There’s also a Buy Me A Coffee link on each show at implausipod.com, which goes toward any hosting costs associated with the show.

Over on the blog, we’ve started up a monthly newsletter. There will likely be some overlap with future podcast episodes, and newsletter subscribers can get a hint of what’s to come ahead of time, so consider signing up, and I’ll leave a link in the show notes.

Until next time, take care, and have fun.



Bibliography

Altman, S. (2024, September 23). The Intelligence Age. https://ia.samaltman.com/

Barbrook, R., & Cameron, A. (1995). The Californian Ideology. Mute, 1(3). http://www.imaginaryfutures.net/2007/04/17/the-californian-ideology-2/

Bell, D. (1973). The coming of post-industrial society: A venture in social forecasting. Basic Books.

Daub, A. (2020). What Tech Calls Thinking: An Inquiry into the Intellectual Bedrock of Silicon Valley. Farrar, Straus and Giroux.

Hayles, N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press.

Hu, T.-H. (2016). A Prehistory of the Cloud. The MIT Press.

Manovich, L. (1996). On Totalitarian Interactivity. https://www.manovich.net

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Schumpeter, J. A. (1962). Capitalism, socialism and democracy (First Harper Torchbook ed). Harper & Row.

Tribe, M. (2001). Introduction. In Manovich, L., The Language of New Media. MIT Press.

AI Refractions

(this was originally published as Implausipod Episode 38 on October 5th, 2024)

https://www.implausipod.com/1935232/episodes/15804659-e0038-ai-refractions

Looking back in the year since the publication of our AI Reflections episode, we take a look at the state of the AI discourse at large, where recent controversies including those surrounding NaNoWriMo and whether AI counts as art, or can assist with science, bring the challenges of studying the new medium to the forefront.


In 2024, AI is still all the rage, but some are starting to question what it’s good for. There’s even a few that will claim that there’s no good use for AI whatsoever, though this denialist argument takes it a little bit too far. We took a look at some of the positive uses of AI a little over a year ago in an episode titled AI Reflections.

But it’s time to check out the current state of the art, take another look into the mirror and see if it’s cracked. So welcome to AI Refractions, this episode of ImplausiPod.

Welcome to The ImplausiPod, an academic podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in this episode, we’ve got a lot to catch up on with respect to AI. So we’re going to look at some of the positive uses that have come up, how AI relates to creativity, and how statements from NaNoWriMo caused a bit of controversy.

And how that leads into AI’s use in science. But it’s not all sunny over in AI land. We’ve looked at some of the concerns before with things like Echange, and we’ll look at some of the current critiques as well. And then look at the value proposition for AI, and how recent shakeups at OpenAI in September of 2024 might relate to that.

So we’ve got a lot to cover here on our near one year anniversary of that AI Reflections episode, so let’s get into it. We’ve mentioned AI a few other times since that episode aired in August of 2023. It came up in episode 28, our discussion on black boxes and the role of AI handhelds, as well as episode 31 when we looked at AI as a general purpose technology.

And it also came up a little bit in our discussion about the arts, things like Echanger and the Sphere, and how AI might be used to assist in higher fidelity productions. So it’s been an underlying theme about a lot of our episodes. And I think that’s just the nature of where we sit with relation to culture and technology.

When you spend your academic career studying the emergence of high technology and how it’s created and developed, when a new one comes on the scene, or at least becomes widely commercially available, you’re going to spend a lot of time talking about it. And we’ve been obviously talking about it for a while.

So if you’ve been with us for a while, first off, thank you, and this may be familiar to you. And if you just started listening recently, welcome, and feel free to check out those episodes that we mentioned earlier. I’ll put links to the specific ones in the text. And looking back at episode 12, we started by laying down a definition of technology.

We looked at how it functioned as an extension of man, to borrow from Marshall McLuhan, but the working definition of technology that I use, the one that I published in my PhD, is that “Technology is the material embodiment of an artifact and its associated systems, materials, and practices employed to achieve human ends.”

And this definition of technology covers everything from the sharp stick and sharp stick-related technologies like spears, pencils, and chopsticks, to our more advanced tech like satellites and AI and VR and robots and stuff. When you really think about it, it’s a very expansive definition, but its utility is in allowing us to recognize and identify things.

And by being able to cover everything from sharp sticks to satellites, from language to pharmaceuticals to games, it really covers the gamut of things that humans use technology for, and contributes to our view of technology as emancipatory. That technology is ultimately assistive and can aid us in issues that we’re struggling with.

We recognize that there’s other views and perspectives, but this is where we fall on the spectrum. Returning back to episode 12, we showed how this emancipatory stance contributes to an empathetic view of technology, where we can step outside of our own frame of reference and think about how technology can be used by somebody who isn’t us.

Whether it’s a loved one, somebody close to us, or even a member of our community or collective, or, more widely ranging, somebody that we’ll never come into contact with. How persons with different abilities and backgrounds will find different uses for the technology. Like the famous quote goes, “the street finds its own uses for things.”

Maybe we’ll return back to that in a sec. We finished off episode 12 looking at some of the positive uses of AI at that time that had been published just within a few weeks of us recording that episode. People were recounting how they were finding it as an aid or an enhancement to their creativity, and news stories were detailing how the predictive text abilities as well as generative AI facial animations could help stroke victims, as well as persons with ALS being able to converse at a regular tempo.

So by and large it could function as an assistive technology, and in recent weeks we have started trying to catalogue all those stories. Back in July, over on the blog, we created the Positive AI Archive, a place where I could put the links to all the stories that I come across. Me being me, I’ve forgotten to update it since, but we’ll get those links up there and you should be able to follow along.

We’ll put the link to the archive in the show notes regardless. And, in the interest of positivity, that’s kinda where I wanted to start the show.

The street finds its own uses for things. It’s a great quote from Burning Chrome, a collection of short stories by William Gibson. It’s the one that held Johnny Mnemonic, which led to the film with Keanu Reeves, and then subsequently The Matrix and Cyberpunk 2077 and all those other derivative works. The street finds its own uses for things is a nuanced phrase and nuance can be required when we’re talking about things, especially online when everything gets reduced to a soundbite or a five second dance clip.

The street finds its own uses for things is a bit of a mantra and it’s one that I use when I’m studying the impacts of technology and what “the street finds its own uses for things” means is that the end users may put a given technology to tasks that its creators and developers never saw. Or even intended.

And what I’ve been preaching here, what I mentioned earlier, is the empathetic view of technology. And we look at who benefits from using that technology, and what we find with the AI tools is that there are benefits. The street is finding its own uses for AI. In August of 2024, a number of news reports talked about Casey Harrell, a 46 year old father suffering from ALS, amyotrophic lateral sclerosis, who was able to communicate with his daughter using a combination of brain implants and AI assisted text and speech generation.

Some of the work on these assistive technologies was done with grant money, and there’s more information about the details behind that work, and I’ll link to that article here. There’s multiple technologies that go into this, and we’re finding that with the AI tools, there’s very real benefits for persons with disabilities and their families.

Another thing we can do when we’re evaluating a technology is see where it’s actually used, where the street is located. And when it comes to assistive AI tools like ChatGPT, the street might not be where you think it is. In a recent survey published by Boston Consulting Group in August of 2024, they showed where the usage of ChatGPT was the highest.

It’s hard to visually describe a chart, obviously, but at the top of the scale, we saw countries like India, Morocco, Argentina, Brazil, and Indonesia. English speaking countries like the US, Australia, and the UK were much further down on the chart. The countries where ChatGPT is finding the most adoption are countries where English is not the primary language.

They’re in the global south, countries with large populations that have also had to deal with centuries of exploitation. And that isn’t to say that the citizens of these countries don’t have concerns, they do, but they’re using it as an assistive technology. They’re using it for translation, to remove barriers and to help reduce friction, and to customize their own experience. And these are just a fraction of the stories that are out there. 

So there are positive use cases for AI, which may seem to directly contradict various denialist arguments that are trying to gaslight you into believing that there is no good use for AI. This is obviously false.

If the positive view, the use on the street, is being found by persons with disabilities, it follows that the denialist view is ableist. If the positive view, that use on the street, is being found by persons of color, non English speakers, persons in the global south, then the denialist view will carry all those elements of oppression, racism, and colonialism with it.

If the use on the street is by those who find their creativity unlocked by the new tools, and they’re finally able to express themselves where previously they may have struggled with a medium or been gatekept from having an arts education or poetry or English or what have you, only to now find themselves told that this isn’t art or this doesn’t count despite all evidence to the contrary, then there’s massive elements of class and bias that go into that as well.

So let’s be clear. An empathetic view of technology recognizes that there are positive use cases for AI. These are being found on the street by persons with disabilities, persons of the global south, non english speakers, and persons across the class spectrum. To deny this is to deny objective reality.

It’s to deny all these groups their actual uses of the technology. Are there problems? Yes, absolutely. Are there bad actors that may use the technology for nefarious means? Of course, this happens on a regular basis, and we’ll put a pin in that and return to that in a few moments, but to deny that there are no good uses is to deny the experience of all these groups that are finding uses for it, and we’re starting to see that when this denialism is pointed out, it’s causing a great degree of controversy.

In a statement made early in September of 2024, NaNoWriMo, the non-profit organization behind National Novel Writing Month, said it was acceptable to use AI as an assistive technology when writers were working on their pieces for NaNoWriMo, because this supports their mission, which is to quote, “provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds, on and off the page.” End quote.

But what drew the opprobrium of the online community is that they noted that some of the objections to the use of AI tools are classist and ableist. And, as we noted, they weren’t wrong. For all the reasons we just explained and more. But, due to the online uproar, they’ve walked that back somewhat.

I’ll link to the updated statement in the show notes. The thing is, if you believe that using AI for something like NaNoWriMo is against the spirit of things, that’s your decision. They’ve clearly stated that they feel that assistive technologies can help people pursuing their dreams. And if you have concerns that they’re going to take stuff that’s put into the official app and sell it off to an LLM or AI company, well, that’s a discussion you need to have with NaNoWriMo, the nonprofit.

You’re still not prevented from doing something like NaNoWriMo using Notepad or Obsidian or however else you take your notes, but that’s your call. I for one was glad to see that NaNoWriMo called it out. One of the things that I found, both in my personal life as well as in my research when I was working on the PhD and looking at Tikkun Olam Makers, is that it can be incredibly difficult and expensive for persons with disabilities to find a tool that can meet their needs, if it exists at all. So if you’re wondering where I come down on this, I’m on the side of the persons in need. We’re on the side of the streets. You might say we’re streets ahead.

Of course, one of the uses that the street finds for things has always been art. Or at least work that eventually gets recognized as art. It took a long time for the world to recognize that the graffiti of a street artist might count, but in 2024, if one was to argue that Banksy wasn’t an artist, you’d get some funny looks.

There are several threads of debate surrounding AI art and generative art, including the role of creativity, the provenance of the materials, and the ethics of using the tools, but the primary question is: what counts? What counts as art, and who decides that it counts? That’s the point that we’re really raising with that question, and obviously it ties back to what we were talking about last episode when it comes to Soylent Culture, and before that when we were talking about the recently deceased Fredric Jameson as well.

In his work Nostalgia for the Present from 1989, Jameson mentioned this with respect to television. He said, quote, “At the time, however, it was high culture in the 1950s which was authorized, as it still is, to pass judgment on reality, to say what real life is and what is mere appearance. And it is by leaving out, by ignoring, by passing over in silence and with the repugnance one may feel for the dreary stereotypes of television series, that high art palpably issues its judgments.” End quote.

Now, high art in bunny quotes isn’t issuing anything, obviously; Jameson’s reifying the term. But what Jameson is getting at is that there’s stakes for those involved about what does and does not count. And we talked about this last episode, where it took a long time for various forms of new media to finally be accepted as art on their own terms.

For some, it takes longer than others. I mean, Jameson was talking about television in the 1980s, for something that had already existed for decades at that point. And even then, it wasn’t until the 90s and 2000s, to the eras of Oz and The Sopranos and Breaking Bad and Mad Men and the quote unquote “golden age of television” that it really began to be recognized and accepted as art on its own terms.

Television was seen as disposable ephemera for decades upon decades. There’s a lot of work that goes on on behalf of high art by those invested in it to valorize it and ensure that it maintains its position. This is why we see one of the critiques about AI art being that it lacks creativity, that it is simply theft.

As if the provenance of the materials that get used in the creation of art suddenly matter on whether it counts or not. It would be as if the conditions in the mines of Afghanistan for the lapis lazuli that was crushed to make the ultramarine used by Vermeer had a material impact on whether his painting counted as art. Or if the gold and jewels that went into the creation of the Fabergé eggs and were subsequently gifted to the Russian royal family mattered as to whether those count. It’s a nonsense argument. It makes no sense. And it’s completely orthogonal to the question of whether these works count as art.

And similarly, where people say that good artists borrow, great artists steal, well, we’ll concede that Picasso might have known a thing or two about art, but where exactly are they stealing it from? The artists aren’t exactly tippy-toeing into the art gallery and yoinking it off the walls now, are they?

No, they’re stealing it from memory, from their experience of that thing, and the memory is the key. Here, I’ll share a quote. “Art consists in bringing the memory of things past to the surface. But the author is not a passéist. He is a link to history, to memory, which is linked to the common dream.” This is of course a quote by Saul Bellow, talking about his field, literature, and while I know nowadays not as many people are as familiar with his work, if you’re at a computer while you’re listening to this, it might be worth it to just look him up.

Are we back? Awesome. Alright, so what the Nobel Prize Laureate and Pulitzer Prize winner Saul Bellow was getting at is that art is an act of memory, and we’ve been going in depth into memory in the last three episodes. And the artist can only work with what they have access to, what they’ve experienced during the course of their lifetime.

The more they’ve experienced, the more they can draw on and put into their art. And this is where the AI art tools come in as an assistive technology, because they would have access to much, much more than a human being can experience, right? Possibly anything that has been stored and put into the database and the creator accessing that tool will have access to everything, all the memory scanned and stored within it as well.

And so then the act of art becomes one of curation, of deciding what to put forth. AI art is a digital art form, or at least everything that’s been produced to date. So how does that differ? Right? Well, let me give you an example. If I reach over to my paint shelf and grab an ultramarine paint, right, a cheap Daler Rowney acrylic ink, it’s right there with all the other colors that might be available to me on my paint shelf.

But, back in the day, if we were looking for a specific blue paint, an ultramarine, it would be made with lapis lazuli, like the stuff that Vermeer was looking for. It would be incredibly expensive, and so the artist would be limited in their selection to the paints that they had available to them, or be limited in the amount that they could actually paint within a given year.

And sometimes the cost would be exorbitant. For some paints, it still actually is, but a digital artist working on an iPad or a Wacom tablet or whatever would have access to a nigh unlimited range of colors. And so the only choice and selection for that artist comes in deciding what’s right for the piece that they’re doing.

The digital artist is not working with a limited palette of, you know, a dozen paints or whatever they happen to have on hand. It’s a different kind of thing entirely. The digital artist has a much wider range of things to choose from, but it still requires skill. You know, conceptualization, composition, planning, visualization.

There’s still artistry involved. It’s no less art, but it’s a different kind of art, one that already exists today and one that’s already existed for hundreds of years. And because of a banger that just got dropped in the last couple of weeks, it might be eligible for a Grammy next year. It’s an allographic art.

And if you’re going to try and tell me that Mozart isn’t an artist, I’m going to have a hard time believing you.

Allographic art is a category of art originally introduced by Nelson Goodman back in the 60s and 70s. Goodman is kind of like Gordon Freeman, except, you know, not a particle physicist. He was a mathematician and aesthetician, or sorry, a philosopher interested in aesthetics, not an esthetician as we normally use the term now, which has a bit of a different meaning and is a reminder that I probably need to book a pedicure.

Nelson was interested in the question of what’s the difference between a painting and a symphony, and it rests on the idea of uniqueness versus forgery. A painting, especially an oil painting, can be forged, but it relies on the strokes and the process and the materials that went into it, so you basically need to replicate the entire thing in order to make an accurate forgery, much like Pierre Menard trying to reproduce Cervantes’ Quixote in the Jorge Luis Borges short story.

Whereas a symphony, or any song really, that is performed based off of a score, a notational system, is simply going to be a reproduction of that thing. And this is basically what Walter Benjamin was getting at when he was talking about art in the age of mechanical reproduction, too, right? So, a work that’s based off of a notational system can still count as a work of art.

Like, no one’s going to argue that a symphony doesn’t count as art, or that Mozart wasn’t an artist. And we can extend that to other forms of art that use a notational system as well. Like, I don’t know, architecture. Frank Lloyd Wright didn’t personally build Fallingwater or the Guggenheim, but he created the plans for them, right?

And those plans were enacted, and we can say that, yeah, there’s artistic value there. So these things, composition, architecture, et cetera, are allographic arts, as opposed to autographic arts, things like painting or sculpture, or in some instances, the performance of an allographic work. If I go to see an orchestra playing a symphony, a work based off of a score, no one’s going to say that I’m not engaged with art.
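For those reading along with the transcript, here’s a tiny sketch of what that distinction looks like in code, with the “score” as pure notation and any number of performances generated from it. The note list and the perform function are made up for the illustration; the point is just that the work lives in the notation rather than in any single physical object.

```python
# A toy illustration of an allographic work: the "score" is pure notation,
# and any performance rendered from that notation counts as an instance
# of the work. Note names and durations here are invented for the example.

score = [("C", 1), ("C", 1), ("G", 1), ("G", 1), ("A", 1), ("A", 1), ("G", 2)]

def perform(score, tempo_bpm=120):
    """Render the notation into a sequence of (note, seconds) events."""
    seconds_per_beat = 60 / tempo_bpm
    return [(note, beats * seconds_per_beat) for note, beats in score]

# Two different "performances" of the same work, from the same notation.
print(perform(score, tempo_bpm=80))
print(perform(score, tempo_bpm=140))
```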

And this brings us back to the AI Art question, because one of the arguments you often see against it is that it’s just, you know, typing in some prompts to a computer and then poof, getting some results back. At a very high level, this is an approximation of what’s going on, but it kind of misses some of the finer points, right?

When we look at notational systems, we could have a very, you know, simple set of notes that are there, or we could have a very complex one. We could be looking at the score for Chopsticks or Twinkle Twinkle Little Star, or a long lost piece by Mozart called Serenade in C Major that he wrote when he was a teenager and has finally come to light.

This is an allographic art, and the fact that it can be produced and played 250 years later kind of proves the point. But that difference between simplicity and complexity is part of the key. When we look at the prompts that are input into a computer, we rarely see something with the complexity of say a Mozart.

As we increase the complexity of what we’re putting into one of the generative AI tools, we increase the complexity of what we get back as well. And this is not to suggest that the current AI artists are operating at the level of Mozart either. Some of the earliest notated music we have is found on ancient cuneiform tablets, the Hurrian Hymns, dating back to about 1400 BCE, so it took us a little over 3,000 years to get to the level of Mozart in the 1700s.

We can give the AI artists a little bit of time to practice. The generative AI art tools, which are very much in their infancy, appear to be allographic arts, following in the lineage of procedurally generated art, which has been around for a little while longer. And as an art form in its infancy, there are still a lot of contested areas.

Whether it counts, the provenance of materials, ethics of where it’s used, all of those things are coming into question. But we’re not going to say that it’s not art, right? And as an art, as work conducted in a new medium, we have certain responsibilities for documenting its use, its procedures, how it’s created.

In the introduction to 2001’s The Language of New Media, Lev Manovich, in talking about the creation of a new medium, digital media in this case, noted how there was a lost opportunity in the late 19th and early 20th century with the creation of cinema. Quote, “I wish that someone in 1895, 1897, or at least 1903 had realized the fundamental significance of the emergence of the new medium of cinema and produced a comprehensive record: interviews with audiences; a systematic account of narrative strategies, scenography, and camera positions as they developed year by year; an analysis of the connections between the emerging language of cinema and different forms of popular entertainment that coexisted with it. Unfortunately, such records do not exist. Instead, we are left with newspaper reports, diaries of cinema’s inventors, programs of film showings, and other bits and pieces, a set of random and unevenly distributed historical samples.

Today, we are witnessing the emergence of a new medium, the meta-medium of the digital computer. In contrast to a hundred years ago, when cinema was coming into being, we are fully aware of the significance of this new media revolution. Yet I am afraid that future theorists and historians of computer media will be left with not much more than the equivalents of the newspaper reports and film programs from cinema’s first decades.” End quote.

Manovich goes on to note that a lot of the work that was being done on computers, especially in the 90s, was prognostication about their future uses rather than documentation of what was actually going on.

And this is the risk that the denialist framing of AI art puts us in. By not recognizing that something new is going on, that art is being created, an allographic art, we lose the opportunity to document it for the future.

And as with art, so too with science. We’ve long noted that there’s an incredible amount of creativity that goes into scientific research, that the STEM fields, science, technology, engineering, and mathematics, require and benefit so much from the arts that they’d be better classified as STEAM, and a small side effect of that may be that we see better funding for the arts at the university level.

But I digress. In the examples I gave earlier of medical research, of AI being used as an assistive technology, we were seeing some real groundbreaking developments, of boundaries being pushed, and we’re seeing that throughout the science fields. Part of this is because of what AI does well, things like pattern recognition, allowing weather forecasts, for example, to be made more quickly and accurately.

It’s also been able to provide more assistance with medical diagnostics and imaging. The massive growth in the number of AI-related projects in recent years is often due to the fact that many of these projects are just rebranded machine learning or deep learning. In a report released by the Royal Society in England in May of 2024 as part of their Disruptive Technology for Research project, they note how, quote, “AI is a broad term covering all efforts aiming to replicate and extend human capabilities for intelligence and reasoning in machines.”

End quote. They go on further to state that, quote, “Since the founding of the AI field at the 1956 Dartmouth Summer Research Project on Artificial Intelligence, many different techniques have been invented and studied in pursuit of this goal. Many of these techniques have developed into their own subfields within computer science, such as expert systems and symbolic reasoning,” end quote.

And they note how the rise of the big data paradigm has made machine learning and deep learning techniques a lot more affordable and accessible, and scalable too. And all of this has contributed to the amount of stuff that’s floating around out there that’s branded as AI. Despite this confusion in branding and nomenclature, AI is starting to contribute to basic science.

A New York Times article published in July 2024 by Siobhan Roberts talked about how a couple of AI models were able to compete at the level of a silver medalist at the recent International Mathematical Olympiad. This is the first time that an AI model has medaled at that competition. So there may be a role for AI to assist even high-level mathematicians, to function as a collaborator and, again, an assistive technology there.

And we can see this in science more broadly. In a paper submitted to arXiv.org in August of 2024, titled ‘The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery’, authors Lu et al. use a frontier large language model to perform research independently. Quote, “We introduce The AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a scientific paper, and then runs the simulated review process for evaluation,” end quote.

So, a lot of this is scripts and bots hooking into other AI tools in order to simulate the entire scientific process. And I can’t speak to the veracity of the results that they’re producing in the fields that they’ve chosen. They state that their system can, quote, “produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer,” end quote.
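To give a sense of what that kind of pipeline looks like under the hood, here’s a minimal sketch of an automated research loop in that spirit. To be clear, this is not the authors’ code; the stage names and the placeholder generate_text and run_experiment functions are assumptions standing in for whatever frontier model and tooling they actually wired together.

```python
# A hypothetical sketch of an automated research loop in the spirit of
# "The AI Scientist" (Lu et al., 2024). These helpers are placeholders,
# not the paper's actual implementation.

def generate_text(prompt: str) -> str:
    """Stand-in for a call out to a frontier large language model."""
    raise NotImplementedError("wire this up to an actual LLM API")

def run_experiment(code: str) -> str:
    """Stand-in for executing generated experiment code and returning results."""
    raise NotImplementedError("run the generated code in a sandbox")

def research_loop(topic: str) -> dict:
    # Each stage feeds the next: idea -> experiment -> results -> paper -> review.
    idea = generate_text(f"Propose a novel research idea about {topic}.")
    code = generate_text(f"Write experiment code to test this idea:\n{idea}")
    results = run_experiment(code)
    paper = generate_text(f"Write a short paper.\nIdea: {idea}\nResults: {results}")
    review = generate_text(f"Act as a conference reviewer and score this paper:\n{paper}")
    return {"idea": idea, "paper": paper, "review": review}
```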

And that’s fine, but it shows that the process of doing the science can be assisted in various realms as well. One of those areas of assistance is in providing help with material outside the scope of knowledge of a given researcher. AI as an aid to creativity can help explore the design space and allow for the combination of new ideas outside of everything we know.

As science is increasingly interdisciplinary, we need to be able to bring in more material, more knowledge, and that can be done through collaboration, but here we have a tool that can assist us as well. As we talked about with nescience and Excession a few episodes ago, we don’t know everything. There’s more than we can possibly know, so the AI tools help expand the field of what’s available to us.

We don’t necessarily know where new ideas are going to come from. And if you don’t believe me on this, let me reach out to another scientist who said some words on this back in 1980. Quote, “We do not know beforehand where fundamental insights will arise from about our mysterious and lovely solar system.

And the history of our study of the solar system shows clearly that accepted and conventional ideas are often wrong, and that fundamental insights can arise from the most unexpected sources.” End quote. That, of course, is Carl Sagan, from an October 1980 episode of Cosmos: A Personal Voyage titled ‘Heaven and Hell’, where he talks about the Velikovsky affair.

I haven’t spliced in the original audio because I’m not looking to grab a copyright strike, but it’s out there if you want to look for it. And what Sagan is describing there is basically the process by which a Kuhnian paradigm shift takes place. Sagan is speaking to the need to reach beyond ourselves, especially in the fields of science, and the AI assisted research tools can help us with that.

And not just in the conduct of the research, but also in the writing and dissemination of it. Not all scientists are strong or comfortable writers or speakers, and many of them come to English as a second, third, or even fourth language. The role of AI tools as translation devices means we have more people able to communicate, share their ideas, and participate in the pursuit of knowledge.

This is not to say that everything is rosy. Are there valid concerns when it comes to AI? Absolutely. Yes. We talked about a few at the outset and we’ve documented a number of them throughout the run of this podcast. One of our primary concerns is the role of the AI tools in échanger, that replacement effect that leads to technological unemployment.

Much of the initial hype and furor around the AI tools was people recognizing that potential for échanger following the initial public release of ChatGPT. There are also concerns about the degree to which the AI tools may be used as instruments of control, and how they can contribute to what Gilles Deleuze calls a control society, which we talked about in our Reflections episode last year.

And related to that is the lack of transparency, the degree to which the AI tools are black boxes, where based on a given set of inputs, we’re not necessarily sure about how it came up with the outputs. And this is a challenge regardless of whether it’s a hardware device or a software tool.

And regardless of how the AI tool is deployed, its increased prevalence means we’re heading toward a soylent culture, with an increased amount of data smog, or bitslop, or however you want to refer to the digital pollution that comes with the growing amount of AI content in our channels and For-You feeds. And this is likely to become even more heightened as Facebook moves to pushing AI-generated posts into the timelines.

Many are speculating that this is becoming so prevalent that the internet is largely bots pushing out AI-generated content, what’s called the “Dead Internet Theory”, which we’ll definitely have to take a look at in a future episode. Hint: the internet is alive and well, it’s just not necessarily where you think it is.

And with all this AI-generated content, we’re still facing the risk of hallucinations, which we talked about, holy moly, over two years ago when we discussed Loab, that brief little bit of creepypasta that was making the rounds as people were trying out the new digital tools. But the hallucinations still highlight one of the primary issues with the AI tools, and that’s the errors in the results.

In order to document and collate these issues, a research team over at MIT has created the AI Risk Repository, available at airisk.mit.edu. There they have created taxonomies of the causes and domains where the risks may take place. However, not all of these risks are equal. One of the primary ones that gets mentioned is the energy usage of AI.

And while it’s not insignificant, I think it needs to be looked at in context. One estimate of global data center energy usage was between 240 and 340 terawatt-hours, which is a lot of energy, and it might be rising, as data center usage for the big players like Microsoft and Google has gone up by something like 30 percent since 2022.

And that still might be too low, as one report noted that the actual figure could be as much as 600 percent higher. So when you put that all together, that initial estimate could land anywhere between 1,000 and 2,000 terawatt-hours. But the AI tools are only a fraction of what goes on at the data centers, which handle cloud storage and services, streaming video, gaming, social media, and other high-volume activities.

So you bring that number right back down. And how much is AI actually using? The thing is, whatever that number is, say 300 terawatt-hours times 1.3, times six, divided by five, whatever that result ends up being, it doesn’t even chart when looking at global energy usage. Looking at a recent chart of global primary energy consumption by source over at Our World in Data, we see that worldwide consumption in 2023 was about 180,000 terawatt-hours.
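If you want to follow the back-of-the-envelope math at home, here’s that arithmetic spelled out. The 1.3 growth factor, the six-times correction, and the one-fifth AI share are the loose multipliers from the estimates above, rough assumptions rather than measured figures.

```python
# Back-of-the-envelope arithmetic for the estimates discussed above.
# Every multiplier here is a rough assumption, not a measured figure.

data_center_twh = 300        # roughly the middle of the 240-340 TWh estimate
growth_since_2022 = 1.3      # ~30 percent growth for the big players
undercount_factor = 6        # "could be as much as 600 percent higher"
ai_share = 1 / 5             # assume AI is only a fraction of data center load

ai_twh = data_center_twh * growth_since_2022 * undercount_factor * ai_share
global_twh = 180_000         # global primary energy consumption, 2023

print(f"Rough AI estimate: {ai_twh:.0f} TWh")                # ~468 TWh
print(f"Share of global energy: {ai_twh / global_twh:.2%}")  # ~0.26%
```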

The amount of energy potentially used by AI hardly registers as a pixel on the screen compared to worldwide energy usage, yet we’re presented with a picture in the media where AI is burning up the planet. I’m not saying AI energy usage isn’t a concern. It should be green and renewable. And it needs to be verifiable, this energy usage of the AI companies, as there is the risk of greenwashing, of painting over their activities’ true energy costs by highlighting their positive impacts for the environment.

And the energy usage may be far exceeded by the water used for cooling the data centers. As with the energy usage, the amount of water that’s actually going to AI is incredibly hard to disentangle from all the other activities taking place in those data centers. And this greenwashing, which various industries have long been accused of, might show up in another form as well.

There is always the possibility that the helpful stories that get presented, of what AI tools have provided for various at-risk and minority populations, are a form of “aidwashing”. And this is something we have to evaluate for each of the stories posted in the AI Positivity Archive. Now, I can’t say for sure that “aidwashing” exists as a term.

A couple of searches didn’t return any hits, so you may have heard it here first. However, while positive stories about AI often do get touted, do we think this is the driving motivation for the massive investment we’re seeing in the AI technologies? No, not even for a second. These assistive uses of AI don’t really fit the value proposition for the industry, even though those street uses of the technology may point the way forward in resolving some of the larger issues for AI tools with respect to resource consumption and energy usage.

The AI tools used to assist Casey Harrell, the ALS patient mentioned near the beginning of the show, use a significantly smaller model than the ones conventionally available, like those found in ChatGPT. The future of AI may be small, personalized, and local, but again, that doesn’t fit with the value proposition.

And that value proposition is coming under increased scrutiny. In a report published by Goldman Sachs on June 25th, 2024, they question whether there’s enough benefit for all the money being poured into the field. In a series of interviews with experts in the field, they note how the cost savings, the complexity of tasks that AI is able to do, and the productivity gains that would derive from it are all much lower than initially proposed, or happening on a much longer time frame.

In it, MIT professor Daron Acemoglu forecasts minimal productivity and GDP growth, around 0.5 percent and 1 percent respectively, whereas Goldman Sachs’ own predictions were closer to a 9 percent productivity gain and a 6 percent increase in GDP. With estimates varying that widely, what the actual impact of AI over the next 10 years will be is anybody’s guess.
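To get a feel for how wide that gap between forecasts is, here’s the spread applied to a hypothetical round-number economy; the 25 trillion dollar baseline is an arbitrary figure chosen for scale, not a number taken from either report.

```python
# Illustrating the spread between the GDP forecasts mentioned above.
# The $25 trillion baseline is an arbitrary round number for scale,
# not a figure from either report.

baseline_gdp_trillions = 25.0

low_gdp_lift = 0.01    # the ~1 percent end of Acemoglu's estimate
high_gdp_lift = 0.06   # the ~6 percent GDP increase from Goldman Sachs

print(f"Low end:  +${baseline_gdp_trillions * low_gdp_lift:.2f} trillion")
print(f"High end: +${baseline_gdp_trillions * high_gdp_lift:.2f} trillion")
# The high forecast implies roughly six times the economic impact of the low one.
```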

It could be at either extreme or somewhere in between. But the main takeaway from this is that even Goldman Sachs is starting to look at the balance sheet and question the amount of money that’s being invested in AI. And that amount of money is quite large indeed. 

In between starting to record this podcast episode and finishing it, OpenAI raised 6.6 billion dollars in a funding round from its investors, including Microsoft and Nvidia, the largest such round ever recorded. As reported by Reuters, this could value the company at 157 billion dollars and make it one of the most valuable private companies in the world. And this coincides with the recent restructuring from a week earlier, which would remove the non-profit control and see it move to a for-profit business model.

But my final question is, would this even work? Because it seems diametrically opposed to what AI might actually bring about. If assistive technology is focused on automation and échanger, then the end result may be something closer to what Aaron Bastani calls “fully automated luxury communism”, where the future is a post-scarcity environment that’s much closer to Star Trek than it is to Snow Crash.

How do you make that work when you’re focused on a for-profit model? The tool that you’re using is not designed to do what you’re trying to make it do. Remember, “the street finds its own uses for things”, though in this case that street might be Wall Street. The investors and forecasters at Goldman Sachs are recognizing that disconnect by looking at the charts and tables in the balance sheet.

But their disconnect, the part that they’re missing, is that the driving force towards AI may be more one of ideology. And that ideology is the Californian Ideology, a term that’s been floating around since at least the mid-1990s. We’ll take a look at it next episode and return to the works of Lev Manovich, along with Richard Barbrook, Andy Cameron, and Adrian Daub, as well as a recent post by Sam Altman titled ‘The Intelligence Age’.

There’s definitely a lot more going on behind the scenes.

Once again, thank you for joining us on the Implausipod. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausipod.com. And you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music.

And perhaps somewhat surprisingly, given the topic of our episode, no AI is used in the production of this podcast, though I think some machine learning goes into the transcription service that we use. The show is licensed under a Creative Commons 4.0 Share-Alike license. You may have noticed at the beginning of the show that we described this as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated.

You may have also noted that there was no advertising during the program, and there’s no cost associated with the show. But it does grow through word of mouth from the community, so if you enjoy the show, please share it with a friend or two and pass it along. There’s also a Buy Me a Coffee link on each show at implausipod.com, which will go toward any hosting costs associated with the show. I’ve put a bit of a hold on the blog and the newsletter, as WordPress is turning into a bit of a dumpster fire and I need to figure out how to re-host it. But the material is still up there, I own the domain. It’ll just probably look a little bit more basic soon.

Join us next time as we explore that Californian Ideology, and then we’ll be asking: who are roads for? And we’ll do a deeper dive into how we model the world. Until next time, take care and have fun.



Bibliography

A bottle of water per email: The hidden environmental costs of using AI chatbots. (2024, September 18). Washington Post. https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/

A Note to Our Community About our Comments on AI – September 2024 | NaNoWriMo. (n.d.). Retrieved October 5, 2024, from https://nanowrimo.org/a-note-to-our-community-about-our-comments-on-ai-september-2024/

Advances in Brain-Computer Interface Technology Help One Man Find His Voice | The ALS Association. (n.d.). Retrieved October 5, 2024, from https://www.als.org/blog/advances-brain-computer-interface-technology-help-one-man-find-his-voice

Balevic, K. (n.d.). Goldman Sachs says the return on investment for AI might be disappointing. Business Insider. Retrieved October 5, 2024, from https://www.businessinsider.com/ai-return-investment-disappointing-goldman-sachs-report-2024-6

Broad, W. J. (2024, July 29). Artificial Intelligence Gives Weather Forecasters a New Edge. The New York Times. https://www.nytimes.com/interactive/2024/07/29/science/ai-weather-forecast-hurricane.html

Card, N. S., Wairagkar, M., Iacobacci, C., Hou, X., Singer-Clark, T., Willett, F. R., Kunz, E. M., Fan, C., Nia, M. V., Deo, D. R., Srinivasan, A., Choi, E. Y., Glasser, M. F., Hochberg, L. R., Henderson, J. M., Shahlaie, K., Stavisky, S. D., & Brandman, D. M. (2024). An Accurate and Rapidly Calibrating Speech Neuroprosthesis. New England Journal of Medicine, 391(7), 609–618. https://doi.org/10.1056/NEJMoa2314132

Consumers Know More About AI Than Business Leaders Think. (2024, April 8). BCG Global. https://www.bcg.com/publications/2024/consumers-know-more-about-ai-than-businesses-think

Cosmos. (1980, September 28). [Documentary]. KCET, Carl Sagan Productions, British Broadcasting Corporation (BBC).

Donna. (2023, October 9). Banksy Replaced by a Robot: A Thought-Provoking Commentary on the Role of Technology in our World, London 2023. GraffitiStreet. https://www.graffitistreet.com/banksy-replaced-by-a-robot-a-thought-provoking-commentary-on-the-role-of-technology-in-our-world-london-2023/

Gen AI: Too much spend, too little benefit? (n.d.). Retrieved October 5, 2024, from https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit

Goodman, N. (1976). Languages of Art (2 edition). Hackett Publishing Company, Inc.

Goodman, N. (1978). Ways Of Worldmaking. http://archive.org/details/GoodmanWaysOfWorldmaking

Hill, L. W. (2024, September 11). Inside the Heated Controversy That’s Tearing a Writing Community Apart. Slate. https://slate.com/technology/2024/09/national-novel-writing-month-ai-bots-controversy.html

Hu, K. (2024, October 3). OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia. Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-closes-66-billion-funding-haul-valuation-157-billion-with-investment-2024-10-02/

Hu, K., & Cai, K. (2024, September 26). Exclusive: OpenAI to remove non-profit control and give Sam Altman equity. Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/

Knight, W. (n.d.). An ‘AI Scientist’ Is Inventing and Running Its Own Experiments. Wired. Retrieved September 9, 2024, from https://www.wired.com/story/ai-scientist-ubc-lab/

LaBossiere, M. (n.d.). AI: I Want a Banksy vs I Want a Picture of a Dragon. Retrieved October 5, 2024, from https://aphilosopher.drmcl.com/2024/04/01/ai-i-want-a-banksy-vs-i-want-a-picture-of-a-dragon/

Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., & Ha, D. (2024, August 12). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. arXiv.Org. https://arxiv.org/abs/2408.06292v3

Manovich, L. (2001). The language of new media. MIT Press.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Mickle, T. (2024, September 23). Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm. The New York Times. https://www.nytimes.com/2024/09/23/technology/ai-jim-covello-goldman-sachs.html

Milman, O. (2024, March 7). AI likely to increase energy use and accelerate climate misinformation – report. The Guardian. https://www.theguardian.com/technology/2024/mar/07/ai-climate-change-energy-disinformation-report

Mueller, B. (2024, August 14). A.L.S. Stole His Voice. A.I. Retrieved It. The New York Times. https://www.nytimes.com/2024/08/14/health/als-ai-brain-implants.html

Overview and key findings – World Energy Investment 2024 – Analysis. (n.d.). IEA. Retrieved October 5, 2024, from https://www.iea.org/reports/world-energy-investment-2024/overview-and-key-findings

Roberts, S. (2024, July 25). Move Over, Mathematicians, Here Comes AlphaProof. The New York Times. https://www.nytimes.com/2024/07/25/science/ai-math-alphaproof-deepmind.html

Schacter, R. (2024, August 18). How does Banksy feel about the destruction of his art? He may well be cheering. The Guardian. https://www.theguardian.com/commentisfree/article/2024/aug/18/banksy-art-destruction-graffiti-street-art

Science in the age of AI | Royal Society. (n.d.). Retrieved October 2, 2024, from https://royalsociety.org/news-resources/projects/science-in-the-age-of-ai/

Sullivan, S. (2024, September 25). New Mozart Song Released 200 Years Later—How It Was Found. Woman’s World. https://www.womansworld.com/entertainment/music/new-mozart-song-released-200-yaers-later-how-it-was-found

Taylor, C. (2024, September 3). How much is AI hurting the planet? Big tech won’t tell us. Mashable. https://mashable.com/article/ai-environment-energy

The AI Risk Repository. (n.d.). Retrieved October 5, 2024, from https://airisk.mit.edu/

The Intelligence Age. (2024, September 23). https://ia.samaltman.com/

What is NaNoWriMo’s position on Artificial Intelligence (AI)? (2024, September 2). National Novel Writing Month. https://nanowrimo.zendesk.com/hc/en-us/articles/29933455931412-What-is-NaNoWriMo-s-position-on-Artificial-Intelligence-AI

Wickelgren, I. (n.d.). Brain-to-Speech Tech Good Enough for Everyday Use Debuts in a Man with ALS. Scientific American. Retrieved October 5, 2024, from https://www.scientificamerican.com/article/brain-to-speech-tech-good-enough-for-everyday-use-debuts-in-a-man-with-als/