The California Ideology

(this was originally published as Implausipod Episode 39 on December 7th, 2024)

What do you think of when you hear the word California? What do you think its “ideology” might be? If you work in or on high technology, that California ideology may be shaping the way that you work, the projects that you work on, and the business models that high technology pursues.

What does it all mean? The thinking driving the pursuit of certain developments in technology, such as robotics and artificial intelligence, and the rise of accelerationism, needs to be understood by looking at the underlying philosophies. Join us as we dig deep to find out what’s going on.


Let’s start with a question. What do you think of when you hear the word California? What’s the picture that comes into your head? If you had to hazard a guess, what would something called the California Ideology be? Take a moment and lock in your answer. We’re going to have a look during this episode of The ImplausiPod.

Welcome to The ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And what is the California Ideology? Let’s see. Well, if you pictured a mix of hippies and high tech, of new wave and new money, you’d be pretty close. But the California ideology is something that didn’t start in the 2020s or even the 2000s.

We have to go back even earlier. It’s something that came about in the 60s and 70s, that mix of new mysticism and new technology that was coming through, funded in part by a whole lot of U.S. Cold War defense spending. Writing in 2001, Mark Tribe described it as, quote, a deadly cocktail of naïve optimism, techno-utopianism, and new libertarian politics popularized by Wired magazine, end quote.

And from the tone you can sense that there was a point of criticism there. Because the Californian ideology was being defined by European academics, media theorists, and thinkers, who might not have had a technological edge, but definitely had the upper hand when it came to theory. Mark Tribe wrote that definition in 2001, in the introduction to a book by one of those European thinkers, Russian émigré artist and media theorist Lev Manovich.

A few years earlier, in the mid 90s, Manovich had published a piece on Mark Tribe’s Rhizome mailing list. This was back before blogs were even a thing; we might call it a web ring or a web forum now. That piece, called On Totalitarian Interactivity, reads in 2024 like it was written by a time traveler, in the way it absolutely nails our current situation. In it, Manovich compared the two opposing schools of new media philosophy, the Eastern and the Western, and he was critical of both, having seen both of them first hand.

For Manovich, the belief in the power and potential of a new technology is drawn from the experiences of the user, with which we wholeheartedly agree. Those beliefs are going to shape a lot of the way things try and get used, which we’ve talked about a lot before here. But those beliefs are also going to shape the types of things that try to be made.

The technologies that engineers will try and work on, that companies will try and bring to market, that governments will try and fund research in, and that users will eventually adopt. Or not. And this is why it all boils back down to ideology. As Manovich said, quote, Western media artists usually take technology absolutely seriously and despair when it does not work, end quote.

And the solution for the Western artists is often more technology. Manovich goes further and states, quote, A Western artist sees the internet as a perfect tool to break down all hierarchies and bring the art to the people (while in reality more often than not using it as a super-media to promote his or her name), end quote.

And in 1996, if someone was going to try and describe influencer culture on social media, I think he kind of nailed it. Like I said, time traveler. But both these quotes kind of hint at what the California ideology is. Manovich would go on further to write a book in 2001 called The Language of New Media, which went much more in depth on some of the topics we’re discussing here, and we’ll return to that at a later point in time.

To really understand the Californian ideology, we need to look at where it originally came from. And the best place to do that is to look at the paper that originally identified it. A 1995 essay by Richard Barbrook and Andy Cameron. And buckle up, this one might take a bit.

The Californian Ideology was originally published by the authors in 1995 in a British magazine titled Mute. It was a mix of online and print versions, so I can’t tell exactly which format the original came out in, and there’s been a couple different versions that have been published since. It’s still accessible online, so I’ll put the link in the notes.

You can go to the metamute.org website if you want to see their archives as well. The essay is typical of a lot of those mid 90s works on the internet, as everything’s starting to come on board, and people are really just feeling their way around it and trying to figure it out. Here, the authors describe the internet as hypermedia.

They draw on very McLuhan-esque terminology to situate it, but we can see where they’re going with it, and looking back with nearly 30 years of hindsight, it’s clear what they’re talking about. There’s very much a leftist, anti-capitalist view to much of their work, and we can see that in some of the terminology they use, even in the opening paragraph.

Quote, “Once again, capitalism’s relentless drive to diversify and intensify the creative powers of human labor is on the verge of qualitatively transforming the way in which we work, play and live together. By integrating different technologies around common protocols, something is being created which is more than the sum of its parts.

When the ability to produce and receive unlimited amounts of information in any form is combined with the reach of the global telephone networks, existing forms of work and leisure can be fundamentally transformed.” End quote.

And they go on further to say that anyone who can offer a simple explanation of what’s going on will be listened to, and this has come about through a quote, “Loose alliance of writers, hackers, capitalists, and artists from the west coast of the USA”.

And what those people have come up with is the Californian ideology, which is, quote, a heterogeneous orthodoxy for the coming information age, end quote. The Californian ideology is this blend of hippies and high tech. It’s, as they say, an amalgamation of opposites, combining a freewheeling spirit and an entrepreneurial zeal, where everyone will be both hip and rich.

And because it’s optimistic and positive and allows space for everybody, kind of like Clay Shirky said, it allows computer nerds, slackers, capitalists, social activists, academics, futurist bureaucrats, and opportunistic politicians, to say the least, to buy in, to get traction, to be seen as forward thinking if they hop on the early wave of this new technology.

And Barbrook and Cameron characterize this as an extropian cult, one that also sees buy-in from various European artists and academics as well. In order to really understand the Californian ideology, Barbrook and Cameron go deep into the rise of the virtual class, who are, according to Arthur Kroker and Michael Weinstein in their book Data Trash, the techno-intelligentsia of cognitive scientists, engineers, computer scientists, video game developers, and all the other communications specialists.

This echoes a lot of what Daniel Bell was talking about in 1973 in The Coming of Post-Industrial Society, and here, 20 years later, they’re starting to actually see it become reality. And we can see the roots of what rose to become the gig economy in what all of these authors were talking about, as they were discussing it already happening to the virtual class in the 1990s.

It’s important to remember that the gig economy did not first come for the taxi drivers, it came for the tech workers, and then they thought it was good enough for everybody else. But this is in part because the digital class, the virtual class, was incredibly myopic. They were a very privileged part of the labor force, and the benefits that they accrued did not necessarily apply to the population at large.

Barbrook and Cameron note that “the Californian ideology therefore simultaneously reflects the disciplines of market economics and the freedoms of hippie artisanship. This bizarre hybrid is only made possible through a nearly universal belief in technological determinism.” End quote. And this new technology allowed for the possibilities of the social liberalism that the hippies were looking for.

Along with the economic liberalism, or the libertarianism, really, that the new right was looking for. And what both of them were looking for, as a way to legitimize what they were talking about, is a link back to the founding fathers of the United States democracy. Quoting from Barbrook and Cameron again, “Above all, they are passionate advocates of what appears to be an impeccably libertarian form of politics.

They want information technologies to be used to create a new Jeffersonian democracy, where all individuals would be able to express themselves freely within cyberspace.” And while that sounds like a great idea, looking back to the roots of American democracy, that’s not without its problems. Because Jeffersonian democracy, as popularized by the American founding father Thomas Jefferson, had very particular ideas of who counted when it came to that democracy.

Quote, “their utopian vision of California depends on a willful blindness towards the other, much less positive, features of life on the west coast: racism, poverty, and environmental degradation.” End quote.

What the authors are saying is that there’s a deep history of exploitation that goes hand in hand with the development of that ideology. And that in order to bring it about, you have to hide or ignore some of the realities of that history. 

At the core of the Californian ideology, there’s a lot of ambiguity as it’s bridging that gap between the left and the right, but the best way to understand it is probably to realize that it’s trying to have its cake and eat it too. It’s a hybrid faith that’s trying to cater to both the new left and the new right at the same time, and realize the utopian visions of both.

And regardless of whether it’s drawn from the left or the right, the Californian ideology is a capitalist ideology. As I said earlier, this was written in the mid 90s in the early days when people were figuring out what the internet would become, but for Barbrook and Cameron, they note that hypermedia, what they call the internet, would be a key component of the next stage of capitalism.

On the new left, the authors see the proponents of the virtual community, with people like Howard Rheingold, where the internet could allow for the rise of a high-tech gift economy based on the voluntary exchange of information and ideas and knowledge. On the new right, they note an embrace of laissez-faire ideology, where tech culture publications like Wired would just uncritically reproduce works by Newt Gingrich, for example, buying into McLuhan-esque technological determinism and thinking that electronic telecommunications will give rise to an electronic marketplace.

For the authors writing in 1995, they weren’t sure what this would lead to. Quote, What is unknown is the social and cultural impact of allowing people to produce and exchange almost unlimited quantities of information on a global scale. End quote. And looking at the state of the internet 30 years later, we see the merger of both of those ideas of an electronic marketplace and a virtual community with the free exchange of ideas.

But that often can be deeply contested and there’s a lot of friction involved. The California ideology promises that each member of the virtual class can become a successful high tech entrepreneur, much like the way that many Americans consider themselves temporarily embarrassed millionaires, and that these people are quote, “Resourceful entrepreneurs who are the only people cool and courageous enough to take risks.”

The Californian ideology proposes a world where, quote, “visionary engineers are inventing the tools needed to create a free market within cyberspace, such as encryption, digital money, and verification procedures,” end quote. And if this sounds like it was ripped out of the pitch deck for any recently proposed crypto venture of the last five years, then I want to remind you, again, this is 1995 written by people that were critical of what was happening.

One of the things Barbrook and Cameron note about the Californian ideology is how much it ignores its own history of the government funding that went into the development of the technology, especially on the West Coast, and the rise of the mixed economy there. Much of this is covered by researcher Tung-Hui Hu in their book, A Prehistory of the Cloud, published in 2015, where they note how much of the infrastructure of the internet mirrors the physical surroundings, especially on the West Coast.

And my own take is that these particular visions of cyberspace were removed from the physical realm where it was thought that everything was formless and weightless and that anybody could be anything. We see the creation tales from many elder myths made manifest once again in the mythic visions of cyberspace and the new cyber religion, so it follows.

We talked about these mythic visions back in episode 26, titled Silicon Dreams, so I encourage you to go check that one out if you’d like. What those mythic visions were really good at was inspiring the DIY culture that really developed some of the innovative ideas that were extant within the burgeoning computer scene.

And while this includes technological developments, like the early personal computers that were developed in garages across California, it also includes social elements, like new agers, surfing, skateboarding, LGBTQ liberation, health food, yoga, pop music, and a whole bunch else besides. The fact you didn’t necessarily need to be a tech innovator helped get buy-in from a lot more groups with respect to the California ideology, and the tech was definitely helped a whole lot by government spending.

And the contribution by all these groups, the community, the DIYers, the popular culture, and the government at large, is something that often gets ignored by the entrepreneurs and other supposed tech visionaries. As the authors state, all technological progress is cumulative. It depends on the results of a collective historical process and must be counted, at least in part, as a collective achievement.

But this idea of collective achievement goes against much of their narrative. That narrative draws on many sources of inspiration, and given that we’re dealing with high technology, at least one of those is science fiction. Now, sci-fi, whether it’s cyberpunk or otherwise, often has a very libertarian ethos.

The authors note how the utopian visions of the future on the right side of Californian ideology often echoed the predictions of Isaac Asimov, Robert Heinlein, and other sci-fi writers, quote, whose future worlds were always filled with space traders, super-slick salesmen, genius scientists, pirate captains and other rugged individualists, end quote.

This is the trail that led back to the Jeffersonian democracy and the Founding Fathers. In the 80s and 90s, that same character would show up as the hacker, a quote, lone individual fighting for survival within the virtual world of information, end quote. And this is where the California of that present connected with the California of the past: the ideology of the gold rush, of the self-sufficient individual living out on the frontier.

It never really went away, it just became part and parcel of the underlying ideology of cyberspace, of the internet, of high technology, of California. And that ideology is what tech calls thinking.

What Tech Calls Thinking is a book published in 2020 by Adrian Daub, a professor of comparative literature at Stanford. And what he shows us is that despite being 25 years later, we’re still seeing a lot of the same old thinkers show up. Even though Silicon Valley itself has gone through some major changes since 1995: the only players of note from back then are Microsoft and Apple; Google didn’t yet exist, Amazon was barely getting started, and Facebook and the rest of social media were still years away; and the owners of some of those companies are now famous enough to be recognizable by only their last name.

We can call it the Madonna Zone, or maybe even the Cher Zone, though these guys aren’t about sharing. They have names like Bezos, and Musk, and Zuckerberg, and I guess we could add Altman to that list now, too. In Altman’s recent essay, The Intelligence Age, he outlines some of the philosophy driving his quest towards AGI.

But, regardless of the name or the company that they founded or own, not always the same thing, we need to point that out, these tech oligarchs express a strikingly similar ideology. We covered a little bit of that almost a year ago when we looked at the Techno-Optimist Manifesto published by Marc Andreessen, formerly of Netscape, but Daub covers it sufficiently well.

In each of the seven chapters of the book, Daub covers one of the ideas that’s central to the philosophy behind Silicon Valley, usually characterized by a single author, perhaps two. These writers and philosophers include some familiar names like Marshall McLuhan, Ayn Rand, Aldous Huxley, René Girard, Joseph Schumpeter, and Cass Phillips.

And if we’ve heard a bunch of those names already, it’s not by accident. Like I said, there’s a lot of consilience and overlap. In the course of my own studies in grad school, I covered a few of these names in depth, though I’ll admit not all, but what I see here overlaps a lot of what I’ve studied elsewhere.

The overarching aim of Daub’s work is to get behind the media’s focus on the tech industry’s thought leaders, the public intellectuals that get written up so often in media pieces, and trace the ideas and where they’ve come from. And the key point of inception for Daub is Stanford. This is the inflection point, or quilting point, where everything comes together.

This makes some sense for Daub. It was where he was located and viewing his surroundings. And there are other universities involved as well. When one thinks of big tech schools, MIT surely comes to mind, too, but for a Californian perspective, we need to look at Stanford. And the university is important, because a lot of tech’s ideas are quote, university adjacent, or quote, academic.

Big Tech seeks the legitimation of their ideas via the proximity to higher learning, as the people involved have often dropped out or not completed their education. Dropping out is the focus of Chapter One, as it allows founders to buy into the pre existing narrative, one that’s pre packaged and ready for them, and makes for easier work for the journalists covering the field.

There’s a visibility of being associated with the college, but only briefly. Don’t overstay your welcome if you want to be treated as a visionary. As Daub points out, what this means is that the education of these founders is often incomplete, missing the context that would come with more advanced study and absent from a general studies survey course.

I’ll admit to having been blessed with a couple of great profs back in the day myself, but dropping out allows one to fit the role of a maverick, able to reject elite institutions and not constrained by conventional thinking, free to engage in the creative destruction that comes from disrupting the market.

And that Schumpeterian creative destruction features heavily, comprising much of Chapter 6. Joseph Schumpeter was an Austrian economist who worked at Harvard starting in the 1930s, and he coined the term as part of his observations of the nature of the business cycle. Much of what he was talking about was the instability of capitalism and the inevitability of socialism, but this was done through the lens of the role of the entrepreneur in the process of innovation.

That is, bringing something new to market. Quote, “The fundamental impulse that sets and keeps the capitalist engine in motion comes from the new consumer goods, the new methods of production or transportation, the new markets, the new forms of industrial organization that capitalist enterprise creates.” This is from Schumpeter, which Daub quotes at length in his work.

This shaking up is what keeps it afloat. If it wasn’t for the shakeup, the instability in the system would become too much, and it would all fall apart. As Daub notes, quote, The concept of creative destruction sublimates the concept of revolution. End quote. Things continually get disrupted, and the only constant seems to be change.

Of course, the title of chapter six is disruption, that underlying ethos that impels so much change within Silicon Valley. Disruption is one of those totalizing terms that gets leveraged by Silicon Valley to suggest that this is the only way that change or innovation can happen. As Daub notes, quote, Disruption plays to our impatience with structures and situations that seem to coast on habit and inertia, and it plays to the press’s excitement about underdogs, rebels, and outsiders.

It’s that personal narrative that we talked about a few minutes ago that allows these multi billionaire founders to consider themselves still the plucky underdog from their favorite movies when they were young. And it allows them to deal with the cognitive dissonance of realizing that perhaps they’re on the other side.

Because once you’ve got a couple billion dollars behind you, you are the establishment, no matter how you might frame yourself. Narratives about disruption are ultimately narratives about change, but only in a certain constrained direction. As Daub notes, disruption is newness for people who are scared of genuine newness, revolution for people who don’t stand to gain anything from revolution.

And that idea that Silicon Valley is introducing something that’s genuinely new really needs to be looked at with a hard, critical eye. Daub notes, one ought to be skeptical of unsubstantiated claims of something being totally new and not following the hitherto established rules of business, of politics, of common sense.

The amount of stuff that’s actually new or a radical innovation is incredibly tiny. For an example, one needs to look no further than a single episode of the show Connections, hosted by the British historian of science and technology, James Burke, where he traces the multiple contingencies and coincidences that have led through the path of history to our modern inventions and technologies.

And if we apply this kind of historiographic analysis through a critical lens to nearly anything that’s claimed to be disruptive, we can see the path through history that led up to that point. Genuine newness is very, very rare. And even the claims that the tech industry makes, which Daub quotes, that they’re making fundamental transformations of how capitalism functions, can be looked at with a skeptical eye.

Because as Schumpeter was writing 100 years ago, and Marx decades before that, that’s just how capitalism has always worked. Disruption is just faster and more far-reaching, and as we suggested, it’s totalizing. As Daub writes, quote, Disruption seems to suggest that the rapids are all there is and can be. End quote. And we’ve talked about those rapids before, back in episode 27, The Old Man and the River, back in February.

But the speed is the thing. Quote, Disruption seems to lean in the direction of more capitalism, end quote. And this is not by accident. The disruptors want to go faster, and that theory of move fast and break things has a historical antecedent from nearly a hundred years ago. That theory is accelerationism, and we need to talk about it.

Accelerationism is an ideology, or set of philosophies, that crosses party lines. It exists on both the left and the right, and what it calls for is the radical acceleration of everything that’s going on: an intensification of the capitalization of everything in order to get to some perceived next level of human growth or achievement.

There’s this idea that we’re not going fast enough, that the checks and balances that we put on society are holding us back from reaching that. And if we just go faster, harder, we’ll have enough technology or AI or whatever that’ll help solve those problems. And we can deal with it in whatever imagined future state where we have the technology.

And it should be noted that there’s left-wing groups that believe in this accelerationism as well, who believe that if you allow capitalism to put the pedal to the metal, it’ll eventually go off the rails, and then you can rebuild out of the ashes of whatever’s left. You know, once we get through that cool Mad Max stage and actually get around to rebuilding society.

But as you can tell from my tone, it’s an incredibly bad idea. First off, there’s this assumption that whoever is pushing the pedal to the metal, whoever has their hand on the throttle, will be there at the end to reap the rewards, once we get there. You know, that they’ll be among the survivors. And second, an incredibly large number of people will get hurt in the process of going faster and harder.

It’s just incredibly irresponsible, and there’s no guarantee that we get there either. It’s an assumption that they make that, hey, if we strap a rocket to our back like Wile E. Coyote, we’ll get to where we’re going faster. But it’s not necessarily borne out. It’s all in theory. We talked about it on one of our episodes of the podcast about a year ago, episode 17, called Not a Techno Optimist.

So, my apologies for retreading some old ground, but it’s worth mentioning again. Go check it out in the archives if you’d like. There’s more to talk about when it comes to accelerationism, but we’re going to have to get into that a few episodes from now. The main thing is this idea of being a disruptor.

It isn’t a thing of science fiction, which inspires so much of Silicon Valley. It’s fantasy. Daub also talks about the continued role of Ayn Rand and her influence on the libertarian elements that are so prevalent in technology. I think the best quote summarizing Ayn Rand can be attributed to John Rogers.

Quote, there are two novels that can transform a bookish 14-year-old kid’s life: The Lord of the Rings and Atlas Shrugged. One is a childish daydream that can lead to an emotionally stunted, socially crippled adulthood, in which large chunks of the day are spent inventing ways to make real life more like a fantasy novel.

The other is a book about orcs. End quote. Of course, maybe not skipping that English lit class in the college you dropped out of would help give a little context for understanding Rand. However, we’re not here to chase that particular rabbit. The big takeaway from Daub’s work is a look at the tech industry’s philosophical roots and its focus on money.

As he notes, quote, The tech industry we know today is what happens when certain received notions meet with a massive amount of cash with nowhere else to go. End quote. Absent an idea of what to do with all that money, tech looked around for legitimation. And, as Daub notes, quote, the ideas that tech calls thinking were developed and refined in the making of money, end quote.

This is accomplished via a blend of state intervention and capitalist entrepreneurship that leverages DIY culture, relying on it for essential contributions by innovators and early adopters, to be sure. And much of tech has resulted in the development of, quote, mass markets for private companies to sell existing information commodities, end quote, things like films and music and television.

Stuff that we would normally call art has been transformed by the shift from representation to manipulation that occurs within the digital realm, according to Manovich. Further, he notes that Western artists appear to break down hierarchies as part of the process of building a personal brand for themselves, and coming out of the influencer decade of the 20-teens, where catchphrases like “the brand is you” get tossed around, this seems self-evident.

It’s a commodification of the self. But we’ll have to wait for a later date to do a deeper dive into this process of becoming which drives influencer culture. We’ll let you know when that episode is ready to go. 

By contrast, for Manovich, the Eastern artists, quote, recognize that the nature of technology is that it does not work, it will always break down. It will never work as it is supposed to. 

For the outside observer, we can see how this makes sense, where the failures of one technology provide the opportunity for the sale of another technology to solve the problems of the first one. And one thing tech likes is another sale, because tech is ultimately a capitalist enterprise.

And it is this focus on capitalism which underlies the Californian ideology as a whole. The connection point between Daub and the work 25 years previous is that those ideas never went away. The tech industry in 2020 is pretty much still the same industry that Barbrook and Cameron identified back then.

Witness that quote about the crypto pitch deck we made earlier. The big difference is that there is more of it, the increased focus on the money. We’re just later along in the late stage capitalism. We’re not so far along that we’ve reached the sci fi aspirations driving some of them forward, as mentioned earlier, but those aspirations exist in both works too.

Barbrook and Cameron note that there is a drive for the emergence of the post-human that we can see in N. Katherine Hayles’s work, as well as in various cyberpunk authors such as William Gibson and others. Post-humanism is, after all, a quote, biotechnological manifestation of the social privileges of the virtual class, end quote.

This is why there is such a strong connection to the accelerationists mentioned earlier. The remaining virtual class are aging and looking to live longer. There is a fear of death motivating much of the virtual class, characterizing them as extropian, that sect of transhumanists seeking to extend their lifespans to the extent that they may one day live indefinitely.

They seek to advance technology faster, as that dark specter inexorably catches up with them. The third point in common between What Tech Calls Thinking and The Californian Ideology, two works separated by 25 years, a continent, and an ocean, is the critique of the underlying ideology of the virtual class itself.

There’s other names for it floating around, of course, calling them Tech Bros, or TESCREAL, or whatever, but like Manovich pointed out earlier, it’s all of the same thread of Western critiques of Tech. And seeing as we mentioned Lev Manovich, let’s return to a bit of what he had to say on totalitarian interactivity.

There, from his position as a quote, post communist subject, he saw the internet as a communal apartment of the Stalin era where everybody spies on everybody else, or as a giant garbage site for the information society, with everybody dumping their used products of intellectual labor and nobody cleaning up.

And in this moment, as we witness a mass migration from Twitter to Bluesky, with some people deleting their posts and accounts, and others not, just fleeing, those statements ring poignantly true. We are witnessing the migration of much of the virtual class in real time, as platforms shift and become unstable, and new platforms are found.

There’s a degree of insulation that comes with this, as if moving platforms is somehow enough of an action to take. There’s a blending of beliefs going on here. As Barbrook and Cameron note, quote, Many members of the virtual class want to be seduced by the libertarian rhetoric and technological enthusiasm of the new right, end quote, a term that describes the Newt Gingrich-era Republicans in the U.S. in the mid-1990s.

That belief and enthusiasm affords them the opportunity to continue living much as they had previously. Not all internet users are so lucky. There are clear divides. Redlining by telephone companies creates a very real gap in accessibility to the information superhighway.

This was written around the same time that the U.S. Department of Commerce was warning of the digital divide in 1995, a term which would soon be picked up and championed elsewhere by those advocating for more widespread internet adoption. We can see why.

The scholar Tung-Hui Hu traces this very real phenomenon of physical geography’s effect in shaping the rather ephemeral nature of cyberspace in their book A Prehistory of the Cloud (2015).

For those outside the virtual class, the prospects are much bleaker. Quoting from Barbrook and Cameron: The deprived only participate in the information age by providing cheap, non-unionized labor for the unhealthy factories of the Silicon Valley chip industry. End quote. Fifteen years later, this could still describe Foxconn making iPhones for Apple, or the warehouses at Amazon, or drivers for Uber.

The trend toward the gig economy had a long arc that started well before the smartphone era. The digital artisans were, quote, living within a contract culture, end quote, and gigged long before others, well paid in a manner that disincentivized collective action. To quote the authors again: Although they enjoy cultural freedoms won by the hippies, most of them, that is, the virtual class, are no longer actively involved in the struggle to build ecotopia.

End quote. The true believers of the new left involved in the building of cyberculture took their stock options and left the suburbs behind. This cybernetic libertarianism was very much an “I’ve got mine” mindset, never imagining that one day those cyber leopards might eat their faces. And this follows from the ideals of the Jeffersonian democracy that drives the Californian ideology.

In a section titled Cyborg Masters and Robot Slaves, Barbrook and Cameron note that the fear of the rebellious underclass has corrupted the most fundamental tenet of the Californian ideology: its belief in the emancipatory potential of the new information technologies. As they note, those technologies of freedom are turning into machines of dominance.

The crux of the Californian ideology is in Barbrook and Cameron’s description of the racial divide in California. “If human slaves are ultimately unreliable, then mechanical ones will have to be invented. The search for the holy grail of artificial intelligence reveals this desire for the golem. A strong and loyal slave whose skin is the color of earth and whose innards are made of sand.”

As we discussed back in episode 17, there is a utopian vision here, and Barbrook and Cameron note how these techno utopians, quote, imagine that it is possible to obtain slave like labor from inanimate machines. However, slave labor cannot be obtained without somebody being enslaved, end quote. And this can be seen in very recent history, too.

Anyone wondering about the results of the voting on Proposition 6 in California during the recent national election in the United States in November 2024 (for any future listeners) will find their answer here.

Proposition 6 was a proposed amendment to California’s constitution that would bar slavery in any form and repeal involuntary servitude as punishment for a crime.

In it, Californians voted 53.3 percent against.

The Californian ideology has a dark history, one that still has a hand in shaping the future.

Thank you for joining us for this episode of the Implausipod. I’m your host, Dr. Implausible. Join us for the next few episodes as we continue our journey into exploring what the Californian ideology has left us, as we look into those Californian roads and car culture, and then into what that utopic vision of the world would look like, delving into the world model we hinted at when we talked about Sam Altman’s Intelligence Age essay.

I hope we can explore these before the end of 2024 and then we’ll see what 2025 has in store. 

You can reach me at drimplausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 Share-Alike license.

You may have also noted that there was no advertising during the program, and there’s no cost associated with the show. But it does grow through word of mouth in the community, so if you enjoy the show, please share it with a friend or two and pass it along. There’s also a Buy Me A Coffee link for each show at implausipod.com, which goes toward the hosting costs associated with the show.

Over on the blog, we’ve started up a monthly newsletter. There will likely be some overlap with future podcast episodes, and newsletter subscribers can get a hint of what’s to come ahead of time, so consider signing up, and I’ll leave a link in the show notes.

Until next time, take care, and have fun.



Bibliography

Altman, S. (2024, September 23). The Intelligence Age. https://ia.samaltman.com/

Barbrook, R., & Cameron, A. (1995). The Californian Ideology. Mute, 1(3). http://www.imaginaryfutures.net/2007/04/17/the-californian-ideology-2/

Bell, D. (1973). The coming of post-industrial society: A venture in social forecasting. Basic Books.

Daub, A. (2020). What Tech Calls Thinking: An Inquiry into the Intellectual Bedrock of Silicon Valley. Farrar, Straus and Giroux.

Hayles, N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press.

Hu, T.-H. (2015). A Prehistory of the Cloud. The MIT Press.

Manovich, L. (1996). On Totalitarian Interactivity. https://www.manovich.net

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Schumpeter, J. A. (1962). Capitalism, Socialism and Democracy. Harper & Row.

Tribe, M. (2001). Introduction. In L. Manovich, The Language of New Media. MIT Press.

Half-Life 2: 20 years on

2004 was as pivotal a year for the video game industry as 1999 was for film, and two of the titles that had the biggest impact have been getting an extended retrospective. While World of Warcraft wasn’t necessarily my favourite MMO, I can’t deny the larger impact it had on the MMO market as a whole. (I wrote at length about this impact in my first peer-reviewed academic article back in 2009 too. Hopefully one day I can share that with you).

The other game with a massive impact was Half-Life 2, and there’s an extended documentary about it up on YouTube looking back at how it changed video games:

Like many gamers of the early 21st century, I played Half-Life 2 on release, playing through the full campaign stealthily and working through every nook and cranny.


Watching the clips hit me right in the feels with nostalgia, so I fired up the install and started another playthrough. The game came back to me fast: the keys are instinctive, and the maps well worn in my memory. I moved through quickly too. The names of the various chapters of the game evoked memories: Water Hazard, Ravenholm, Nova Prospekt, each with their identifiable sections and set-pieces: the chopper fight, the flaming traps, deadly snipers along the rail line, swarming antlions, and more.

The sections proceed naturally, a testament to the storytelling by the creators of the game. As I’m playing through, each part has me wanting to see what’s next, even though I’ve played this at least a dozen times. (Twenty years ago, I’d restart the game shortly after finishing it, as I wanted to replay some of the early chapters again. It speaks to how dynamic the gameplay is, with very different feels between the foot, airboat, and buggy sections).

It’s not a perfect game, but it’s close. There are occasional parts where you can see some of the rough seams, and not everything is interactive. It’s fairly linear, without the dynamic ways of working through situations that can be seen in some of its contemporaries (Deus Ex, Thief, and System Shock 2 come to mind, but again, those are exemplars of the genre, in the pantheon of all-time greats).

About to go for a ride…

And while the graphics look a bit dated compared to more modern games, they’re still fine: with a great view into the distance, and so fast on a modern machine that gameplay is smooth and seamless. But I don’t find the “dated” visuals a negative either: it’s still clearly a game, and the lo-fi version of it allows for a certain amount of projection to take place. It’s “cool” media, to borrow McLuhan’s parlance, or as Scott McCloud wrote in “Understanding Comics” (a decade before this game was released), less visual information conveyed in the panel allows the audience to map themselves onto the figure on the page.

Gordon Freeman becomes Everyman, in this lo-fi version.

The amount of influence this game has had is also evident in the playthrough. I’m not a video game historian (well, I haven’t been for a while), but the entire Call of Duty / Modern Warfare section of the games industry draws a line through Half-Life 1 and 2 (and Counter Strike and Team Fortress more specifically). The design language of modern gaming can be seen here in the simple and direct playthrough, the embedded tutorials and tooltips throughout, the smooth ease of use of the various elements of the game.

For anyone reading this who has never played Half-Life 2, you owe it to yourself to give it a shot. It’s iconic for a reason, and any history of the video game industry needs to spend a few hours racing along the canals or walking through Ravenholm. It holds up remarkably well.

AI Refractions

(this was originally published as Implausipod Episode 38 on October 5th, 2024)

https://www.implausipod.com/1935232/episodes/15804659-e0038-ai-refractions

Looking back over the year since the publication of our AI Reflections episode, we survey the state of the AI discourse at large, where recent controversies, including those surrounding NaNoWriMo and whether AI counts as art or can assist with science, bring the challenges of studying the new medium to the forefront.


In 2024, AI is still all the rage, but some are starting to question what it’s good for. There are even a few who will claim that there’s no good use for AI whatsoever, though this denialist argument takes it a little too far. We took a look at some of the positive uses of AI a little over a year ago in an episode titled AI Reflections.

But it’s time to check out the current state of the art, take another look into the mirror and see if it’s cracked. So welcome to AI Refractions, this episode of ImplausiPod.

Welcome to The ImplausiPod, an academic podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in this episode, we’ve got a lot to catch up on with respect to AI. So we’re going to look at some of the positive uses that have come up, how AI relates to creativity, and how statements from NaNoWriMo caused a bit of controversy.

And how that leads into AI’s use in science. But it’s not all sunny over in AI land. We’ve looked at some of the concerns before with things like Echange, and we’ll look at some of the current critiques as well. And then we’ll look at the value proposition for AI, and how recent shakeups at OpenAI in September of 2024 might relate to that.

So we’ve got a lot to cover here on our near one year anniversary of that AI Reflections episode, so let’s get into it. We’ve mentioned AI a few other times since that episode aired in August of 2023. It came up in episode 28, our discussion on black boxes and the role of AI handhelds, as well as episode 31 when we looked at AI as a general purpose technology.

And it also came up a little bit in our discussion about the arts, things like Echanger and the Sphere, and how AI might be used to assist in higher fidelity productions. So it’s been an underlying theme about a lot of our episodes. And I think that’s just the nature of where we sit with relation to culture and technology.

When you spend your academic career studying the emergence of high technology and how it’s created and developed, when a new one comes on the scene, or at least becomes widely commercially available, you’re going to spend a lot of time talking about it. And we’ve been obviously talking about it for a while.

So if you’ve been with us for a while, first off, thank you, and this may be familiar to you; if you just started listening recently, welcome, and feel free to check out those episodes we mentioned earlier. I’ll put links to the specific ones in the text. Looking back at episode 12, we started by laying down a definition of technology.

We looked at how it functioned as an extension of man, to borrow from Marshall McLuhan, but the working definition of technology that I use, the one that I published in my PhD, is that “Technology is the material embodiment of an artifact and its associated systems, materials, and practices employed to achieve human ends.”

And this definition of technology covers everything from the sharp stick and sharp-stick-related technologies like spears, pencils, and chopsticks, to our more advanced tech like satellites and AI and VR and robots and stuff. When you really think about it, it’s a very expansive definition, but that helps in its utility, allowing us to recognize and identify things.

And by being able to cover everything from sharp sticks to satellites, from language to pharmaceuticals to games, it really covers the gamut of things that humans use technology for, and contributes to our emancipatory view of technology: that technology is ultimately assistive and can aid us with issues we’re struggling with.

We recognize that there are other views and perspectives, but this is where we fall on the spectrum. Returning to episode 12, we showed how this emancipatory stance contributes to an empathetic view of technology, where we can step outside our own frame of reference and think about how technology can be used by somebody who isn’t us.

Whether it’s a loved one, somebody close to us, a member of our community or collective, or, more wide-ranging, somebody we’ll never come into contact with. Persons with different abilities and backgrounds will find different uses for the technology. Like the famous quote goes, “the street finds its own uses for things.”

Maybe we’ll return back to that in a sec. We finished off episode 12 looking at some of the positive uses of AI at that time that had been published just within a few weeks of us recording that episode. People were recounting how they were finding it as an aid or an enhancement to their creativity, and news stories were detailing how the predictive text abilities as well as generative AI facial animations could help stroke victims, as well as persons with ALS being able to converse at a regular tempo.

So by and large it can function as an assistive technology, and in recent weeks we have started trying to catalogue all those stories. Back in July, over on the blog, we created the Positive AI Archive, a place where I could put links to all the stories I come across. Me being me, I’ve forgotten to update it since, but we’ll get those links up there and you should be able to follow along.

We’ll put the link to the archive in the show notes regardless. And, in the interest of positivity, that’s kinda where I wanted to start the show.

The street finds its own uses for things. It’s a great quote from Burning Chrome, the collection of short stories by William Gibson that contained Johnny Mnemonic, which led to the film with Keanu Reeves, and then subsequently The Matrix and Cyberpunk 2077 and all those other derivative works. The street finds its own uses for things is a nuanced phrase, and nuance can be required when we’re talking about things, especially online, where everything gets reduced to a soundbite or a five-second dance clip.

The street finds its own uses for things is a bit of a mantra and it’s one that I use when I’m studying the impacts of technology and what “the street finds its own uses for things” means is that the end users may put a given technology to tasks that its creators and developers never saw. Or even intended.

And what I’ve been preaching here, what I mentioned earlier, is the empathetic view of technology. And we look at who benefits from using that technology, and what we find with the AI tools is that there are benefits. The street is finding its own uses for AI. In August of 2024, a number of news reports talked about Casey Harrell, a 46 year old father suffering from ALS, amyotrophic lateral sclerosis, who was able to communicate with his daughter using a combination of brain implants and AI assisted text and speech generation.

Some of the work on these assistive technologies was done with grant money, and there’s more information about the details behind that work, and I’ll link to that article here. There’s multiple technologies that go into this, and we’re finding that with the AI tools, there’s very real benefits for persons with disabilities and their families.

Another thing we can do when we’re evaluating a technology is see where it’s actually used, where the street is located. And when it comes to assistive AI tools like ChatGPT, The street might not be where you think it is. In a recent survey published by Boston Consulting Group in August of 2024, they showed where the usage of ChatGPT was the highest.

It’s hard to visually describe a chart, obviously, but at the top of the scale we saw countries like India, Morocco, Argentina, Brazil, and Indonesia. English-speaking countries like the US, Australia, and the UK were much further down the chart. The countries where ChatGPT is finding the most adoption are countries where English is not the primary language.

They’re in the global south, countries with large populations that have also had to deal with centuries of exploitation. And that isn’t to say that the citizens of these countries don’t have concerns, they do, but they’re using it as an assistive technology. They’re using it for translation, to remove barriers and to help reduce friction, and to customize their own experience. And these are just a fraction of the stories that are out there. 

So there are positive use cases for AI, which may seem to directly contradict various denialist arguments that are trying to gaslight you into believing that there is no good use for AI. This is obviously false.

If the positive view, the use on the street, is being found by persons with disabilities, it follows that the denialist view is ableist. If the positive view, that use on the street, is being found by persons of color, non English speakers, persons in the global south, then the denialist view will carry all those elements of oppression, racism, and colonialism with it.

If the use on the street is by those who find their creativity unlocked by the new tools, who are finally able to express themselves where previously they may have struggled with a medium or been gatekept from an education in the arts or poetry or English or what have you, only to now be told that this isn’t art or this doesn’t count despite all evidence to the contrary, then there are massive elements of class and bias that go into that as well.

So let’s be clear. An empathetic view of technology recognizes that there are positive use cases for AI. These are being found on the street by persons with disabilities, persons of the global south, non english speakers, and persons across the class spectrum. To deny this is to deny objective reality.

It’s to deny all these groups their actual uses of the technology. Are there problems? Yes, absolutely. Are there bad actors that may use the technology for nefarious means? Of course, this happens on a regular basis, and we’ll put a pin in that and return to that in a few moments, but to deny that there are no good uses is to deny the experience of all these groups that are finding uses for it, and we’re starting to see that when this denialism is pointed out, it’s causing a great degree of controversy.

In a statement made early in September of 2024, NaNoWriMo, the non-profit organization behind National Novel Writing Month, said it was acceptable to use AI as an assistive technology when writers were working on their pieces for NaNoWriMo, because this supports their mission, which is to, quote, “provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds, on and off the page.” End quote.

But what drew the opprobrium of the online community is that they noted that some of the objections to the use of AI tools are classist and ableist. And, as we noted, they weren’t wrong. For all the reasons we just explained and more. But, due to the online uproar, they’ve walked that back somewhat.

I’ll link to the updated statement in the show notes. The thing is, if you believe that using AI for something like NaNoWriMo is against the spirit of things, that’s your decision. They’ve clearly stated that they feel assistive technologies can help people pursuing their dreams. And if you have concerns that they’re going to take stuff that’s put into the official app and sell it off to an LLM or AI company, well, that’s a discussion you need to have with NaNoWriMo, the nonprofit.

You’re still not barred from doing something like NaNoWriMo using Notepad or Obsidian or however else you take your notes, but that’s your call. I, for one, was glad to see that NaNoWriMo called it out. One of the things that I found, both in my personal life and in my research when I was working on the PhD and looking at Tikkun Olam Makers, is that it can be incredibly difficult and expensive for persons with disabilities to find a tool that can meet their needs, if it exists at all. So if you’re wondering where I come down on this, I’m on the side of the persons in need. We’re on the side of the streets. You might say we’re streets ahead.

Of course, one of the uses that the street finds for things has always been art. Or at least work that eventually gets recognized as art. It took a long time for the world to recognize that the graffiti of a street artist might count, but in 2024, if one was to argue that Banksy wasn’t an artist, you’d get some funny looks.

There are several threads of debate surrounding AI art and generative art, including the role of creativity, the provenance of the materials, and the ethics of using the tools, but the primary question is: what counts? What counts as art, and who decides that it counts? That’s the point we’re really raising with that question, and obviously it ties back to what we were talking about last episode with Soylent Culture, and before that when we were talking about the recently deceased Fredric Jameson as well.

In his work Nostalgia for the Present from 1989, Jameson mentioned this with respect to television. He said, Quote, “At the time, however, it was high culture in the 1950s who was authorized, as it still is, to pass judgment on reality, to say what real life is and what is mere appearance. And it is by leaving out, by ignoring, by passing over in silence and with the repugnance one may feel for the dreary stereotypes of television series, that high art palpably issues its judgments.” end quote. 

Now, high art in scare quotes isn’t issuing anything, obviously; Jameson’s reifying the term. But what Jameson is getting at is that there are stakes for those involved in what does and does not count. And we talked about this last episode, where it took a long time for various forms of new media to finally be accepted as art on their own terms.

For some, it takes longer than others. I mean, Jameson was talking about television in the 1980s, for something that had already existed for decades at that point. And even then, it wasn’t until the 90s and 2000s, to the eras of Oz and The Sopranos and Breaking Bad and Mad Men and the quote unquote “golden age of television” that it really began to be recognized and accepted as art on its own terms.

Television was seen as disposable ephemera for decades upon decades. There’s a lot of work that goes on on behalf of high art by those invested in it to valorize it and ensure that it maintains its position. This is why we see one of the critiques about A. I. art being that it lacks creativity, that it is simply theft.

As if the provenance of the materials that get used in the creation of art suddenly matter on whether it counts or not. It would be as if the conditions in the mines of Afghanistan for the lapis lazuli that was crushed to make the ultramarine used by Vermeer had a material impact on whether his painting counted as art. Or if the gold and jewels that went into the creation of the Fabergé eggs and were subsequently gifted to the Russian royal family mattered as to whether those count. It’s a nonsense argument. It makes no sense. And it’s completely orthogonal to the question of whether these works count as art.

And similarly, when people say that good artists borrow, great artists steal, well, we’ll concede that Picasso might have known a thing or two about art, but where exactly are they stealing it from? The artists aren’t exactly tippy-toeing into the art gallery and yoinking it off the walls now, are they?

No, they’re stealing it from memory, from their experience of that thing, and the memory is the key. Here, I’ll share a quote. “Art consists in bringing the memory of things past to the surface. But the author is not a passéist. He is a link to history, to memory, which is linked to the common dream.” This is of course a quote by Saul Bellow, talking about his field, literature, and while I know nowadays not as many people are familiar with his work, if you’re at a computer while you’re listening to this, it might be worth looking him up.

Are we back? Awesome. Alright, so what the Nobel Prize Laureate and Pulitzer Prize winner Saul Bellow was getting at is that art is an act of memory, and we’ve been going in depth into memory in the last three episodes. And the artist can only work with what they have access to, what they’ve experienced during the course of their lifetime.

The more they’ve experienced, the more they can draw on and put into their art. And this is where the AI art tools come in as an assistive technology, because they have access to much, much more than a human being can experience: potentially anything that has been stored and put into the database. The creator accessing that tool will have access to everything, all the memory scanned and stored within it as well.

And so the act of art becomes one of curation, of deciding what to put forth. AI art is a digital art form, or at least everything that’s been produced to date is. So how does that differ? Well, let me give you an example. If I reach over to my paint shelf and grab an ultramarine paint, a cheap Daler Rowney acrylic ink, it’s right there with all the other colors available to me on my paint shelf.

But, back in the day, if we were looking for a specific blue paint, an ultramarine, it would be made with lapis lazuli, like the stuff that Vermeer was looking for. It would be incredibly expensive, and so the artist would be limited in their selection to the paints that they had available to them, or be limited in the amount that they could actually paint within a given year.

And sometimes the cost would be exorbitant. For some paints, it still actually is, but a digital artist working on an iPad or a Wacom tablet or whatever would have access to a nigh unlimited range of colors. And so the only choice and selection for that artist is by deciding what’s right for the piece that they’re doing.

The digital artist is not working with a limited palette of, you know, a dozen paints or whatever they happen to have on hand. It’s a different kind of thing entirely. The digital artist has a much wider range of things to choose from, but it still requires skill. You know, conceptualization, composition, planning, visualization.

There’s still artistry involved. It’s no less art, but it’s a different kind of art. But one that already exists today and one that’s already existed for hundreds of years. And because of a banger that just got dropped in the last couple of weeks, it might be eligible for a Grammy next year. It’s an allographic art.

And if you’re going to try and tell me that Mozart isn’t an artist, I’m going to have a hard time believing you.

Allographic art is a concept originally introduced by Nelson Goodman back in the 60s and 70s. Goodman is kind of like Gordon Freeman, except, you know, not a particle physicist. He was a mathematician and aesthetician, or sorry, a philosopher interested in aesthetics, not an esthetician as we normally use the term now, which has a bit of a different meaning and is a reminder that I probably need to book a pedicure.

Goodman was interested in the question of what’s the difference between a painting and a symphony, and it rests on the idea of uniqueness versus forgery. A painting, especially an oil painting, can be forged, but it relies on the strokes and the process and the materials that went into it, so you need to basically replicate the entire thing in order to make an accurate forgery, much like Pierre Menard trying to reproduce Cervantes’ Quixote in the Jorge Luis Borges short story.

Whereas a symphony, or any song really, that is performed based off of a score, a notational system, is simply going to be a reproduction of that thing. And this is basically what Walter Benjamin was getting at when he was talking about art in the age of mechanical reproduction, too, right? So, a work that’s based off of a notational system can still count as a work of art.

Like, no one’s going to argue that a symphony doesn’t count as art, or that Mozart wasn’t an artist. And we can extend that to other forms of art that use a notational system as well. Like, I don’t know, architecture. Frank Lloyd Wright didn’t personally build Fallingwater or the Guggenheim, but he created the plans for them, right?

And those were enacted. We can say that, yeah, there’s artistic value there. So these things, composition, architecture, et cetera, are allographic arts, as opposed to autographic arts, things like painting or sculpture, or in some instances the performance of an allographic work. If I go to see an orchestra playing a symphony, a work based off of a score, I’m not saying that I’m not engaged with art.

And this brings us back to the AI Art question, because one of the arguments you often see against it is that it’s just, you know, typing in some prompts to a computer and then poof, getting some results back. At a very high level, this is an approximation of what’s going on, but it kind of misses some of the finer points, right?

When we look at notational systems, we could have a very, you know, simple set of notes that are there, or we could have a very complex one. We could be looking at the score for Chopsticks or Twinkle Twinkle Little Star, or a long lost piece by Mozart called Serenade in C Major that he wrote when he was a teenager and has finally come to light.

This is an allographic art, and the fact that it can be produced and played 250 years later kind of proves the point. But that difference between simplicity and complexity is part of the key. When we look at the prompts that are input into a computer, we rarely see something with the complexity of say a Mozart.

As we increase the complexity of what we’re putting into one of the generative AI tools, we increase the complexity of what we get back as well. And this is not to suggest that the current AI artists are operating at the level of Mozart either. Some of the earliest notational music we have is found on ancient cuneiform tablets called the Hurrian Hymns, dating back to about 1400 BCE, so it took us a little over 3000 years to get to the level of Mozart in the 1700s.

We can give the AI artists a little bit of time to practice. The generative AI art tools, which are very much in their infancy, appear to be allographic arts, following in the lineage of procedurally generated art, which has been around for a little while longer. And as an art form in its infancy, there’s still a lot of contested areas.

Whether it counts, the provenance of materials, ethics of where it’s used, all of those things are coming into question. But we’re not going to say that it’s not art, right? And as an art, as work conducted in a new medium, we have certain responsibilities for documenting its use, its procedures, how it’s created.

In the introduction to 2001’s The Language of New Media, Lev Manovich, in talking about the creation of a new media, digital media in this case, noted how there was a lost opportunity in the late 19th and early 20th century with the creation of cinema. Quote, “I wish that someone in 1895, 1897, or at least 1903 had realized the fundamental significance of the emergence of the new medium of cinema and produced a comprehensive record.

Interviews with audiences, systematic account of narrative strategies, scenography, and camera positions as they developed year by year. An analysis of the connections between the emerging language of cinema and different forms of popular entertainment that coexisted with it. Unfortunately, such records do not exist.

Instead, we are left with newspaper reports, diaries of cinema’s inventors, programs of film showings, and other bits and pieces. A set of random and unevenly distributed historical samples. Today, we are witnessing the emergence of a new medium, the meta medium of the digital computer. In contrast to a hundred years ago, when cinema was coming into being, we are fully aware of the significance of this new media revolution.

Yet I am afraid that future theorists and historians of computer media will be left with not much more than the equivalents of the newspaper reports and film programs from cinema’s first decades.” End quote. 

Manovich goes on to note that a lot of the work that was being done on computers, especially in the 90s, was stuff prognosticating about its future uses, rather than documenting what was actually going on.

And this is the risk that the denialist framing of AI art puts us in. By not recognizing that something new is going on, that art is being created, and allographic art at that, we lose the opportunity to document it for the future.

And as with art, so too with science. We’ve long noted that there’s an incredible amount of creativity that goes into scientific research, that the STEM fields, science, technology, engineering, and mathematics, require and benefit so much from the arts that they’d be better classified as STEAM, and a small side effect of that may mean that we see better funding for the arts at the university level.

But I digress. In the examples I gave earlier of medical research, of AI being used as an assistive technology, we were seeing some real groundbreaking developments, with boundaries being pushed, and we’re seeing that throughout the science fields. Part of this is because of what AI does well with things like pattern recognition, allowing weather forecasts, for example, to be predicted more quickly and accurately.

It’s also been able to provide more assistance with medical diagnostics and imaging as well. The massive growth in the number of AI related projects in recent years is often due to the fact that a number of these projects are just rebranded machine learning or deep learning. In a report released by the Royal Society in England in May of 2024 as part of their Disruptive Technology for Research project, they note how, quote, “AI is a broad term covering all efforts aiming to replicate and extend human capabilities for intelligence and reasoning in machines.”

End quote. They go on further to state that, quote, “Since the founding of the AI field at the 1956 Dartmouth Summer Research Project on Artificial Intelligence, many different techniques have been invented and studied in pursuit of this goal. Many of these techniques have developed into their own subfields within computer science, such as expert systems and symbolic reasoning,” end quote. 

And they note how the rise of the big data paradigm has made machine learning and deep learning techniques a lot more affordable and accessible, and scalable too. And all of this has contributed to the amount of stuff that’s floating around out there that’s branded as AI. Despite this confusion in branding and nomenclature, AI is starting to contribute to basic science.

A New York Times article published in July by Siobhan Roberts talked about how a couple of AI models were able to compete at the level of a silver medalist at the recent International Mathematical Olympiad. And this is the first time that an AI model has medaled at that competition. So there may be a role for AI to assist even high-level mathematicians, to function as collaborators and, again, assistive technologies there.

And we can see this in science more broadly. In a paper submitted to arXiv.org in August of 2024, titled ‘The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery’, authors Lu et al. use a frontier large language model to perform research independently. Quote, “We introduce the AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a scientific paper, and then runs a simulated review process for evaluation,” end quote.

So, a lot of this is scripts and bots and hooking into other AI tools in order to simulate the entire scientific process. And I can’t speak to the veracity of the results that they’re producing in the fields that they’ve chosen. They state that their system can, quote, “produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer,” end quote.

And that’s fine, but it shows that the process of doing the science can be assisted in various realms as well. And one of those areas of assistance is in providing help for stuff outside the scope of knowledge of a given researcher. AI as an aid in creativity can help explore the design space and allow for the combination of new ideas outside of everything we know.

As science is increasingly interdisciplinary, we need to be able to bring in more material, more knowledge, and that can be done through collaboration, but here we have a tool that can assist us as well. As we talked about with Nescience and Excession a few episodes ago, we don’t know everything. There’s more than we can possibly know, so the AI tools help expand the field of what’s available to us.

We don’t necessarily know where new ideas are going to come from. And if you don’t believe me on this, let me reach out to another scientist who said some words on this back in 1980. Quote, “We do not know beforehand where fundamental insights will arise from about our mysterious and lovely solar system.

And the history of our study of the solar system shows clearly that accepted and conventional ideas are often wrong, and that fundamental insights can arise from the most unexpected sources.” End quote. That, of course, is Carl Sagan, from an October 1980 episode of Cosmos: A Personal Voyage, titled Heaven and Hell, where he talks about the Velikovsky Affair.

I haven’t spliced in the original audio because I’m not looking to grab a copyright strike, but it’s out there if you want to look for it. And what Sagan is describing there is basically the process by which a Kuhnian paradigm shift takes place. Sagan is speaking to the need to reach beyond ourselves, especially in the fields of science, and the AI assisted research tools can help us with that.

And not just in the conduct of the research, but also in the writing and dissemination of it. Not all scientists are strong or comfortable writers or speakers, and many of them come to English as a second, third, or even fourth language. And the role of AI tools as translation devices means we have more people able to communicate and share their ideas and participate in the pursuit of knowledge.

This is not to say that everything is rosy. Are there valid concerns when it comes to AI? Absolutely. Yes. We talked about a few at the outset and we’ve documented a number of them throughout the run of this podcast. One of our primary concerns is the role of the AI tools in échanger, that replacement effect that happens that leads to technological unemployment.

Much of the initial hype and furor around the AI tools was people recognizing that potential for échanger following the initial public release of ChatGPT. There’s also concerns about the degree to which the AI tools may be used as instruments of control, and how they can contribute to what Gilles Deleuze calls a control society, which we talked about in our Reflections episode last year. 

And related to that is the lack of transparency, the degree to which the AI tools are black boxes, where based on a given set of inputs, we’re not necessarily sure about how it came up with the outputs. And this is a challenge regardless of whether it’s a hardware device or a software tool.

And regardless of how the AI tool is deployed, its increased prevalence means we’re headed towards a soylent culture, with an increased amount of data smog, or bitslop, or however you want to refer to the digital pollution that comes with more and more AI content in our channels and For-You-Feeds. And this is likely to become even more heightened as Facebook moves to pushing AI generated posts into the timelines.

Many are speculating that this is becoming so prevalent that the internet is largely bots pushing out AI generated content, what’s called the “Dead Internet Theory”, which we’ll definitely have to take a look at in a future episode. Hint: the internet is alive and well, it’s just not necessarily where you think it is.

And with all this AI generated content, we’re still facing the risk of the hallucinations, which we talked about, holy moly, over two years ago when we discussed Loab, that brief little bit of creepypasta that was making the rounds as people were trying out the new digital tools. But the hallucinations still highlight one of the primary issues with the AI tools, and that’s the errors in the results.

In order to document and collate these issues, a research team over at MIT has created the AI Risk Repository. It’s available at airisk.mit.edu. Here they have created taxonomies of the causes and domains where the risks may take place. However, not all of these risks are equal. One of the primary ones that gets mentioned is the energy usage for AI.

And while it’s not insignificant, I think it needs to be looked at in context. One estimate of global data center usage was between 240 and 340 terawatt hours, which is a lot of energy, and it might be rising as data center usage for the big players like Microsoft and Google has gone up by like 30 percent since 2022.

And that still might be too low, as one report noted that the actual estimate could be as much as 600 percent higher. So when you put that all together, that initial estimate could be anywhere between a thousand and 2,000 terawatt hours. But the AI tools are only a fraction of what goes on at the data centers, which include cloud storage and services, streaming video, gaming, social media, and other high volume activities.

So you bring that number right back down. And how much is AI actually using? The thing is, whatever that number is – 300 terawatt hours, times 1.3, times six, divided by five – whatever that result ends up being doesn’t even chart when looking at global energy usage. Looking at a recent chart on global primary energy consumption by source over at Our World in Data, we see that worldwide consumption in 2023 was 180,000 terawatt hours.
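Since we’re doing that arithmetic out loud, here’s a minimal sketch of the back-of-the-envelope calculation in Python. Every figure and multiplier here is one of the rough, illustrative numbers from this episode (a 300 terawatt hour mid-range, 30 percent growth, a possible sixfold undercount, and a one-fifth guess at AI’s share of data center load), not a measured value:

```python
# Back-of-the-envelope sketch of the episode's AI energy estimate.
# All figures are rough, illustrative numbers, not measurements.

data_center_twh = 300    # mid-range estimate of global data center use (TWh/year)
growth_factor = 1.3      # roughly 30% growth since 2022
undercount_factor = 6    # reports suggest estimates could be several times too low
ai_share = 1 / 5         # rough guess: AI is only a fraction of data center load

ai_twh = data_center_twh * growth_factor * undercount_factor * ai_share

global_twh = 180_000     # global primary energy consumption, 2023 (Our World in Data)
share = ai_twh / global_twh

# roughly 468 TWh/year, or about 0.26% of global energy use
print(f"AI estimate: {ai_twh:.0f} TWh/year, {share:.2%} of global energy use")
```

However you juggle those multipliers, the result lands in the hundreds of terawatt hours, a fraction of a percent of the 180,000 terawatt hours the world consumes, which is the scale point being made here.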

The amount of energy potentially used by AI hardly registers as a pixel on the screen compared to worldwide energy usage, yet we’re presented with a picture in the media where AI is burning up the planet. I’m not saying AI energy usage isn’t a concern. It should be green and renewable. And it needs to be verifiable, this energy usage of the AI companies, as there is the risk of greenwashing the work that is done, of painting over the true energy costs of their activities by highlighting their positive impacts for the environment.

And the energy usage may be far exceeded by the water usage that’s used for the cooling of the data centers. And as with the energy usage, the amount of water that’s actually going to AI is incredibly hard to dissociate from all the other activities that are taking place in these data centers. And this greenwashing, which various industries have long been accused of, might show up in another form as well.

There is always the possibility that the helpful stories that get presented, of what AI tools have provided for various at-risk and minority populations, are a form of “aidwashing”. And this is something we have to evaluate for each of the stories posted in the AI Positivity Archive. Now, I can’t say for sure that “aidwashing” specifically exists as a term.

A couple searches didn’t return any hits, so you may have heard it here first. However, while positive stories about AI often do get touted, do we think this is the driving motivation for the massive investment we’re seeing in the AI technologies? No, not even for a second. These assistive uses of AI don’t really work with the value proposition for the industry, even though those street uses of technology may point the way forward in resolving some of the larger issues for AI tools with respect to resource consumption and energy usage.

The AI tools used to assist Casey Harrell, the ALS patient mentioned near the beginning of the show, use a significantly smaller model than the ones conventionally available, like those found in ChatGPT. The future of AI may be small, personalized, and local, but again, that doesn’t fit with the value proposition. 

And that value proposition is coming under increased scrutiny. In a report published by Goldman Sachs on June 25th, 2024, they question if there’s enough benefit for all the money that’s being poured into the field. In a series of interviews with a number of experts in the field, they note how initial estimates of the cost savings, the complexity of tasks that AI is able to do, and the productivity gains that would derive from it are all much lower than initially proposed, or are happening on a much longer time frame.

In it, MIT professor Daron Acemoglu forecasts minimal productivity and GDP growth, around 0.5 percent or 1 percent, whereas Goldman Sachs’ own predictions were closer to a 9 percent and 6 percent increase. With such widely varying estimates, what the actual impact of AI in the next 10 years will be is anybody’s guess.

It could be at either extreme or somewhere in between. But the main takeaway from this is that even Goldman Sachs is starting to look at the balance sheet and question the amount of money that’s being invested in AI. And that amount of money is quite large indeed. 

Between starting to record this podcast episode and finishing it, OpenAI raised 6.6 billion dollars in a funding round from its investors, including Microsoft and Nvidia, the largest ever recorded. As reported by Reuters, this could value the company at 157 billion dollars and make it one of the most valuable private companies in the world. And this coincides with the recent restructuring from a week earlier, which would remove the non-profit’s control and see it move to a for-profit business model.

But my final question is, would this even work? Because it seems diametrically opposed to what AI might actually bring about. If assistive technology is focused on automation and échanger, then the end result may be something closer to what Aaron Bastani calls “fully automated luxury communism”, where the future is a post-scarcity environment that’s much closer to Star Trek than it is to Snow Crash.

How do you make that work when you’re focused on a for profit model? The tool that you’re using is not designed to do what you’re trying to make it do. Remember, “The street finds its own uses for things”, though in this case that street might be Wall Street. The investors and forecasters at Goldman Sachs are recognizing that disconnect by looking at the charts and tables in the balance sheet.

But their disconnect, the part that they’re missing, is that the driving force towards AI may be one more of ideology. And that ideology is the California ideology, a term that’s been floating around since at least the mid 1990s. And we’ll take a look at it next episode and return to the works of Lev Manovich, as well as Richard Barbrook, Andy Cameron, and Adrian Daub, as well as a recent post by Sam Altman titled ‘The Intelligence Age’.

There’s definitely a lot more going on behind the scenes.

Once again, thank you for joining us on the Implausipod. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausipod.com. And you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music.

And perhaps somewhat surprisingly, given the topic of our episode, no AI is used in the production of this podcast, though I think some machine learning goes into the transcription service that we use. And the show is licensed under a Creative Commons 4.0 Share-Alike license. You may have noticed at the beginning of the show that we described the show as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated.

You may have also noted that there was no advertising during the program and there’s no cost associated with the show. But it does grow from word of mouth of the community, so if you enjoy the show, please share it with a friend or two and pass it along. There’s also a Buy Me a Coffee link on each show at implausipod.com, which will go to any hosting costs associated with the show. I’ve put a bit of a hold on the blog and the newsletter, as WordPress is turning into a bit of a dumpster fire and I need to figure out how to re-host it. But the material is still up there, I own the domain. It’ll just probably look a little bit more basic soon.

Join us next time as we explore that Californian ideology, and then we’ll be asking, who are Roads for? And do a deeper dive into how we model the world. Until next time, take care and have fun.



Bibliography

A bottle of water per email: The hidden environmental costs of using AI chatbots. (2024, September 18). Washington Post. https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/

A Note to Our Community About our Comments on AI – September 2024 | NaNoWriMo. (n.d.). Retrieved October 5, 2024, from https://nanowrimo.org/a-note-to-our-community-about-our-comments-on-ai-september-2024/

Advances in Brain-Computer Interface Technology Help One Man Find His Voice | The ALS Association. (n.d.). Retrieved October 5, 2024, from https://www.als.org/blog/advances-brain-computer-interface-technology-help-one-man-find-his-voice

Balevic, K. (n.d.). Goldman Sachs says the return on investment for AI might be disappointing. Business Insider. Retrieved October 5, 2024, from https://www.businessinsider.com/ai-return-investment-disappointing-goldman-sachs-report-2024-6

Broad, W. J. (2024, July 29). Artificial Intelligence Gives Weather Forecasters a New Edge. The New York Times. https://www.nytimes.com/interactive/2024/07/29/science/ai-weather-forecast-hurricane.html

Card, N. S., Wairagkar, M., Iacobacci, C., Hou, X., Singer-Clark, T., Willett, F. R., Kunz, E. M., Fan, C., Nia, M. V., Deo, D. R., Srinivasan, A., Choi, E. Y., Glasser, M. F., Hochberg, L. R., Henderson, J. M., Shahlaie, K., Stavisky, S. D., & Brandman, D. M. (2024). An Accurate and Rapidly Calibrating Speech Neuroprosthesis. New England Journal of Medicine, 391(7), 609–618. https://doi.org/10.1056/NEJMoa2314132

Consumers Know More About AI Than Business Leaders Think. (2024, April 8). BCG Global. https://www.bcg.com/publications/2024/consumers-know-more-about-ai-than-businesses-think

Cosmos. (1980, September 28). [Documentary]. KCET, Carl Sagan Productions, British Broadcasting Corporation (BBC).

Donna. (2023, October 9). Banksy Replaced by a Robot: A Thought-Provoking Commentary on the Role of Technology in our World, London 2023. GraffitiStreet. https://www.graffitistreet.com/banksy-replaced-by-a-robot-a-thought-provoking-commentary-on-the-role-of-technology-in-our-world-london-2023/

Gen AI: Too much spend, too little benefit? (n.d.). Retrieved October 5, 2024, from https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit

Goodman, N. (1976). Languages of Art (2nd edition). Hackett Publishing Company, Inc.

Goodman, N. (1978). Ways Of Worldmaking. http://archive.org/details/GoodmanWaysOfWorldmaking

Hill, L. W. (2024, September 11). Inside the Heated Controversy That’s Tearing a Writing Community Apart. Slate. https://slate.com/technology/2024/09/national-novel-writing-month-ai-bots-controversy.html

Hu, K. (2024, October 3). OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia. Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-closes-66-billion-funding-haul-valuation-157-billion-with-investment-2024-10-02/

Hu, K., & Cai, K. (2024, September 26). Exclusive: OpenAI to remove non-profit control and give Sam Altman equity. Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/

Knight, W. (n.d.). An ‘AI Scientist’ Is Inventing and Running Its Own Experiments. Wired. Retrieved September 9, 2024, from https://www.wired.com/story/ai-scientist-ubc-lab/

LaBossiere, M. (n.d.). AI: I Want a Banksy vs I Want a Picture of a Dragon. Retrieved October 5, 2024, from https://aphilosopher.drmcl.com/2024/04/01/ai-i-want-a-banksy-vs-i-want-a-picture-of-a-dragon/

Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., & Ha, D. (2024, August 12). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. arXiv.Org. https://arxiv.org/abs/2408.06292v3

Manovich, L. (2001). The language of new media. MIT Press.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Mickle, T. (2024, September 23). Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm. The New York Times. https://www.nytimes.com/2024/09/23/technology/ai-jim-covello-goldman-sachs.html

Milman, O. (2024, March 7). AI likely to increase energy use and accelerate climate misinformation – report. The Guardian. https://www.theguardian.com/technology/2024/mar/07/ai-climate-change-energy-disinformation-report

Mueller, B. (2024, August 14). A.L.S. Stole His Voice. A.I. Retrieved It. The New York Times. https://www.nytimes.com/2024/08/14/health/als-ai-brain-implants.html

Overview and key findings – World Energy Investment 2024 – Analysis. (n.d.). IEA. Retrieved October 5, 2024, from https://www.iea.org/reports/world-energy-investment-2024/overview-and-key-findings

Roberts, S. (2024, July 25). Move Over, Mathematicians, Here Comes AlphaProof. The New York Times. https://www.nytimes.com/2024/07/25/science/ai-math-alphaproof-deepmind.html

Schacter, R. (2024, August 18). How does Banksy feel about the destruction of his art? He may well be cheering. The Guardian. https://www.theguardian.com/commentisfree/article/2024/aug/18/banksy-art-destruction-graffiti-street-art

Science in the age of AI | Royal Society. (n.d.). Retrieved October 2, 2024, from https://royalsociety.org/news-resources/projects/science-in-the-age-of-ai/

Sullivan, S. (2024, September 25). New Mozart Song Released 200 Years Later—How It Was Found. Woman’s World. https://www.womansworld.com/entertainment/music/new-mozart-song-released-200-yaers-later-how-it-was-found

Taylor, C. (2024, September 3). How much is AI hurting the planet? Big tech won’t tell us. Mashable. https://mashable.com/article/ai-environment-energy

The AI Risk Repository. (n.d.). Retrieved October 5, 2024, from https://airisk.mit.edu/

The Intelligence Age. (2024, September 23). https://ia.samaltman.com/

What is NaNoWriMo’s position on Artificial Intelligence (AI)? (2024, September 2). National Novel Writing Month. https://nanowrimo.zendesk.com/hc/en-us/articles/29933455931412-What-is-NaNoWriMo-s-position-on-Artificial-Intelligence-AI

Wickelgren, I. (n.d.). Brain-to-Speech Tech Good Enough for Everyday Use Debuts in a Man with ALS. Scientific American. Retrieved October 5, 2024, from https://www.scientificamerican.com/article/brain-to-speech-tech-good-enough-for-everyday-use-debuts-in-a-man-with-als/

Soylent Culture

(this was originally published as Implausipod Episode 37 on September 22nd, 2024)

https://www.implausipod.com/1935232/episodes/15791252-e0037-soylent-culture

What is Soylent Culture? Whether it is in the mass media, the new media, or the media consumed by the current crop of generative AI tools, it is culture that has been fed on itself. But of course, there’s more. Have a listen to find out how Soylent Culture is driving the potential for “Model Collapse” with our AI tools, and what that might mean.


In 1964, Canadian media theorist Marshall McLuhan published his work Understanding Media, The Extensions of Man. In it, he described how the content of any new medium is that of an older medium. This can help make it stronger and more intense. Quote, “The content of a movie is a novel, or a play, or an opera.

The effect of the movie form is not related to its programmed content. The content of writing or print is speech, but the reader is almost entirely unaware either of print or of speech.” End quote. 

60 years later, in 2024, this is the promise of the generative AI tools that are spreading rapidly throughout society, and it’s the end result of 30 years of new media, which has seen the digitization of anything and everything that provides some form of content on the internet.

Our culture has been built on these successive waves of media, but what happens when there’s nothing left to feed the next wave? It begins to feed on itself, which is why we live now in an era of soylent culture.

Welcome to the Implausipod, an academic podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and in this episode, we’re going to draw together some threads we’ve been collecting for over a year and weave them together into a tapestry that describes our current age, an era of soylent culture.

And way back in episode 8, when we introduced you to the idea of the audience commodity, where media companies’ real product isn’t the shiny stuff on screen, but rather the audiences that they can serve up to the advertisers, we noted how Reddit and Twitter were in a bit of a bind, because other companies had come in and slurped up all the user generated content that was so fundamental to Web 2.0, and fundamental to their business model as well, as they were still in that old model of courting the business of advertisers. 

And all that UGC – the useless byproduct of having people chat online in a community, that they’d serve up to those advertisers – got tossed into the wood chipper, mixed with a little bit of glue and paint, and then sold back to you as shiny new furniture, just like IKEA.

And this is what the AI companies are doing. We’ve been talking about this a little bit off and on, and since then, Reddit and Twitter have both gone all in on leveraging their own resources, and either creating their own AI models, like the Grok model, or at least licensing and selling it to other LLMs.

In episode 16, we looked a little bit more at that Web 2.0 idea of spreadable media and how the atomization of culture actually took place, how the encouragement of that user generated content by the developers and platform owners produced the very material that’s now feeding the AI models. And finally, there was our look at nostalgia over the past two episodes, starting with the Dial-up Pastorale and that wistful approach to an earlier internet, one that never actually existed.

All of these point towards the existence of Soylent Culture. What I’m saying is that it’s been a long time coming. The atomization of culture into its component parts, the reduction of soundbites to TikToks to Vines, the meme-ification of culture in general, were all evidence of this happening.

This isn’t inherently a bad thing. We’re not ascribing some kind of value to this. We’re just describing how culture was reduced to its bare essentials as even smaller bits were carved off of the mass audience to draw the attention of even smaller and smaller niche audiences that could be catered to.

And a lot of this is because culture is inherently memetic. That’s memetic as in memes, not mimetic as in mimesis, though the latter applies as well. But when I say that culture is memetic, I want to build on more than just Dawkins’s original formulation of the idea of a meme to describe a unit of cultural transmission.

Because, honestly, the whole field of anthropology was sitting right over there when he came up with it. A memetic form of culture allows for the combination and recombination of various cultural components in the pursuit of novelty, and this can lead to innovation in the arts and the aesthetic dimension.

In the digital era, we’ve been presented with a new medium. Well, several perhaps, but the underlying logic of the digital media – the reduction of everything to bits, to ones and zeros that allow for the mass storage and fast transmission of everything anywhere, where the limiting factors are starting to boil down to fundamental laws of physics – 

this commonality can be found across all the digital arts, whether it’s in images, audio, video, gaming. Anything that’s appearing on your computer or on your phone has this underlying logic to it. And when a new medium presents itself due to changing technology, the first forays into that new medium will often be adaptations or translations of work done in an earlier form.

As noted by Marshall McLuhan at the beginning of this episode, it can take a while for new media to come into its own. It’ll be grasped by the masses as popular entertainment and derided by the high arts, or at least by those who are fans of them. Fredric Jameson, who we talked about a whole lot last episode on nostalgia, noted, quote, “it was high culture in the fifties that was authorized, as it still is, to pass judgment on reality, to say what real life is and what is mere appearance. And it is by leaving out, by ignoring, by passing over in silence and with the repugnance one may feel for the dreary stereotypes of television series that high art palpably issues its judgment.” End quote. 

So, the new medium, or works that are done in the new medium, can often feel derivative as it copies stories of old, retelling them in a new way.

But over time, what we see happen again and again is that fresh stories start to be told by those familiar with the medium, creators who can leverage its strengths and weaknesses, telling tales that reflect their own experiences, their own lives, and the lives of people living in the current age, not just reflections of earlier tales.

And eventually, the new medium finds acceptance, but it can take a little while.

So let me ask you, how long does it take for a new medium to be accepted as art? First they said radio wasn’t art, and then we got War of the Worlds. They said comic books weren’t art, and then we got Maus, and Watchmen, and Dark Knight Returns. They said rock and roll wasn’t art, and we got Dark Side of the Moon and Pet Sounds, Sgt. Pepper’s, and many, many others. They said films weren’t art, and we got Citizen Kane. They said video games weren’t art, and we got Final Fantasy VII and Myst and Breath of the Wild. They said TV wasn’t art, and we got Oz and Breaking Bad and Hannibal and The Wire. And now they’re telling us that AI generated art isn’t art, and I’m wondering how long it will take until they admit that they were wrong here, too.

Because even though it’s early days, I’ve seen and heard some AI generated art pieces that would absolutely count as art. There are pieces that produce an emotional effect, they evoke a response, whether it’s whimsy or wonder or sublime awe, and for all of these reasons, I think the AI generated art that I’ve seen or experienced counts.

And the point at which creators in a new medium produce something that counts as art often happens relatively early in the life cycle of that medium. In all of the examples I gave, things like War of the Worlds, Citizen Kane, Final Fantasy VII, these weren’t the first titles produced in that medium, but they did come about relatively early, once creators became accustomed to the cultural form.

As newer creators begin working with the medium, they can take it further, but there’s a risk. Creators who have grown up with the medium may become too familiar with the source material, drawing their representations from within the medium itself. And we can all think of examples of this, where writers on police procedurals or action movies have grown up watching police procedurals and action movies, and they simply endlessly repeat the tropes that are foundational to the genre.

The works become pastiches, parodies of themselves, often unintentionally, and they’re unable to escape from the weight of the tropes that they carry. This is especially evident in long running shows and franchises. Think of later seasons of The Simpsons, if you’ve actually watched recent seasons of The Simpsons, compared to the earlier ones.

Or recent seasons of Saturday Night Live, with the endlessly recycled bits, because we really needed another game show knock off, or a cringy community access parody. We can see it in later seasons of Doctor Who, and Star Trek, and Star Wars, and Pro Wrestling as well, and the granddaddy of them all, the soap opera.

This is what happens with normal culture when it is trained on itself. You get Soylent Culture. 

Soylent Culture is this: the self-referential culture that feeds on itself, an ouroboros of references that always point at something else. It is culture composed of rapid-fire clips coming at the audience faster than a Dennis Miller-era Saturday Night Live Weekend Update, or the speed of a Weird Al Yankovic polka medley.

It is 30 years of Simpsons Halloween episodes referring to the first 10 years of Simpsons Halloween episodes. It is hyper-referential titles like Family Guy and Deadpool, whether in print or film, throwing references at the audience rapid-fire with so little rhyme or reason that works like Ready Player One start to seem like the inevitable result of the form.

And I’m not suggesting that the above works aren’t creative. They’re prime examples of this cultural form, of Soylent Culture. But the endless demand for fresh material in an era of consumption culture means that the hyper-referentiality will soon exhaust itself and turn inward. This is where the nostalgia that we’ve been discussing for the previous couple episodes comes into play.

It’s a resource for mining, providing variations of previous works to spark a glimmer in the audience’s eyes of, hey, I recognize that. But even though these works are creative, they’re limited, they’re bound to previous, more popular titles, referring to art that was more widely accessible, more widely known.

They’re derivative works and they can’t come up with anything new, perhaps. 

And I say perhaps because there’s more out there than we can know. There’s more art that’s been created than we can possibly experience in a lifetime. There’s more stuff posted to YouTube in a minute than you’ll ever see in your 80 years on the planet.

And the rate at which that is happening is increasing. So, for anybody watching these hyper referential titles, if their first exposure to Faulkner is through Family Guy, or to Diogenes is through Deadpool, then so be it. Maybe their curiosity will inspire them to track that down, to check out the originals, to get a broader sense of the culture that they’re immersed in.

If they don’t get the joke, they might look around, wonder why the rest of the audience is laughing, and say, you know, maybe it’s a me thing. Maybe I need to learn more. And that’s all right. It can lead to an act of discovery: of somebody looking at other titles and curating them, bringing them together, developing their own sense of style, and working on that to create an aesthetic.

And that’s ultimately what it comes down to. Is art an act of learning and discovery and curation? Or is it an act of invention and generation and creation? Or are these all components of it? If an artist’s aesthetic is reliant on what they’ve experienced, well, then, as I’ve said, we’re finite, tiny creatures.

How many books or TV shows can you watch in a lifetime to incorporate into your experience? And if you repeatedly watch the same thing, are you limiting yourself from exposure to something new? And this is where the generative art tools come back into play: the AI tools that have been facilitated by the digitalization of everything during Web 1.0, and the subsequent slurping up of all of it to feed the models.

Because the AI tools expand the realm of what we have access to. They can draw from every movie ever made, or at least every movie ever digitized. Not just the two dozen titles that the video store clerk happened to watch on repeat while they were working on their script, before finally following through and getting it made.

In theory, the AI tools can aid the creativity of those engaging with them, and in practice we’re starting to see that as well. It comes back to that question of whether art is generative or whether it’s an act of discovery and curation. But there’s a catch. Like we said, Soylent Culture existed long before the AI art tools arrived on the scene.

The derivative stories of soap operas and police procedurals and comic books and pulp sci-fi. But it has become increasingly obvious that the AI tools facilitate Soylent Culture, drive it forward, and feed off of it even more. The AI tools are voracious, continually wanting more, needing fresh new material in order to increase the fidelity of the model.

That hallowed heart that drives the beast that continually hungers. But you see, the model is weak. It is vulnerable, like the phylactery of a lich hidden away somewhere deep.

The one thing the model can’t take too much of is itself. Model collapse is the very real risk of a GPT being trained on text generated by a large language model. Identified by Shumailov et al., it is, quote, “ubiquitous among all learned generative models,” end quote. Model collapse is a risk that creators of AI tools face in further developing those tools.

Quoting again from Shumailov: “model collapse is a degenerative process affecting generations of learned generative models in which the data they generate end up polluting the training set of the next generation. Being trained on polluted data, they then misperceive reality.” End quote. This model collapse can result in the models ‘forgetting’ or ‘hallucinating’.

Two terms drawn not just from psychology, but from our own long history of engaging with and thinking about our own minds and the minds of others. And we’re borrowing them here to apply to our AI tools, which – I want to be clear – aren’t thinking, but are the results of generative processes, of taking lots of things and putting them together in new ways, which is honestly what we do for art too.

But this ‘forgetting’ can be toxic to the models. It’s like a cybernetic prion disease, like the cattle that developed BSE by being fed feed that contained parts of other ground up cows that were sick with the disease. The burgeoning electronic minds of our AI tools cannot digest other generated content.
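To make that feedback loop concrete, here’s a toy sketch (a deliberately crude stand-in of my own devising, not the actual experiments from the Shumailov et al. papers): the “model” is nothing more than the empirical token distribution of its training corpus, and each generation trains only on the previous generation’s output. Tokens that fail to get sampled vanish and can never return, so the tails of the distribution erode away, one-way.

```python
import random
from collections import Counter

def train(corpus):
    # "Train" a toy model: just the empirical distribution of tokens.
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n, rng):
    # "Generate" a new corpus by sampling from the model.
    tokens = list(model)
    weights = [model[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=n)

rng = random.Random(0)
# Generation 0: "human" data with a rich vocabulary of 50 distinct tokens.
corpus = [f"tok{i}" for i in range(50)] * 4

vocab_sizes = []
for generation in range(20):
    vocab_sizes.append(len(set(corpus)))
    model = train(corpus)
    # Each new model trains only on the previous model's output.
    corpus = generate(model, 50, rng)

# A token the model fails to emit is gone forever: the vocabulary
# can only shrink, a toy version of the "forgetting" described above.
print(vocab_sizes[0], "->", vocab_sizes[-1])
```

Run it and the vocabulary shrinks generation over generation. Real model collapse is subtler than this, but the one-way loss of the distribution’s tails is the same basic mechanism.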

And in this era of Soylent Culture, where there’s a risk of model collapse, and where these incredibly expensive AI tools require mothballed nuclear reactors to be brought back online for power and thirst for fresh water like a marathon runner in the desert, the human-generated content of the earlier, pre-AI web becomes a much more valuable resource. It is the digital equivalent of the low-background steel that was sought after for the creation of precision instruments following the era of atmospheric nuclear testing, when all the above-ground and newly mined ore was too irradiated for such use.

And it should be noted that we’re no longer living in that era, because we stopped doing atmospheric nuclear testing. For some, the takeaway may be that to stop an era of Soylent Culture, we need to stop using these AI tools completely. But I think that would be the wrong takeaway, because Soylent Culture existed long before the AI tools, long before new media, as shown by the soap operas and the like.

It’s something that’s tied to mass culture in general, though new media and the AI tools can make Soylent Culture much, much worse, let me be clear. Despite this, despite the speed with which all this is happening, the research on model collapse is still in its early days. The long-term ramifications of model collapse will only be learned through time.

In the meantime, we can discuss some possible solutions to dealing with Soylent Culture. Both AI generated and otherwise. If Soylent Culture is art that’s fed on itself, then the most effective way to combat it would be to find new stuff. To find new things to tell stories about. To create new art about.

Historically, how has this happened with traditional art? Well, we’ve hinted at a few ways throughout this episode, even though, as we noted, in an era of mass culture, even traditional arts are not immune from becoming soylent culture as well. One of the ways we get those new artistic ideas is through mimesis, the observation of the world around us, and imitating that, putting it into artistic forms.

Another way we get new art is through soft innovation when technologies enhance or change the way that we can produce media and art, or where art inspires the development of new technology as they feed back and forth between each other, trading ideas. And as we’ve seen throughout this episode and throughout the podcast in general, new media and new modes of production can encourage new stories to be told as artists are dealing with their surroundings and whatever the current zeitgeist is and putting that into production with whatever media that they have available.

As our world and society and culture change, we’re going to reflect upon our current condition and tell tales about that to share with those around us. And as we noted much earlier in this particular episode, familiarity with a form, a technical form, allows those who are using it to innovate within it, creating new, more complex, better produced, and higher fidelity works in whatever medium they happen to be working in.

And ultimately that comes down to choice. By the artists and the audience and the associated industries that allow the audience to experience those works, whether they are audio, visual, tactile, experiential, like games, any version of art that we might come in contact with. The generation and invention in the process is important to be sure, but the curation and discovery is no less important within this process.

And this is where humans with a sense for aesthetic and style will still be able to tell the difference. How would an AI tool discover or create? How could it test something within its own loop? The generative AI tools can’t tell. They have no sense. They can provide output, but no aura, no discernment. Could an AI run a script that does A/B testing on an audience for each new generated piece of art to see how they react, with the most popular one getting put forward?

I guess so, it’s not outside the realm of possibility, but that isn’t really something that they’re able to do on their own, or at least I hope not. 
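As a thought experiment, that selection-only loop is easy to sketch. Everything here is hypothetical: the “artworks” are just numbers, the “audience” is a noisy preference function, and the generator is random jitter around last round’s winner.

```python
import random

def generate_candidates(seed_value, k, rng):
    # Hypothetical generator: each "artwork" is just a number,
    # a small random variation on the previous round's winner.
    return [seed_value + rng.gauss(0, 1) for _ in range(k)]

def audience_rating(artwork, rng):
    # Hypothetical audience: a noisy preference for values near 5.
    return -abs(artwork - 5) + rng.gauss(0, 0.1)

rng = random.Random(1)
seed_value = 0.0
for round_ in range(10):
    candidates = generate_candidates(seed_value, 8, rng)
    # A/B test: the best-received piece is "put forward" and
    # seeds the next round of generation.
    seed_value = max(candidates, key=lambda a: audience_rating(a, rng))

print(round(seed_value, 2))
```

Selection pressure alone drags the output toward whatever the metric rewards, which is exactly the worry: a loop like this optimizes for popularity, not for discernment.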

Would programming in some variance and randomness in the AI tools allow for them to avoid the model collapse that comes with ingesting soylent culture in much the same way that we saw with the reveries for the hosts in the Westworld TV series?

Well, the research by Shumailov et al that we mentioned earlier suggests that that’s possibly not the case. I mean, it might help with the variation, perhaps, but that doesn’t help with the selection mechanisms, the discernment. 

AI is a blind watch trying to become a watchmaker, making new watches. The question might be: what would an AI even want with a watch, anyways?

Thank you for joining us on the Implausipod. I’m your host, Dr. Implausible. We’ll explore more on the current state of AI art tools and their role as assistive technologies in our next episode, called AI Refractions. But before we get there, we need to return to our last episode, episode 36, and offer a postscript.

Even though it’s been only a week as of the recording of this episode, September 22nd, 2024, we regret to inform you of the passing of Professor Fredric Jameson, who was the subject of episode 36. As we noted in that episode, he was a giant in the field of literary criticism and philosophy, and a longtime professor at Duke University.

Our condolences go out to his family and friends. Rest in peace. If you’d like to contact the show, you can reach me at drimplausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 ShareAlike license.

You may have noticed at the beginning of the show that we described the show as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated. You may have also noted that there was no advertising during the program, and there is no cost associated with the show, but it does grow from word of mouth of the community, so if you enjoy the show, please share it with a friend or two and pass it along.

There’s also a Buy Me a Coffee link on each show at implausipod.com, which will go to any hosting costs associated with the show. Over on the blog, we’ve started up a monthly newsletter. There will likely be some overlap with future podcast episodes, and newsletter subscribers can get a hint of what’s to come ahead of time, so consider signing up, and I’ll leave a link in the show notes.

Until then, take care and have fun.

Bibliography

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2024). The Curse of Recursion: Training on Generated Data Makes Models Forget (No. arXiv:2305.17493). arXiv. https://doi.org/10.48550/arXiv.2305.17493

Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R., & Gal, Y. (2024). AI models collapse when trained on recursively generated data. Nature, 631(8022), 755–759. https://doi.org/10.1038/s41586-024-07566-y

Snoswell, A. J. (2024, August 19). What is ‘model collapse’? An expert explains the rumours about an impending AI doom. The Conversation. http://theconversation.com/what-is-model-collapse-an-expert-explains-the-rumours-about-an-impending-ai-doom-236415

Nescience and Excession: Jameson and Nostalgia

(this was originally published as Implausipod Episode 36 on September 15, 2024)

https://www.implausipod.com/1935232/episodes/15676490-e0036-nescience-and-excession-jameson-and-nostalgia

Further detail looking at The Nostalgia Curve from Episode 35, and comparing it with Fredric Jameson’s “Nostalgia for the Present” (1989) to see what the established literature says about the topic. We go into Jameson’s writing on science fiction and Philip K. Dick’s “Time Out of Joint” (1959), and take a deep look at the Rumsfeld Matrix in order to introduce the idea of Nescience: the intentional act of not engaging with a known-unknown.


Let me ask you a question. Do you ever have something that you know you need to know, but you know you can’t know just yet? Yeah, me too. In February of 2002, the world was introduced to the concept of unknown unknowns by then-U.S. Secretary of Defense Donald Rumsfeld.

“As we know, there are known knowns. There are things we know we know. We also know there are known unknowns. That is to say, we know there are some things we do not know. But there are also unknown unknowns. The ones we don’t know, we don’t know.” 

Because of the way it was presented, and the seeming incongruity of it, it instantly became fodder for the comedians on late night TV.

But it is one of those things that makes sense if you stop to think about it for even more than a moment. As Rumsfeld stated, unknown unknowns are those things that we don’t know that we don’t know. But here we’re talking about something a little bit different. These are things that we know we don’t know.

More like the known unknowns that Rumsfeld talked about back then. But rather than rushing out and finding out what it’s all about immediately, we hold off for a little bit longer in order to get our own thoughts down. This is an act of nescience, and when it came to the nostalgia curve that we talked about last episode, I had to hold off for a little while. But now it’s time to fill in those gaps in this episode of The Implausipod.

Welcome to The Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And early on, when I began looking at nostalgia in the beginning of August, it became very clear that there were some key authors who had written on nostalgia, authors that I was aware of, but authors I’d never engaged with yet.

So in order to get my own thoughts down and kind of get everything together, I had to engage in that act of nescience, of not looking at what those authors had written until I had everything down that I wanted to say for myself. And this act of nescience comes from having a pretty good idea of what the limits of my knowledge are, and where the things that I know come from.

Now, this may be a side effect of working on a PhD, of developing that body of knowledge and intensely studying things, but it also comes from some reflective practice: looking at what you know, citing the information, and keeping track of everything. So when it came to looking at nostalgia, I knew that Fredric Jameson had written on nostalgia in a work called “Nostalgia for the Present.”

I’d seen the title before, but I had never engaged with it directly, so I had to put that aside as a TBR, to be read. So, nescience. Now, nescience is lack of knowledge, full stop. It’s contrasted with something like ignorance, which is the act of not knowing. And you might be saying, well, isn’t my intentional act of not engaging with Jameson an act of ignorance?

Well, kinda. The popular, or, you know, lay understanding of ignorance is generally that willful stupidity that happens. And here we’re trying to describe an intentional act of delayed learning, and I wanted to dissociate it from all the negative connotations that ignorance has. Nescience is the unknown, in this case both the known unknown and the unknown unknown that Rumsfeld spoke of.

The thing that we don’t know that we don’t even know. Many of the mysteries of the universe would fall within this category, for we are tiny and small creatures on a little rock far off in a distant galaxy. Besides, nescient sounds better, and we’ll lean towards the poetic where we can. There might be lots of things we’re all nescient about.

Often this comes up in terms of, like, media titles: books we haven’t read, TV shows we haven’t seen, movies we haven’t watched yet, games we haven’t played. We might know of them, and given the way modern marketing works, it might be impossible to escape them, but there could be things out there that we’ve never ever seen.

Even though we’ve seen so many clips and memes and spoofs and parodies that it feels like we’ve seen the whole movie. For me, this includes things like Titanic and Schindler’s List, Frozen, American Psycho, Sopranos, Lost, and the list goes on and on and on. Some of the titles that I haven’t seen might surprise you, but there’s a lot of stuff out there, and we’re all constrained with respect to time and resources.

Our time on this planet is finite, after all, and there’s more video uploaded to YouTube every single minute than can be seen in a human lifetime, so we gotta pick and choose, right? And sometimes what we pick and choose is dependent on what we’ve seen in the past, which reminds me of that Rumsfeld bit from the beginning.

Now, I’ve put a copy of the Rumsfeld Matrix up on the blog, because describing something that’s inherently visual often seems like a fruitless task, but there are many copies of it floating around, so a quick trip to the old Bing there should find you some results. Remember, we don’t Google in 2024. But within that matrix, we end up with four categories. There are the known knowns, the stuff that we know that we know, stuff we can recall readily and state with confidence.

We have the known unknowns, the things that we know we don’t know, the things we’re aware might be out there. It could be a book or a movie or whatever, as we mentioned before. This also includes things like weather, travel, external events that happen while you’re not paying attention, that kind of stuff.

And you might not know about it yet, but you’ll find out soon. And then there’s the unknown unknowns, things we don’t know that we don’t know. These are outside of context problems. They’re outside our ability to even imagine in some cases. And we’ll get into the details of these in just a moment. And there’s a fourth category that Rumsfeld left out that’s rather obvious.

It’s the unknown-knowns. Philosopher Slavoj Žižek sniffed this out, and these are the things that we are unaware that we know. These could be tacit knowledge, or instinctual knowledge that we would struggle to explain, or things that we’ve forgotten that were part of our memory. And according to Žižek, they’re also items which one intentionally refuses to acknowledge.

Like, I can’t know that. These include disavowed beliefs and other things we pretend not to know about, even though they’re probably part of our public values. This can be hazardous in some cases. But Žižek has somewhat of a narrow focus here. In the unknown knowns, one of the key elements is that of memory, and memory ties directly into nostalgia.

Memories can be with us constantly, but they often can lay dormant and come rushing back to us in a flood if they’re triggered by something. And those groups that are trying to operationalize the nostalgia curve, and often for monetary gain, are doing a whole lot to bounce up and down on those triggers.

Trying to evoke or elicit long forgotten memories of childhood, of toys or cartoons, of lazy Saturday mornings and long summer days, and market them or re market them to an older, more mature, and gainfully employed audience that’s been carefully diagnosed and segmented. And this is where a lot of the literature on nostalgia resides.

And why I had to engage in an act of nescience. Fredric Jameson is a literary critic and philosopher who, as of the recording of this episode in 2024, is the director of the Institute for Critical Theory at Duke University. He’s written a lot in a lot of fields, most notably on things like postmodernism and capitalism, and “Nostalgia for the Present” was one of his key works.

Originally published in the South Atlantic Quarterly in 1989, it’s been reprinted in various books and collections of his since, such as 1992’s Postmodernism or The Cultural Logic of Late Capitalism, which, given some of the topics that we’ve talked about here on this podcast, you might be surprised I haven’t read either.

But, as we said, time is finite, and we come to these things as we’re meant to. So for me, that intentional act of not engaging with it, that act of nescience, was me understanding that, yes, he’s written a lot on it, but I wanted to get my own thoughts on nostalgia down as best I could, which we’ve seen in the previous episode of the podcast, as well as in a number of blog posts over on the Implausipod blog. Getting those down helped me to get a sense of where I am and how that would be in relation to what Jameson has written.

So, to quickly summarize our last episode: for us, nostalgia is representational in a memetic way. You might say that nostalgia is an assemblage that puts various parts together, and that the perceived value of the nostalgia of a property can impact the financing and development of that property.

This value is subjective and also relative, so different producers might value it differently. Nostalgia is often subjective and can be constraining, because you’re limited by what’s gone before. Nostalgia can be contrasted with novelty, or that idea of something new. And real nostalgia can be the audience longing for something that was actually produced.

Whereas imagined nostalgia is something the audience thinks they’ve seen before. And nostalgia can be organic, coming from the audience, or manufactured by the producer. Finally, we could say that nostalgia is also substrate neutral. It means it can happen in almost any field, especially with respect to the arts.

But it’s also transferable; it’s a transmedia property. If I have nostalgia for Pokémon, for instance, I might be interested in a Pokémon video game, even though I only really watched the cartoons when I was young. I don’t know why I’m referencing Pokémon specifically; it’s clearly after my time. But in any event, what does Jameson have to say about nostalgia?

Nostalgia for the Present is a piece of media criticism where Jameson looks at the role of nostalgia in three works: Philip K. Dick’s novel Time Out of Joint from 1959, Jonathan Demme’s Something Wild from 1986, and David Lynch’s Blue Velvet, also from 1986. The three titles comprise a unique selection of content, or at least as diverse a one as one might choose to analyze on any given topic, I suppose, though given the breadth of what we cover here on this channel, I shouldn’t be one to criticize, or throw stones in glass houses and all that.

Time Out of Joint is a faux time travel story where a man who is apparently trapped in the 1950s notices small differences and errors in reality, which lead him to suspect that something weird is going on. Kind of like the déjà vu moment in The Matrix. These themes are typical of Philip K. Dick.

They’re what we’ve come to expect: the representations of reality, the notion that there’s something behind the scenes, the wavering nature of it all, the false consciousness that often pervades his work. Looking at it in 2024, we’ve seen so many of those elements in other adaptations of his work: Blade Runner, A Scanner Darkly, Total Recall, Minority Report, and more.

Time Out of Joint seems almost unique among Philip K. Dick’s works in that it hasn’t been adapted for film or television yet. Truth be told, it has been copied many, many times before. In Time Out of Joint, the protagonist’s sense that there’s something else going on behind the reality is quite astute: he is captured in a Potemkin village of the 1950s, rebuilt in 1997 during an interstellar civil war.

It’s not quite like the 1997 of our reality, of course; we’re obviously nowhere near interstellar capabilities, and like a lot of older science fiction, it’s now firmly rooted in our past, in a future that will not come to be. At times, Time Out of Joint feels more like a rough draft of The Truman Show, the 1998 movie starring Jim Carrey, where the apparatus moves around to ensure the world stays static for this one particular man, and this feeds into our various narcissistic main character desires.

And while The Truman Show isn’t quite a direct copy, the film clip that best describes Time Out of Joint would be the epilogue to Captain America: The First Avenger, where he wakes in a room and recognizes from the radio broadcasts that things are not quite what they seem. If there was a CliffsNotes version of this 220-page novel, that would probably be it.

But, there’s more. Jameson notes how Time Out of Joint is set up to be a model of the 1950s, as something that the protagonist will accept, which again echoes The Matrix, in the machines’ creation of the late 1990s as their virtual world in order to pacify the humans kept in the endless rows of creches.

So, aside from elements of Time Out of Joint appearing in at least three major motion pictures, it’s much like many of the works of Philip K. Dick, which have been copied so many times, like at least six by our count, that it’s hard to recognize the original source. Maybe that speaks to why it hasn’t been adapted anywhere else, or at least not directly.

As Jameson states, Time Out of Joint, quote, “is a collective wish fulfillment and the expression of a deep unconscious yearning for a simpler and more human social system, a small town utopia very much in the North American frontier tradition.” End quote. And this is where that nostalgia comes in. We mentioned last episode how you can have cultural and social and political nostalgia for those simpler times, where things were kind of more manageable.

And that yearning can be felt by a lot of people, which means it could be operationalized and mobilized and directed to various purposes. But again, this is nothing new. Jameson was writing in 1989 about something from 1959, and this cycles back much, much further. Jameson wrote about two other titles, too, of course: Demme’s Something Wild and Lynch’s Blue Velvet. And while they’re fantastic films, they’re here mostly to bolster Jameson’s case and provide further evidence that allowed him to triangulate towards the element of nostalgia that he’s looking for, as our familiarity and focus are more towards the science fiction side of things here on the ImplausiPod.

We’ll stick towards that and see what Jameson has to say about science fiction.

For Jameson, science fiction is a “category”. And if you’re hearing that with me making bunny ear signs, then you’re hearing correctly. Nowadays, we might just want to call it a genre, one that came about during that Eisenhower period, a period of the U.S. conquering space and battling communists, and all the ideology that’s inherently bound within the literature from that era.

The category might be bigger, going large to include some real lit, like More’s Utopia and others. Or it might be more tightly bound to the pulp novels. Personally, I like the expansive view of sci-fi for our point of view, one that loops in Shelley’s Frankenstein by definition and intent, and starts maybe with Jules Verne writing Journey to the Center of the Earth in 1864, because that scoops up H. G. Wells’s stuff as well and gives us a really strong foundation for what science fiction is.

The classic era of science fiction is probably that 1950s era, the golden age of rocket ships and the like. A particular vision of the future, both technologically and aesthetically. An aspirational view of the future that helps us come to terms with and process our own history, and understand how we fit within the current era. Basically, how did we get to now? Jameson contrasts sci-fi with the historical novel, a cultural form that, along with costume films and period dramas on TV, reflected the ideology of the feudal classes and had fallen off throughout the late 20th century as the then-new middle class sought something different, something alien that amped up their own achievements.

Sci-fi came on the scene and said, hold my ray gun, I got this. The historical novel failed not simply due to its feudalist ideals, but because, according to Jameson, quote, in the postmodern age we no longer tell ourselves our history in that fashion, but also because we no longer experience it that way, and indeed, perhaps no longer experience it at all, end quote.

For Jameson, at least at the time, our mediated nature meant that we were living in an ahistorical age. And while this may have been true in 1989, I don’t know if that’s any longer the case. The recent rise of historicism and historicity in their various forms in the 21st century may suggest that the various authors talking about the rise of techno-feudalism might be more right than we suppose.

But there’s another question there. Did the return to those historical feudal ideals, the types of stories you tell about kings and queens, become more popular because we are living in that type of age? Or did they help bring it about? Which came first: Shakespeare in Love and The Lord of the Rings, or techno-feudalism? Hard to say, but this feels like something we should save for the ongoing debate about fantasy versus sci-fi, and we’ll touch on that at a later point in time. For Jameson, science fiction is an aspirational vehicle for the masses who are rejecting the previous historical viewpoint.

Compared to the historical novel, quote, science fiction equally corresponds to the waning or the blockage of that historicity, and particularly in our own time, in the postmodern era, to its crisis and paralysis, its enfeeblement and repression, end quote. There are a lot of reasons why this occurs, and they have less to do with the content, though there are parts of that too, to be sure, or at least particular aesthetic choices that are made, and more to do with the socio-economic conditions of post-World War II USA, North America, and the United Kingdom.

And again, this is another place where nostalgia starts to come in, because both historical novels and sci-fi have a tie to the imagination: an imagined past, or an imagined future. They can use representation in their relationship with the past or future, but they are really a perception of the present as history, a way that we can look at our own situation from a few steps removed.

This is the conceit that’s seen throughout the Star Treks, the Star Wars, the Warhammers, the Aliens: the other is but an aspect of ourselves, our society, and our culture that we are trying to take a closer look at. And in Time Out of Joint, the society that we’re trying to take a closer look at is the 1950s.

Philip K. Dick was writing Time Out of Joint in 1959, or at least it was published then; he was probably writing it a little earlier. He was looking at the decade that had just passed and choosing what its essential elements might look like from the perspective of someone from 1997, the year of the fictional interstellar war in the novel. And for the most part, he got it right.

Jameson presents us with a list of things from Time Out of Joint that evoke the 1950s: Eisenhower, Marilyn Monroe, PTAs, and the like. And if the list that Jameson gives us reads like a certain Billy Joel song, that’s probably not by accident. Though We Didn’t Start the Fire also being released in 1989 is almost certainly coincidental.

Nostalgia can often look like a collection of stuff in some hoarder’s back room. The items are referents to that era, not facts per se, but ideas about those facts. The question Jameson asks, the thesis for his whole paper, is did the period see itself this way? And Philip K. Dick’s choices seem to suggest that the answer is yes.

There’s a realistic feel to how PKD describes the 1950s, a feel that arises from the cultural references that are used. And Jameson notes that if there is a quote-unquote realism in the 50s, quote, it is presumably to be found there, in mass cultural representation, the only kind of art willing and able to deal with the stifling Eisenhower realities of the happy family in the small town, of normalcy and non-deviant everyday life, end quote.

So for a spectator looking back from the 1980s, the image of the 1950s comes from the pop culture artifacts that the people of the 1950s understood themselves by as well. We’re looking at them from a distance, through a scanner, darkly. And one that’s getting darker over time.

What this whole process accomplishes is reification. The reality gets blurred by the nostalgic elements, and these end up becoming the signifier that represents the whole. So our sense of ourselves, and of any moment in history, may have little or nothing to do with reality. The objective reality, that is.

Which is the biggest Philip K. Dick-style head trip that you’ve ever felt before. It’s hard to put into words, though all the works of Philip K. Dick, and all the Philip K. Dick-inspired media out there, keep trying to show us and tell us over and over again. It’s tricky, though. There’s a lot of speculation required, and Time Out of Joint is ultimately a piece of speculative fiction. Quote, it is a speculation which presupposes the possibility that, at an outer limit, the sense people have of themselves and their own moment of history may ultimately have nothing whatsoever to do with its reality, end quote. How we think of ourselves, our histories, and our generations is only tied to a fraction of the things that are out there.

And much of it may be that imagined nostalgia we talked about a little while ago. There’s a whole lot of unknowns out there, and all of us are privy to only a small fraction of what’s available. And this brings us back to what we were talking about near the beginning. What did Fredric Jameson have to say about nostalgia in total, and how does that connect with the concept of the nostalgia curve that we introduced last episode?

Are there elements of the Jamesonian idea of nostalgia that connect with ours? We can kind of see that across at least three of his works and four of our categories. We can see how our idea of nostalgia, being a representation of a thing rather than the thing itself, is fundamental to Jameson’s work and carries on throughout it.

The idea of a thing, not the thing itself. And for Jameson, those mediated examples coming from pop culture versions, then informing the quote-unquote generational logic for successive viewers, are important too. This connects with our idea of imagined nostalgia, the kind that the audience thinks they are remembering rather than actually experienced.

Jameson himself doesn’t really distinguish between different kinds of nostalgia, at least not in the ways that we do. He doesn’t look at the source of where it is produced, but at what the nostalgia is for, hence the title, obviously: a 1980s audience looking for the imagined view of the 1950s, an interstellar warrior in the text longing for their imagined view of the same decade, or a writer from the 1950s constructing a longing for the decade while it is still happening.

These are all nostalgia writ large to Jameson, whereas we’ve increased the granularity a little to fine-tune our analysis in the nostalgia curve last episode. Jameson looks at the construction of nostalgia in various media, novels and film in this case, though there could be others, and this ties in with our idea of substrate neutrality: that the nostalgia curve could be a transmedia property, not particularly tied to any one kind or another.

So whether we’re looking at Pokémon or action figures or whatever, we can see it across the various realms. The elements of nostalgia we looked at that were focused on value are largely absent from Jameson’s work. They’re not completely absent, but he was looking at the reification of ideology that takes place via nostalgia, and not necessarily at the production-culture and political-economy elements that we’re looking at, the ones that tie back directly to the development of new titles in Hollywood and beyond.

Now, there’s more to nostalgia than just the meaty aspects, though, and we’ll need to take a look at the connection that nostalgia has with memory. The other place nostalgia shows up is as part of our soylent culture, which we mentioned earlier: the various bits and pieces of past properties that show up, or are dredged back up, by the cultural sieves that are our generative AI tools and the platforms that encourage their use as spreadable media.

Media theorist Marshall McLuhan talked about how new media is built out of the pieces of the old, and nowhere is that more true than in our current online culture. So we’ll take a deeper look at this next episode. I hope you join us then, on the ImplausiPod.

Once again, thank you for joining us on the ImplausiPod. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 Share-Alike license.

You may have noticed at the beginning of the show that we described it as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated. You may have also noted that there was no advertising during the program, and there’s no cost associated with the show.

But it does grow from word of mouth in the community, so if you enjoy the show, please share it with a friend or two and pass it along. There’s also a Buy Me a Coffee link on each show at implausipod.com, which will go towards any hosting costs associated with the show. Over on the blog, we’ve started up a monthly newsletter.

There will likely be some overlap with future podcast episodes, and newsletter subscribers can get a hint of what’s to come ahead of time, so consider signing up and I’ll leave a link in the show notes. Until then, take care and have fun.