Newsletter Issue 7

Working on the upcoming issue of the Newsletter, we’ve got some info on accelerationism on deck, so I’ll just add some links:

These, plus a few more, are things that are mentioned in passing in the second half of the California Ideology. That one has taken a little while, as life got a little hectic in October and November, but I’m happy to be bringing it your way shortly.

Caught up

And with yesterday’s post, we’ve caught up with our backlog of ImplausiPod episodes and transcripts here on the site. You can still find them over at the dedicated site, as well as through most* podcast apps.

We’ll continue posting those episodes here now, as they air, as well as on the Indie version of the site. We’re still in the process of transitioning the blog and feeds there. It’s moving along, but still not quite ready for prime time. That’s part of the indie charm, right?


*: We’re not on Spotify, iHeartRadio, or Amazon / Audible, for reasons. Mostly because I don’t want to be. I know that limits reach a bit, but that’s okay.

AI Refractions

(this was originally published as Implausipod Episode 38 on October 5th, 2024)

https://www.implausipod.com/1935232/episodes/15804659-e0038-ai-refractions

Looking back over the year since the publication of our AI Reflections episode, we take a look at the state of the AI discourse at large, where recent controversies, including those surrounding NaNoWriMo and whether AI counts as art or can assist with science, bring the challenges of studying the new medium to the forefront.


In 2024, AI is still all the rage, but some are starting to question what it’s good for. There are even a few who will claim that there’s no good use for AI whatsoever, though this denialist argument takes it a little bit too far. We took a look at some of the positive uses of AI a little over a year ago in an episode titled AI Reflections.

But it’s time to check out the current state of the art, take another look into the mirror and see if it’s cracked. So welcome to AI Refractions, this episode of ImplausiPod.

Welcome to The ImplausiPod, an academic podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in this episode, we’ve got a lot to catch up on with respect to AI. So we’re going to look at some of the positive uses that have come up, how AI relates to creativity, and how statements from NaNoWriMo caused a bit of controversy.

And how that leads into AI’s use in science. But it’s not all sunny over in AI land. We’ve looked at some of the concerns before with things like échanger, and we’ll look at some of the current critiques as well. And then look at the value proposition for AI, and how recent shakeups at OpenAI in September of 2024 might relate to that.

So we’ve got a lot to cover here on our near one year anniversary of that AI Reflections episode, so let’s get into it. We’ve mentioned AI a few other times since that episode aired in August of 2023. It came up in episode 28, our discussion on black boxes and the role of AI handhelds, as well as episode 31 when we looked at AI as a general purpose technology.

And it also came up a little bit in our discussion about the arts, things like échanger and the Sphere, and how AI might be used to assist in higher fidelity productions. So it’s been an underlying theme across a lot of our episodes. And I think that’s just the nature of where we sit with relation to culture and technology.

When you spend your academic career studying the emergence of high technology and how it’s created and developed, when a new one comes on the scene, or at least becomes widely commercially available, you’re going to spend a lot of time talking about it. And we’ve been obviously talking about it for a while.

So if you’ve been with us for a while, first off, thank you, and this may be familiar to you; and if you just started listening recently, welcome, and feel free to check out those episodes that we mentioned earlier. I’ll put links to the specific ones in the text. And looking back at episode 12, we started by laying down a definition of technology.

We looked at how it functioned as an extension of man, to borrow from Marshall McLuhan, but the working definition of technology that I use, the one that I published in my PhD, is that “Technology is the material embodiment of an artifact and its associated systems, materials, and practices employed to achieve human ends.”

And this definition of technology covers everything from the sharp stick and sharp stick-related technologies like spears, pencils, and chopsticks, to our more advanced tech like satellites and AI and VR and robots and stuff. When you really think about it, it’s a very expansive definition, but that’s part of its utility, in allowing us to recognize and identify things.

And by being able to cover everything from sharp sticks to satellites, from language to pharmaceuticals to games, it really covers the gamut of things that humans use technology for, and contributes to our emancipatory view of technology: that technology is ultimately assistive and can aid us with issues that we’re struggling with.

We recognize that there are other views and perspectives, but this is where we fall on the spectrum. Returning to episode 12, we showed how this emancipatory stance contributes to an empathetic view of technology, where we can step outside of our own frame of reference and think about how technology can be used by somebody who isn’t us.

Whether it’s a loved one, somebody close to us, a member of our community or collective, or, more widely, somebody that we’ll never come into contact with. How persons with different abilities and backgrounds will find different uses for the technology. Like the famous quote goes, “the street finds its own uses for things.”

Maybe we’ll return back to that in a sec. We finished off episode 12 looking at some of the positive uses of AI at that time that had been published just within a few weeks of us recording that episode. People were recounting how they were finding it as an aid or an enhancement to their creativity, and news stories were detailing how the predictive text abilities as well as generative AI facial animations could help stroke victims, as well as persons with ALS being able to converse at a regular tempo.

So by and large it could function as an assistive technology, and in recent weeks we have started trying to catalogue all those stories. Back in July, over on the blog, we created the Positive AI Archive, a place where I could put the links to all the stories that I come across. Me being me, I forgot to update it since, but we’ll get those links up there and you should be able to follow along.

We’ll put the link to the archive in the show notes regardless. And, in the interest of positivity, that’s kinda where I wanted to start the show.

The street finds its own uses for things. It’s a great quote from Burning Chrome, a collection of short stories by William Gibson. It’s the collection that held Johnny Mnemonic, which led to the film with Keanu Reeves, and then subsequently The Matrix and Cyberpunk 2077 and all those other derivative works. “The street finds its own uses for things” is a nuanced phrase, and nuance can be required when we’re talking about things, especially online, when everything gets reduced to a soundbite or a five-second dance clip.

“The street finds its own uses for things” is a bit of a mantra, and it’s one that I use when I’m studying the impacts of technology. What it means is that the end users may put a given technology to tasks that its creators and developers never foresaw, or even intended.

And what I’ve been preaching here, what I mentioned earlier, is the empathetic view of technology. And we look at who benefits from using that technology, and what we find with the AI tools is that there are benefits. The street is finding its own uses for AI. In August of 2024, a number of news reports talked about Casey Harrell, a 46 year old father suffering from ALS, amyotrophic lateral sclerosis, who was able to communicate with his daughter using a combination of brain implants and AI assisted text and speech generation.

Some of the work on these assistive technologies was done with grant money, and there’s more information about the details behind that work, and I’ll link to that article here. There’s multiple technologies that go into this, and we’re finding that with the AI tools, there’s very real benefits for persons with disabilities and their families.

Another thing we can do when we’re evaluating a technology is see where it’s actually used, where the street is located. And when it comes to assistive AI tools like ChatGPT, the street might not be where you think it is. In a recent survey published by Boston Consulting Group in August of 2024, they showed where the usage of ChatGPT was the highest.

It’s hard to visually describe a chart, obviously, but at the top of the scale we saw countries like India, Morocco, Argentina, Brazil, and Indonesia. English-speaking countries like the US, Australia, and the UK were much further down the chart. The countries where ChatGPT is finding the most adoption are countries where English is not the primary language.

They’re in the global south, countries with large populations that have also had to deal with centuries of exploitation. And that isn’t to say that the citizens of these countries don’t have concerns, they do, but they’re using it as an assistive technology. They’re using it for translation, to remove barriers and to help reduce friction, and to customize their own experience. And these are just a fraction of the stories that are out there. 

So there are positive use cases for AI, which may seem to directly contradict various denialist arguments that are trying to gaslight you into believing that there is no good use for AI. This is obviously false.

If the positive view, the use on the street, is being found by persons with disabilities, it follows that the denialist view is ableist. If the positive view, that use on the street, is being found by persons of color, non English speakers, persons in the global south, then the denialist view will carry all those elements of oppression, racism, and colonialism with it.

If the use on the street is by those who find their creativity unlocked by the new tools, who are finally able to express themselves where previously they may have struggled with a medium or been gatekept from an arts education or poetry or English or what have you, only to now find themselves told that this isn’t art or this doesn’t count despite all evidence to the contrary, then there are massive elements of class and bias that go into that as well.

So let’s be clear. An empathetic view of technology recognizes that there are positive use cases for AI. These are being found on the street by persons with disabilities, persons of the global south, non-English speakers, and persons across the class spectrum. To deny this is to deny objective reality.

It’s to deny all these groups their actual uses of the technology. Are there problems? Yes, absolutely. Are there bad actors that may use the technology for nefarious means? Of course; this happens on a regular basis, and we’ll put a pin in that and return to it in a few moments. But to claim that there are no good uses is to deny the experience of all these groups that are finding uses for it, and we’re starting to see that when this denialism is pointed out, it causes a great degree of controversy.

In a statement made early in September of 2024, NaNoWriMo, the nonprofit organization behind National Novel Writing Month, said it was acceptable to use AI as an assistive technology when writers were working on their pieces for NaNoWriMo, because this supports their mission, which is to, quote, “provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds, on and off the page.” End quote.

But what drew the opprobrium of the online community is that they noted that some of the objections to the use of AI tools are classist and ableist. And, as we noted, they weren’t wrong. For all the reasons we just explained and more. But, due to the online uproar, they’ve walked that back somewhat.

I’ll link to the updated statement in the show notes. The thing is, if you believe that using AI for something like NaNoWriMo is against the spirit of things, that’s your decision. They’ve clearly stated that they feel that assistive technologies can help people pursuing their dreams. And if you have concerns that they’re going to take stuff that’s put into the official app and sell it off to an LLM or AI company, well, that’s a discussion you need to have with NaNoWriMo, the nonprofit.

You’re still not prevented from doing something like NaNoWriMo using Notepad or Obsidian or however else you take your notes, but that’s your call. I, for one, was glad to see that NaNoWriMo called it out. One of the things that I found, both in my personal life and in my research when I was working on the PhD and looking at Tikkun Olam Makers, is that it can be incredibly difficult and expensive for persons with disabilities to find a tool that can meet their needs, if it exists at all. So if you’re wondering where I come down on this, I’m on the side of the persons in need. We’re on the side of the streets. You might say we’re streets ahead.

Of course, one of the uses that the street finds for things has always been art. Or at least work that eventually gets recognized as art. It took a long time for the world to recognize that the graffiti of a street artist might count, but in 2024, if one was to argue that Banksy wasn’t an artist, you’d get some funny looks.

There are several threads of debate surrounding AI art and generative art, including the role of creativity, the provenance of the materials, and the ethics of using the tools, but the primary question is what counts? What counts as art, and who decides that it counts? That’s the point that we’re really raising with that question, and obviously it ties back to what we were talking about last episode when it comes to Soylent Culture, and before that when we were talking about the recently deceased Fredric Jameson as well.

In his work Nostalgia for the Present from 1989, Jameson mentioned this with respect to television. He said, Quote, “At the time, however, it was high culture in the 1950s who was authorized, as it still is, to pass judgment on reality, to say what real life is and what is mere appearance. And it is by leaving out, by ignoring, by passing over in silence and with the repugnance one may feel for the dreary stereotypes of television series, that high art palpably issues its judgments.” end quote. 

Now, “high art” in bunny quotes isn’t issuing anything, obviously; Jameson’s reifying the term. But what Jameson is getting at is that there are stakes for those involved about what does and does not count. And we talked about this last episode, where it took a long time for various forms of new media to finally be accepted as art on their own terms.

For some, it takes longer than others. I mean, Jameson was talking about television in the 1980s, for something that had already existed for decades at that point. And even then, it wasn’t until the 90s and 2000s, to the eras of Oz and The Sopranos and Breaking Bad and Mad Men and the quote unquote “golden age of television” that it really began to be recognized and accepted as art on its own terms.

Television was seen as disposable ephemera for decades upon decades. There’s a lot of work that goes on on behalf of high art by those invested in it to valorize it and ensure that it maintains its position. This is why we see one of the critiques about AI art being that it lacks creativity, that it is simply theft.

As if the provenance of the materials that get used in the creation of art suddenly matters on whether it counts or not. It would be as if the conditions in the mines of Afghanistan for the lapis lazuli that was crushed to make the ultramarine used by Vermeer had a material impact on whether his paintings counted as art. Or if the gold and jewels that went into the creation of the Fabergé eggs, subsequently gifted to the Russian royal family, mattered as to whether those count. It’s a nonsense argument. It makes no sense. And it’s completely orthogonal to the question of whether these works count as art.

And similarly, when people say that good artists borrow, great artists steal, well, we’ll concede that Picasso might have known a thing or two about art, but where exactly are they stealing it from? The artists aren’t exactly tippy-toeing into the art gallery and yoinking it off the walls now, are they?

No, they’re stealing it from memory, from their experience of that thing, and the memory is the key. Here, I’ll share a quote: “Art consists in bringing the memory of things past to the surface. But the author is not a passéist. He is a link to history, to memory, which is linked to the common dream.” This is of course a quote from Saul Bellow, talking about his field, literature, and while I know nowadays not as many people are as familiar with his work, if you’re at a computer while you’re listening to this, it might be worth it to just look him up.

Are we back? Awesome. Alright, so what the Nobel Prize Laureate and Pulitzer Prize winner Saul Bellow was getting at is that art is an act of memory, and we’ve been going in depth into memory in the last three episodes. And the artist can only work with what they have access to, what they’ve experienced during the course of their lifetime.

The more they’ve experienced, the more they can draw on and put into their art. And this is where the AI art tools come in as an assistive technology, because they would have access to much, much more than a human being can experience, right? Possibly anything that has been stored and put into the database and the creator accessing that tool will have access to everything, all the memory scanned and stored within it as well.

And so then the act of art becomes one of curation, of deciding what to put forth. AI art is a digital art form, or at least everything that’s been produced to date is. So how does that differ, right? Well, let me give you an example. If I reach over to my paint shelf and grab an ultramarine paint, right, a cheap Daler-Rowney acrylic ink, it’s right there with all the other colors that might be available to me on my paint shelf.

But, back in the day, if we were looking for a specific blue paint, an ultramarine, it would be made with lapis lazuli, like the stuff that Vermeer was looking for. It would be incredibly expensive, and so the artist would be limited in their selection to the paints that they had available to them, or be limited in the amount that they could actually paint within a given year.

And sometimes the cost would be exorbitant. For some paints, it still actually is, but a digital artist working on an iPad or a Wacom tablet or whatever would have access to a nigh unlimited range of colors. And so the only choice and selection for that artist is by deciding what’s right for the piece that they’re doing.

The digital artist is not working with a limited palette of, you know, a dozen paints or whatever they happen to have on hand. It’s a different kind of thing entirely. The digital artist has a much wider range of things to choose from, but it still requires skill. You know, conceptualization, composition, planning, visualization.

There’s still artistry involved. It’s no less art, but it’s a different kind of art. But one that already exists today and one that’s already existed for hundreds of years. And because of a banger that just got dropped in the last couple of weeks, it might be eligible for a Grammy next year. It’s an allographic art.

And if you’re going to try and tell me that Mozart isn’t an artist, I’m going to have a hard time believing you.

Allographic art is a type of art that was originally introduced by Nelson Goodman back in the 60s and 70s. Goodman is kind of like Gordon Freeman, except, you know, not a particle physicist. He was a mathematician and aesthetician, or sorry, philosopher interested in aesthetics, not esthetician as we normally call them now, which has a bit of a different meaning and is a reminder that I probably need to book a pedicure.

Nelson was interested in the question of what’s the difference between a painting and a symphony, and it rests on the idea of uniqueness versus forgery. A painting, especially an oil painting, can be forged, but it relies on the strokes and the process and the materials that went into it, so you need to basically replicate the entire thing in order to make an accurate forgery, much like Pierre Menard trying to reproduce Cervantes’ Quixote in the Jorge Luis Borges short story.

Whereas a symphony, or any song really, that is performed based off of a score, a notational system, is simply going to be a reproduction of that thing. And this is basically what Walter Benjamin was getting at when he was talking about art in the age of mechanical reproduction, too, right? So, a work that’s based off of a notational system can still count as a work of art.

Like, no one’s going to argue that a symphony doesn’t count as art, or that Mozart wasn’t an artist. And we can extend that to other forms of art that use a notational system as well. Like, I don’t know, architecture. Frank Lloyd Wright didn’t personally build Fallingwater or the Guggenheim, but he created the plans for them, right?

And those were enacted. We can say that, yeah, there’s artistic value there. So these things, composition, architecture, et cetera, are allographic arts, as opposed to autographic arts, things like painting or sculpture, or in some instances, the performance of an allographic work. If I go to see an orchestra playing a symphony, a work based off of a score, I’m not saying that I’m not engaged with art.

And this brings us back to the AI Art question, because one of the arguments you often see against it is that it’s just, you know, typing in some prompts to a computer and then poof, getting some results back. At a very high level, this is an approximation of what’s going on, but it kind of misses some of the finer points, right?

When we look at notational systems, we could have a very, you know, simple set of notes that are there, or we could have a very complex one. We could be looking at the score for Chopsticks or Twinkle Twinkle Little Star, or a long lost piece by Mozart called Serenade in C Major that he wrote when he was a teenager and has finally come to light.

This is an allographic art, and the fact that it can be produced and played 250 years later kind of proves the point. But that difference between simplicity and complexity is part of the key. When we look at the prompts that are input into a computer, we rarely see something with the complexity of say a Mozart.

As we increase the complexity of what we’re putting into one of the generative AI tools, we increase the complexity of what we get back as well. And this is not to suggest that the current AI artists are operating at the level of Mozart either. Some of the earliest notational music we have is found on ancient cuneiform tablets called the Hurrian Hymns, dating back to about 1400 BCE, so it took us a little over 3000 years to get to the level of Mozart in the 1700s.

We can give the AI artists a little bit of time to practice. The generative AI art tools, which are very much in their infancy, appear to be allographic arts, following in the lineage of procedurally generated art, which has been around for a little while longer. And as an art form in its infancy, there’s still a lot of contested areas.

Whether it counts, the provenance of materials, ethics of where it’s used, all of those things are coming into question. But we’re not going to say that it’s not art, right? And as an art, as work conducted in a new medium, we have certain responsibilities for documenting its use, its procedures, how it’s created.

In the introduction to 2001’s The Language of New Media, Lev Manovich, in talking about the creation of a new medium, digital media in this case, noted how there was a lost opportunity in the late 19th and early 20th century with the creation of cinema. Quote, “I wish that someone in 1895, 1897, or at least 1903 had realized the fundamental significance of the emergence of the new medium of cinema and produced a comprehensive record: interviews with audiences, a systematic account of narrative strategies, scenography, and camera positions as they developed year by year, an analysis of the connections between the emerging language of cinema and different forms of popular entertainment that coexisted with it. Unfortunately, such records do not exist. Instead, we are left with newspaper reports, diaries of cinema’s inventors, programs of film showings, and other bits and pieces, a set of random and unevenly distributed historical samples. Today, we are witnessing the emergence of a new medium, the metamedium of the digital computer. In contrast to a hundred years ago, when cinema was coming into being, we are fully aware of the significance of this new media revolution. Yet I am afraid that future theorists and historians of computer media will be left with not much more than the equivalents of the newspaper reports and film programs from cinema’s first decades.” End quote.

Manovich goes on to note that a lot of the work that was being done on computers, especially in the 90s, was stuff prognosticating about its future uses, rather than documenting what was actually going on.

And this is the risk that the denialist framing of AI art puts us in. By not recognizing that something new is going on, that art is being created, an allographic art, we lose the opportunity to document it for the future.

And as with art, so too with science. We’ve long noted that there’s an incredible amount of creativity that goes into scientific research, that the STEM fields, science, technology, engineering, and mathematics, require and benefit so much from the arts that they’d be better classified as STEAM, and a small side effect of that may mean that we see better funding for the arts at the university level.

But I digress. In the examples I gave earlier of medical research, of AI being used as an assistive technology, we were seeing some real groundbreaking developments, the boundaries being pushed, and we’re seeing that throughout the science fields. Part of this is because of what AI does well with things like pattern recognition, allowing weather forecasts, for example, to be predicted more quickly and accurately.

It’s also been able to provide more assistance with medical diagnostics and imaging as well. The massive growth in the number of AI-related projects in recent years is often due to the fact that a number of these projects are just rebranded machine learning or deep learning. In a report released by the Royal Society in England in May of 2024, as part of their Disruptive Technology for Research project, they note how, quote, “AI is a broad term covering all efforts aiming to replicate and extend human capabilities for intelligence and reasoning in machines.”

End quote. They go on further to state that, quote, “Since the founding of the AI field at the 1956 Dartmouth Summer Research Project on Artificial Intelligence, many different techniques have been invented and studied in pursuit of this goal. Many of these techniques have developed into their own subfields within computer science, such as expert systems and symbolic reasoning.” End quote.

And they note how the rise of the big data paradigm has made machine learning and deep learning techniques a lot more affordable and accessible, and scalable too. And all of this has contributed to the amount of stuff that’s floating around out there that’s branded as AI. Despite this confusion in branding and nomenclature, AI is starting to contribute to basic science.

A New York Times article published in July by Siobhan Roberts talked about how a couple of AI models were able to compete at the level of a silver medalist at the recent International Mathematical Olympiad. This is the first time that an AI model has medaled at that competition. So there may be a role for AI to assist even high-level mathematicians, to function as collaborators and, again, assistive technologies there.

And we can see this in science more broadly. In a paper submitted to arXiv.org in August of 2024, titled “The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery,” authors Lu et al. use a frontier large language model to perform research independently. Quote, “We introduce the AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a scientific paper, and then runs a simulated review process for evaluation.” End quote.

So, a lot of this is scripts and bots hooking into other AI tools in order to simulate the entire scientific process. And I can’t speak to the veracity of the results that they’re producing in the fields that they’ve chosen. They state that their system can, quote, “produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer,” end quote.

And that’s fine, but it shows that the process of doing the science can be assisted in various realms as well. And one of those areas of assistance is in providing help for stuff outside the scope of knowledge of a given researcher. AI as an aid in creativity can help explore the design space and allow for the combination of new ideas outside of everything we know.

As science is increasingly interdisciplinary, we need to be able to bring in more material, more knowledge, and that can be done through collaboration, but here we have a tool that can assist us as well. As we talked about with nescience and Excession a few episodes ago, we don’t know everything. There’s more than we can possibly know, so the AI tools help expand the field of what’s available to us.

We don’t necessarily know where new ideas are going to come from. And if you don’t believe me on this, let me reach out to another scientist who said some words on this back in 1980. Quote, “We do not know beforehand where fundamental insights will arise from about our mysterious and lovely solar system.

And the history of our study of the solar system shows clearly that accepted and conventional ideas are often wrong, and that fundamental insights can arise from the most unexpected sources.” End quote. That, of course, is Carl Sagan, from an October 1980 episode of Cosmos: A Personal Voyage, titled Heaven and Hell, where he talks about the Velikovsky affair.

I haven’t spliced in the original audio because I’m not looking to grab a copyright strike, but it’s out there if you want to look for it. And what Sagan is describing there is basically the process by which a Kuhnian paradigm shift takes place. Sagan is speaking to the need to reach beyond ourselves, especially in the fields of science, and the AI assisted research tools can help us with that.

And not just in the conduct of the research, but also in the writing and dissemination of it. Not all scientists are strong or comfortable writers or speakers, and many of them come to English as a second, third, or even fourth language. And the role of AI tools as translation devices means we have more people able to communicate and share their ideas and participate in the pursuit of knowledge.

This is not to say that everything is rosy. Are there valid concerns when it comes to AI? Absolutely. Yes. We talked about a few at the outset and we’ve documented a number of them throughout the run of this podcast. One of our primary concerns is the role of the AI tools in échanger, that replacement effect that happens that leads to technological unemployment.

Much of the initial hype and furor around the AI tools was people recognizing that potential for échanger following the initial public release of ChatGPT. There are also concerns about the degree to which the AI tools may be used as instruments of control, and how they can contribute to what Gilles Deleuze calls a control society, which we talked about in our Reflections episode last year.

And related to that is the lack of transparency, the degree to which the AI tools are black boxes, where based on a given set of inputs, we’re not necessarily sure about how it came up with the outputs. And this is a challenge regardless of whether it’s a hardware device or a software tool.

And regardless of how the AI tool is deployed, its increased prevalence means we’re heading toward a soylent culture, with an increased amount of data smog, or bitslop, or however you want to refer to the digital pollution that comes with the increased amount of AI content in our channels and For-You feeds. And this is likely to become even more heightened as Facebook moves to pushing AI generated posts into the timelines.

Many are speculating that this is becoming so prevalent that the internet is largely bots pushing out AI generated content, what’s called the “Dead Internet Theory”, which we’ll definitely have to take a look at in a future episode. Hint: the internet is alive and well, it’s just not necessarily where you think it is.

And with all this AI generated content, we’re still facing the risk of the hallucinations, which we talked about, holy moly, over two years ago when we discussed Loab, that brief little bit of creepypasta that was making the rounds as people were trying out the new digital tools. But the hallucinations still highlight one of the primary issues with the AI tools, and that’s the errors in the results.

In order to document and collate these issues, a research team over at MIT has created the AI Risk Repository. It’s available at airisk.mit.edu. Here they have created taxonomies of the causes and domains where the risks may take place. However, not all of these risks are equal. One of the primary ones that gets mentioned is the energy usage for AI.

And while it’s not insignificant, I think it needs to be looked at in context. One estimate of global data center usage was between 240 and 340 terawatt-hours, which is a lot of energy, and it might be rising, as data center usage for the big players like Microsoft and Google has gone up by around 30 percent since 2022.

And that still might be too low, as one report noted that the actual figure could be as much as 600 percent higher. So when you put that all together, that initial estimate could be anywhere between a thousand and two thousand terawatt-hours. But the AI tools are only a fraction of what goes on at the data centers, which include cloud storage and services, streaming video, gaming, social media, and other high-volume activities.

So you bring that number right back down. And how much is AI actually using? The thing is, whatever that number is (say, 300 terawatt-hours times 1.3, times six, divided by five), whatever that result ends up being doesn’t even chart when looking at global energy usage. Looking at a recent chart on global primary energy consumption by source over at Our World in Data, we see that worldwide consumption in 2023 was 180,000 terawatt-hours.
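To make the back-of-the-envelope nature of this explicit, here’s the arithmetic as a short sketch. The 20 percent AI share of data-center load is a placeholder of my own, not a figure from any of the reports cited; the point is just that even a generous upper bound barely registers against the worldwide total.

```python
# Rough bounds for AI's energy use, from the estimates quoted above.
# ai_share is a hypothetical placeholder; the real fraction is exactly
# what's hard to pin down.
low_dc, high_dc = 240, 340        # global data-center usage, TWh/year
growth = 1.3                      # ~30% growth since 2022
report_factor = 7.0               # "600 percent higher" means 7x the estimate
ai_share = 0.2                    # assumed AI fraction of data-center load

low_ai = low_dc * growth * ai_share
high_ai = high_dc * growth * report_factor * ai_share
world_total = 180_000             # global primary energy consumption, TWh (2023)

print(f"AI: roughly {low_ai:.0f} to {high_ai:.0f} TWh/year")
print(f"Share of world energy: {low_ai / world_total:.2%} to {high_ai / world_total:.2%}")
```

Even under the pessimistic assumptions, the share comes out at a fraction of one percent of worldwide consumption.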

The amount of energy potentially used by AI hardly registers as a pixel on the screen compared to worldwide energy usage, yet we’re presented with a picture in the media where AI is burning up the planet. I’m not saying AI energy usage isn’t a concern. It should be green and renewable. And it needs to be verifiable, this energy usage of the AI companies, as there is the risk of greenwashing the work that is done, of papering over their activities’ true energy costs by highlighting their positive impacts for the environment.

And the energy usage may be far exceeded by the water usage that goes to cooling the data centers. As with the energy usage, the amount of water that’s actually going to AI is incredibly hard to disaggregate from all the other activities taking place in these data centers. And this greenwashing, which various industries have long been accused of, might show up in another form as well.

There is always the possibility that the helpful stories that get presented, of AI tools providing for various at-risk and minority populations, are a form of “aidwashing”. And this is something we have to evaluate for each of the stories posted in the AI Positivity Archive. Now, I can’t say for sure that “aidwashing” exists as a term.

A couple of searches didn’t return any hits, so you may have heard it here first. However, while positive stories about AI often do get touted, do we think this is the driving motivation for the massive investment we’re seeing in AI technologies? No, not even for a second. These assistive uses of AI don’t really work with the value proposition for the industry, even though those street uses of technology may point the way forward in resolving some of the larger issues for AI tools with respect to resource consumption and energy usage.

The AI tools used to assist Casey Harrell, the ALS patient mentioned near the beginning of the show, use a significantly smaller model than ones conventionally available, like those found in ChatGPT. The future of AI may be small, personalized, and local, but again, that doesn’t fit with the value proposition.

And that value proposition is coming under increased scrutiny. In a report published on June 25th, 2024, Goldman Sachs questions whether there’s enough benefit for all the money being poured into the field. In a series of interviews with a number of experts in the field, they note how initial estimates of the cost savings, the complexity of tasks that AI is able to do, and the productivity gains that would derive from it are all much lower than initially proposed, or happening on a much longer time frame.

In it, MIT professor Daron Acemoglu forecasts minimal productivity and GDP growth, around 0.5 and 1 percent respectively, whereas Goldman Sachs’ predictions were closer to a 9 percent and 6 percent increase. With estimates varying that widely, what the actual impact of AI over the next 10 years will be is anybody’s guess.

It could be at either extreme or somewhere in between. But the main takeaway from this is that even Goldman Sachs is starting to look at the balance sheet and question the amount of money that’s being invested in AI. And that amount of money is quite large indeed. 

In between starting to record this podcast episode and finishing it, OpenAI raised $6.6 billion in a funding round from its investors, including Microsoft and Nvidia, the largest such round ever recorded. As reported by Reuters, this could value the company at $157 billion and make it one of the most valuable private companies in the world. And this coincides with the restructuring announced a week earlier, which would remove the non-profit’s control and see it move to a for-profit business model.

But my final question is, would this even work? Because it seems diametrically opposed to what AI might actually bring about. If assistive technology is focused on automation and échanger, then the end result may be something closer to what Aaron Bastani calls “fully automated luxury communism”, where the future is a post-scarcity environment that’s much closer to Star Trek than it is to Snow Crash.

How do you make that work when you’re focused on a for-profit model? The tool that you’re using is not designed to do what you’re trying to make it do. Remember, “the street finds its own uses for things”, though in this case that street might be Wall Street. The investors and forecasters at Goldman Sachs are recognizing that disconnect by looking at the charts and tables on the balance sheet.

But their disconnect, the part that they’re missing, is that the driving force towards AI may be more one of ideology. And that ideology is the Californian ideology, a term that’s been floating around since at least the mid-1990s. We’ll take a look at it next episode, returning to the works of Lev Manovich, as well as Richard Barbrook, Andy Cameron, and Adrian Daub, along with a recent post by Sam Altman titled ‘The Intelligence Age’.

There’s definitely a lot more going on behind the scenes.

Once again, thank you for joining us on the Implausipod. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music.

And perhaps somewhat surprisingly, given the topic of our episode, no AI is used in the production of this podcast, though I think some machine learning goes into the transcription service that we use. The show is licensed under a Creative Commons 4.0 Share-Alike license. You may have noticed at the beginning of the show that we described it as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated.

You may have also noted that there was no advertising during the program, and there’s no cost associated with the show. But it does grow from word of mouth in the community, so if you enjoy the show, please share it with a friend or two and pass it along. There’s also a Buy Me a Coffee link on each show at implausipod.com, which will go toward any hosting costs associated with the show. I’ve put a bit of a hold on the blog and the newsletter, as WordPress is turning into a bit of a dumpster fire and I need to figure out how to re-host it. But the material is still up there, and I own the domain. It’ll just probably look a little bit more basic soon.

Join us next time as we explore that Californian ideology, and then we’ll be asking, who are Roads for? And do a deeper dive into how we model the world. Until next time, take care and have fun.



Bibliography

A bottle of water per email: The hidden environmental costs of using AI chatbots. (2024, September 18). Washington Post. https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/

A Note to Our Community About our Comments on AI – September 2024 | NaNoWriMo. (n.d.). Retrieved October 5, 2024, from https://nanowrimo.org/a-note-to-our-community-about-our-comments-on-ai-september-2024/

Advances in Brain-Computer Interface Technology Help One Man Find His Voice | The ALS Association. (n.d.). Retrieved October 5, 2024, from https://www.als.org/blog/advances-brain-computer-interface-technology-help-one-man-find-his-voice

Balevic, K. (n.d.). Goldman Sachs says the return on investment for AI might be disappointing. Business Insider. Retrieved October 5, 2024, from https://www.businessinsider.com/ai-return-investment-disappointing-goldman-sachs-report-2024-6

Broad, W. J. (2024, July 29). Artificial Intelligence Gives Weather Forecasters a New Edge. The New York Times. https://www.nytimes.com/interactive/2024/07/29/science/ai-weather-forecast-hurricane.html

Card, N. S., Wairagkar, M., Iacobacci, C., Hou, X., Singer-Clark, T., Willett, F. R., Kunz, E. M., Fan, C., Nia, M. V., Deo, D. R., Srinivasan, A., Choi, E. Y., Glasser, M. F., Hochberg, L. R., Henderson, J. M., Shahlaie, K., Stavisky, S. D., & Brandman, D. M. (2024). An Accurate and Rapidly Calibrating Speech Neuroprosthesis. New England Journal of Medicine, 391(7), 609–618. https://doi.org/10.1056/NEJMoa2314132

Consumers Know More About AI Than Business Leaders Think. (2024, April 8). BCG Global. https://www.bcg.com/publications/2024/consumers-know-more-about-ai-than-businesses-think

Cosmos. (1980, September 28). [Documentary]. KCET, Carl Sagan Productions, British Broadcasting Corporation (BBC).

Donna. (2023, October 9). Banksy Replaced by a Robot: A Thought-Provoking Commentary on the Role of Technology in our World, London 2023. GraffitiStreet. https://www.graffitistreet.com/banksy-replaced-by-a-robot-a-thought-provoking-commentary-on-the-role-of-technology-in-our-world-london-2023/

Gen AI: Too much spend, too little benefit? (n.d.). Retrieved October 5, 2024, from https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit

Goodman, N. (1976). Languages of Art (2 edition). Hackett Publishing Company, Inc.

Goodman, N. (1978). Ways Of Worldmaking. http://archive.org/details/GoodmanWaysOfWorldmaking

Hill, L. W. (2024, September 11). Inside the Heated Controversy That’s Tearing a Writing Community Apart. Slate. https://slate.com/technology/2024/09/national-novel-writing-month-ai-bots-controversy.html

Hu, K. (2024, October 3). OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia. Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-closes-66-billion-funding-haul-valuation-157-billion-with-investment-2024-10-02/

Hu, K., & Cai, K. (2024, September 26). Exclusive: OpenAI to remove non-profit control and give Sam Altman equity. Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/

Knight, W. (n.d.). An ‘AI Scientist’ Is Inventing and Running Its Own Experiments. Wired. Retrieved September 9, 2024, from https://www.wired.com/story/ai-scientist-ubc-lab/

LaBossiere, M. (n.d.). AI: I Want a Banksy vs I Want a Picture of a Dragon. Retrieved October 5, 2024, from https://aphilosopher.drmcl.com/2024/04/01/ai-i-want-a-banksy-vs-i-want-a-picture-of-a-dragon/

Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., & Ha, D. (2024, August 12). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. arXiv.Org. https://arxiv.org/abs/2408.06292v3

Manovich, L. (2001). The language of new media. MIT Press.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Mickle, T. (2024, September 23). Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm. The New York Times. https://www.nytimes.com/2024/09/23/technology/ai-jim-covello-goldman-sachs.html

Milman, O. (2024, March 7). AI likely to increase energy use and accelerate climate misinformation – report. The Guardian. https://www.theguardian.com/technology/2024/mar/07/ai-climate-change-energy-disinformation-report

Mueller, B. (2024, August 14). A.L.S. Stole His Voice. A.I. Retrieved It. The New York Times. https://www.nytimes.com/2024/08/14/health/als-ai-brain-implants.html

Overview and key findings – World Energy Investment 2024 – Analysis. (n.d.). IEA. Retrieved October 5, 2024, from https://www.iea.org/reports/world-energy-investment-2024/overview-and-key-findings

Roberts, S. (2024, July 25). Move Over, Mathematicians, Here Comes AlphaProof. The New York Times. https://www.nytimes.com/2024/07/25/science/ai-math-alphaproof-deepmind.html

Schacter, R. (2024, August 18). How does Banksy feel about the destruction of his art? He may well be cheering. The Guardian. https://www.theguardian.com/commentisfree/article/2024/aug/18/banksy-art-destruction-graffiti-street-art

Science in the age of AI | Royal Society. (n.d.). Retrieved October 2, 2024, from https://royalsociety.org/news-resources/projects/science-in-the-age-of-ai/

Sullivan, S. (2024, September 25). New Mozart Song Released 200 Years Later—How It Was Found. Woman’s World. https://www.womansworld.com/entertainment/music/new-mozart-song-released-200-yaers-later-how-it-was-found

Taylor, C. (2024, September 3). How much is AI hurting the planet? Big tech won’t tell us. Mashable. https://mashable.com/article/ai-environment-energy

The AI Risk Repository. (n.d.). Retrieved October 5, 2024, from https://airisk.mit.edu/

The Intelligence Age. (2024, September 23). https://ia.samaltman.com/

What is NaNoWriMo’s position on Artificial Intelligence (AI)? (2024, September 2). National Novel Writing Month. https://nanowrimo.zendesk.com/hc/en-us/articles/29933455931412-What-is-NaNoWriMo-s-position-on-Artificial-Intelligence-AI

Wickelgren, I. (n.d.). Brain-to-Speech Tech Good Enough for Everyday Use Debuts in a Man with ALS. Scientific American. Retrieved October 5, 2024, from https://www.scientificamerican.com/article/brain-to-speech-tech-good-enough-for-everyday-use-debuts-in-a-man-with-als/

Soylent Culture

(this was originally published as Implausipod Episode 37 on September 22nd, 2024)

https://www.implausipod.com/1935232/episodes/15791252-e0037-soylent-culture

What is Soylent Culture? Whether it is in the mass media, the new media, or the media consumed by the current crop of generative AI tools, it is culture that has been fed on itself. But of course, there’s more. Have a listen to find out how Soylent Culture is driving the potential for “Model Collapse” with our AI tools, and what that might mean.


In 1964, Canadian media theorist Marshall McLuhan published his work Understanding Media: The Extensions of Man. In it, he described how the content of any new medium is that of an older medium. This can help make it stronger and more intense. Quote, “The content of a movie is a novel, or a play, or an opera.

The effect of the movie form is not related to its programmed content. The content of writing or print is speech, but the reader is almost entirely unaware either of print or of speech.” End quote. 

Sixty years later, in 2024, this is the promise of the generative AI tools that are spreading rapidly throughout society. It’s the end result of 30 years of new media, which has seen the digitalization of anything and everything that provides some form of content on the internet.

Our culture has been built on these successive waves of media, but what happens when there’s nothing left to feed the next wave? It begins to feed on itself, which is why we live now in an era of soylent culture.

Welcome to the Implausipod, an academic podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and in this episode, we’re going to draw together some threads we’ve been collecting for over a year and weave them together into a tapestry that describes our current age, an era of soylent culture.

And way back in episode 8, when we introduced you to the idea of the audience commodity, where media companies’ real product isn’t the shiny stuff on screen, but rather the audiences that they can serve up to the advertisers, we noted how Reddit and Twitter were in a bit of a bind, because other companies had come in and slurped up all the user generated content that was so fundamental to Web 2.0, and fundamental to their business model as well, as they were still in that old model of courting the business of advertisers.

And all that UGC – the useless byproduct of having people chat online in a community that they serve up to those advertisers – got tossed into the wood chipper, had a little bit of glue and paint added, and then got sold back to you as shiny new furniture, just like IKEA.

And this is what the AI companies are doing. We’ve been talking about this a little bit off and on, and since then, Reddit and Twitter have both gone all in on leveraging their own resources, and either creating their own AI models, like the Grok model, or at least licensing and selling it to other LLMs.

In episode 16, we looked a little bit more at that Web 2.0 idea of spreadable media and how the atomization of culture actually took place; how the encouragement of that user generated content by the developers and platform owners produced the very material that’s now feeding the AI models. And finally, there’s our look at nostalgia over the past two episodes, starting with the Dial-up Pastorale and that wistful approach to an earlier internet, one that never actually existed.

All of these point towards the existence of Soylent Culture. What I’m saying is that it’s been a long time coming. The atomization of culture into its component parts, the reduction from soundbites to TikToks to Vines, the meme-ification of culture in general: these were all evidence of it happening.

This isn’t inherently a bad thing. We’re not ascribing some kind of value to this. We’re just describing how culture was reduced to its bare essentials as even smaller bits were carved off of the mass audience to draw the attention of even smaller and smaller niche audiences that could be catered to.

And a lot of this is because culture is inherently memetic. That’s memetic as in memes, not mimetic as in mimesis, though the latter applies as well. But when we say that culture is memetic, I want to build on more than just Dawkins’s original formulation of the meme as a unit of cultural transmission.

Because, honestly, the whole field of anthropology was sitting right over there when he came up with it. A memetic form of culture allows for the combination and recombination of various cultural components in the pursuit of novelty, and this can lead to innovation in the arts and the aesthetic dimension.

In the digital era, we’ve been presented with a new medium. Well, several perhaps, but the underlying logic of the digital media – the reduction of everything to bits, to ones and zeros that allow for the mass storage and fast transmission of everything anywhere, where the limiting factors are starting to boil down to fundamental laws of physics – 

this commonality can be found across all the digital arts, whether it’s in images, audio, video, gaming. Anything that’s appearing on your computer or on your phone has this underlying logic to it. And when a new medium presents itself due to changing technology, the first forays into that new medium will often be adaptations or translations of work done in an earlier form.

As noted by Marshall McLuhan at the beginning of this episode, it can take a while for new media to come into its own. It’ll be grasped by the masses as popular entertainment and derided by the high arts, or at least by those who are fans of them. Fredric Jameson, who we talked about a whole lot last episode on nostalgia, noted, quote, “it was high culture in the fifties that was authorized, as it still is, to pass judgment on reality, to say what real life is and what is mere appearance. And it is by leaving out, by ignoring, by passing over in silence and with the repugnance one may feel for the dreary stereotypes of television series that high art palpably issues its judgment.” End quote.

So, the new medium, or works that are done in the new medium, can often feel derivative as it copies stories of old, retelling them in a new way.

But over time, what we see happen again and again are fresh stories starting to be told by those familiar with the medium, creators who can leverage its strengths and weaknesses, telling tales that reflect their own experiences, their own lives, and the lives of people living in the current age, not just reflections of earlier tales.

And eventually, the new medium finds acceptance, but it can take a little while.

So let me ask you, how long does it take for a new medium to be accepted as art? First they said radio wasn’t art, and then we got War of the Worlds. They said comic books weren’t art, and then we got Maus, and Watchmen, and Dark Knight Returns. They said rock and roll wasn’t art, and we got Dark Side of the Moon and Pet Sounds, Sgt. Pepper’s, and many, many others. They said films weren’t art, and we got Citizen Kane. They said video games weren’t art, and we got Final Fantasy VII and Myst and Breath of the Wild. They said TV wasn’t art, and we got Oz and Breaking Bad and Hannibal and The Wire. And now they’re telling us that AI generated art isn’t art, and I’m wondering how long it will take until they admit that they were wrong here, too.

Because even though it’s early days, I’ve seen and heard some AI generated art pieces that would absolutely count as art. There are pieces that produce an emotional effect, they evoke a response, whether it’s whimsy or wonder or sublime awe, and for all of these reasons, I think the AI generated art that I’ve seen or experienced counts.

And the point at which creators in a new medium produce something that counts as art often happens relatively early in the life cycle of that new medium. In all of the examples I gave, things like War of the Worlds, Citizen Kane, Final Fantasy VII, these weren’t the first titles produced in that medium, but they did come about relatively early, once creators became accustomed to the cultural form.

As newer creators begin working with the medium, they can take it further, but there’s a risk. Creators that have grown up with the medium may become too familiar with the source material, drawing on representations from within itself. And we can all think of examples of this, where writers on police procedurals or action movies have grown up watching police procedurals and action movies, and they simply endlessly repeat the tropes that are foundational to the genre.

The works become pastiches, parodies of themselves, often unintentionally, and they’re unable to escape from the weight of the tropes that they carry. This is especially evident in long running shows and franchises. Think of later seasons of The Simpsons, if you’ve actually watched recent seasons of The Simpsons, compared to the earlier ones.

Or recent seasons of Saturday Night Live, with the endlessly recycled bits, because we really needed another game show knock off, or a cringy community access parody. We can see it in later seasons of Doctor Who, and Star Trek, and Star Wars, and Pro Wrestling as well, and the granddaddy of them all, the soap opera.

This is what happens with normal culture when it is trained on itself. You get Soylent Culture. 

Soylent Culture is this: the self-referential culture that has fed on itself, an ouroboros of references that always point at something else. It is culture comprised of rapid-fire clips coming at the audience faster than a Dennis Miller-era Saturday Night Live Weekend Update, or the speed of a “Weird Al” Yankovic polka medley.

It is 30 years of Simpsons Halloween episodes referring to the first 10 years of Simpsons Halloween episodes. It is hyper-referential titles like Family Guy and Deadpool, whether in print or film, throwing references at the audience rapid-fire with rhyme and reason but so little of either, such that works like Ready Player One start to seem like the inevitable result of the form.

And I’m not suggesting that the above works aren’t creative. They’re high examples of this cultural form; of soylent culture. But the endless demand for fresh material in an era of consumption culture means that the hyper-referentiality will soon exhaust itself and turn inward. This is where the nostalgia that we’ve been discussing for the previous couple episodes comes into play.

It’s a resource for mining, providing variations of previous works to spark a glimmer in the audience’s eyes of, hey, I recognize that. But even though these works are creative, they’re limited, they’re bound to previous, more popular titles, referring to art that was more widely accessible, more widely known.

They’re derivative works and they can’t come up with anything new, perhaps. 

And I say perhaps because there’s more out there than we can know. There’s more art that’s been created than we can possibly experience in a lifetime. There’s more stuff posted to YouTube in a minute than you’ll ever see in your 80 years on the planet.

And the rate at which that is happening is increasing. So, for anybody watching these hyper-referential titles, if their first exposure to Faulkner is through Family Guy, or to Diogenes is through Deadpool, then so be it. Maybe their curiosity will inspire them to track that down, to check out the originals, to get a broader sense of the culture that they’re immersed in.

And if they don’t get the joke, they can look around, wonder why the rest of the audience is laughing, and say, you know, maybe it’s a me thing, maybe I need to learn more. And that’s all right. It can lead to an act of discovery; of somebody looking at other titles and curating them, bringing them together, and developing their own sense of style and working on that to create an aesthetic.

And that’s ultimately what it comes down to. Is art an act of learning and discovery and curation? Or is it an act of invention and generation and creation? Or are these all components of it? If an artist’s aesthetic is reliant on what they’ve experienced, well then, as I’ve said, we’re finite, tiny creatures.

How many books or TV shows can you take in over a lifetime to incorporate into your experience? And if you repeatedly watch the same thing, are you limiting yourself from exposure to something new? And this is where the generative art tools come back into play: the AI tools that have been facilitated by the digitalization of everything during Web 1.0, and the subsequent slurping up of everything to feed the models.

Because the AI tools expand the realm of what we have access to. They can draw from every movie ever made, or at least every movie digitalized, not just the two dozen titles that the video store clerk happened to watch on repeat while they were working on their script, before finally following through and getting it made.

In theory, the AI tools can aid the creativity of those engaging with them, and in practice we’re starting to see that as well. It comes back to that question of whether art is generative or whether it’s an act of discovery and curation. But there’s a catch. Like we said, Soylent culture existed long before the AI art tools arrived on the scene.

The derivative stories of soap operas and police procedurals and comic books and pulp sci-fi. But it has become increasingly obvious that the AI tools facilitate Soylent culture, drive it forward, and feed off of it even more. The AI tools are voracious, continually wanting more, needing fresh new material in order to increase the fidelity of the model.

That hallowed heart that drives the beast that continually hungers. But you see, the model is weak. It is vulnerable, like the phylactery of a lich, hidden away somewhere deep.

The one thing the model can’t take too much of is itself. Model collapse, the very real risk of a GPT being trained on text generated by a large language model, was identified by Shumailov et al., and is, quote, “ubiquitous among all learned generative models”, end quote. Model collapse is a risk that creators of AI tools face in further developing those tools.

Quoting again from Shumailov: “model collapse is a degenerative process affecting generations of learned generative models in which the data they generate end up polluting the training set of the next generation. Being trained on polluted data, they then misperceive reality.” End quote. This model collapse can result in the models ‘forgetting’ or ‘hallucinating’.

Two terms drawn not just from psychology, but from our own long history of engaging with and thinking about our own minds and the minds of others. And we’re applying them here to our AI tools, which – I want to be clear – aren’t thinking, but are the results of generative processes, of taking lots of things and putting them together in new ways, which is honestly what we do for art too.
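The degenerative loop Shumailov et al. describe can be illustrated with a toy sketch (my own construction, not their experimental setup): fit a simple Gaussian model to some data, sample from the fit, then refit on those samples, and repeat. Over enough generations the fitted spread drifts toward zero, a miniature analogue of the model ‘forgetting’ the tails of the distribution it started from.

```python
import random
import statistics

def collapse_run(generations=200, n=10):
    # One toy run: each "generation" fits a Gaussian (mean, stdev)
    # to n samples drawn from the previous generation's fit.
    mu, sigma = 0.0, 1.0                   # generation zero: the "real" data
    for _ in range(generations):
        samples = [random.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(samples)     # model trained on its own output
        sigma = statistics.stdev(samples)
    return sigma

random.seed(0)
final_sigmas = [collapse_run() for _ in range(21)]
# The fitted spread typically collapses far below the original sigma of 1.0:
# the model has "forgotten" the tails of the starting distribution.
print(round(statistics.median(final_sigmas), 4))
```

The mechanism is the point here, not the numbers: once generated output pollutes the next generation’s training data, the errors compound rather than average out.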

But this ‘forgetting’ can be toxic to the models. It’s like a cybernetic prion disease, like the cattle that developed BSE by being fed feed that contained parts of other ground up cows that were sick with the disease. The burgeoning electronic minds of our AI tools cannot digest other generated content.

And in an era of Soylent Culture, where there’s a risk of model collapse, where these incredibly expensive AI tools require mothballed nuclear reactors to be brought back online to provide enough power to service them, and thirst for fresh water like a marathon runner in the desert, the human generated content of the earlier, pre-AI web becomes a much more valuable resource. It’s the digital equivalent of the low-background steel that was sought after for the creation of precision instruments following the era of atmospheric nuclear testing, when all the above-ground and newly mined ore was too irradiated for use.

And it should be noted that we’re no longer living in that era because we stopped doing atmospheric nuclear testing. For some, the takeaway from that may be that to stop an era of Soylent Culture, we need to stop using these AI tools completely. But I think that would be the wrong takeaway, because Soylent Culture existed long before the AI tools did, long before new media, as shown by the soap operas and the like.

It’s something that’s tied to mass culture in general, though new media and the AI tools can make Soylent Culture much, much worse, let me be clear. Despite this, and despite the speed with which all this is happening, the research on model collapse is still in its early days. The long-term ramifications of model collapse and its consequences will only be learned through time.

In the meantime, we can discuss some possible solutions to dealing with Soylent Culture. Both AI generated and otherwise. If Soylent Culture is art that’s fed on itself, then the most effective way to combat it would be to find new stuff. To find new things to tell stories about. To create new art about.

Historically, how has this happened with traditional art? Well, we’ve hinted at a few ways throughout this episode, even though, as we noted, in an era of mass culture, even traditional arts are not immune from becoming soylent culture as well. One of the ways we get those new artistic ideas is through mimesis, the observation of the world around us, and imitating that, putting it into artistic forms.

Another way we get new art is through soft innovation when technologies enhance or change the way that we can produce media and art, or where art inspires the development of new technology as they feed back and forth between each other, trading ideas. And as we’ve seen throughout this episode and throughout the podcast in general, new media and new modes of production can encourage new stories to be told as artists are dealing with their surroundings and whatever the current zeitgeist is and putting that into production with whatever media that they have available.

As our world and society and culture changes, we’re going to reflect upon our current condition and tell tales about that to share with those around us. And as we noted much earlier in this particular episode, that familiarity with a form, a technical form, allows those who are using it to innovate within that form, creating new, more complex, better produced and higher fidelity works in whatever medium they happen to be choosing to work in.

And ultimately that comes down to choice: by the artists and the audience and the associated industries that allow the audience to experience those works, whether they are audio, visual, tactile, or experiential, like games, any version of art that we might come in contact with. The generation and invention in the process is important to be sure, but the curation and discovery is no less important within this process.

And this is where humans with a sense for aesthetics and style will still be able to tell. How would an AI tool discover or create? How could it test something that’s in the loop? The generative AI tools can’t tell. They have no sense. They can provide output, but no aura, no discernment. Could an AI run a script that does A-B testing on an audience for each new generated piece of art to see how they react, with the most popular one getting put forward?

I guess so, it’s not outside the realm of possibility, but that isn’t really something that they’re able to do on their own, or at least I hope not. 

Would programming in some variance and randomness in the AI tools allow for them to avoid the model collapse that comes with ingesting soylent culture in much the same way that we saw with the reveries for the hosts in the Westworld TV series?

Well, the research by Shumailov et al that we mentioned earlier suggests that that’s possibly not the case. I mean, it might help with the variation, perhaps, but that doesn’t help with the selection mechanisms, the discernment. 

AI is a blind watch trying to become a watchmaker, making new watches. The question might be: what would an AI even want with a watch, anyways?

Thank you for joining us on the Implausipod. I’m your host, Dr. Implausible. We’ll explore more on the current state of AI art tools and their role as assistive technologies in our next episode, called AI Refractions. But before we get there, we need to return to our last episode, episode 36, and offer a postscript on that one.

Even though it’s been only a week, as of the recording of this episode, September 22nd, 2024, we regret to inform you of the passing of Professor Fredric Jameson, who was the subject of episode 36. As we noted in that episode, he was a giant in the field of literary criticism and philosophy, and a long time professor at Duke University.

Our condolences go out to his family and friends. Rest in peace. If you’d like to contact the show, you can reach me at drimplausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 share-alike license.

You may have noticed at the beginning of the show that we described the show as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated. You may have also noted that there was no advertising during the program, and there is no cost associated with the show, but it does grow from word of mouth of the community, so if you enjoy the show, please share it with a friend or two and pass it along. There’s also a Buy Me a Coffee link on each show at implausipod.com, which will go to any hosting costs associated with the show. Over on the blog, we’ve started up a monthly newsletter. There will likely be some overlap with future podcast episodes, and newsletter subscribers can get a hint of what’s to come ahead of time, so consider signing up and I’ll leave a link in the show notes.

Until then, take care and have fun.

Bibliography

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2024). The Curse of Recursion: Training on Generated Data Makes Models Forget (No. arXiv:2305.17493). arXiv. https://doi.org/10.48550/arXiv.2305.17493

Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R., & Gal, Y. (2024). AI models collapse when trained on recursively generated data. Nature, 631(8022), 755–759. https://doi.org/10.1038/s41586-024-07566-y

Snoswell, A. J. (2024, August 19). What is ‘model collapse’? An expert explains the rumours about an impending AI doom. The Conversation. http://theconversation.com/what-is-model-collapse-an-expert-explains-the-rumours-about-an-impending-ai-doom-236415

Nescience and Excession: Jameson and Nostalgia

(this was originally published as Implausipod Episode 36 on September 15, 2024)

https://www.implausipod.com/1935232/episodes/15676490-e0036-nescience-and-excession-jameson-and-nostalgia

Further detail looking at The Nostalgia Curve from Episode 35, and comparing it with Fredric Jameson’s “Nostalgia for the Present” (1989) to see what the established literature says about the topic. We go into Jameson’s writing on science fiction and Philip K. Dick’s “Time Out of Joint” (1959), and take a deep look at the Rumsfeld Matrix in order to introduce the idea of nescience: the intentional act of not engaging with a known-unknown.


Let me ask you a question. Do you ever have something that you know you need to know, but you know you can’t know just yet? Yeah, me too. In February of 2002, the world was introduced to the concept of Unknown Unknowns by then U.S. Secretary of Defense Donald Rumsfeld.

“As we know, there are known knowns. There are things we know we know. We also know there are known unknowns. That is to say, we know there are some things we do not know. But there are also unknown unknowns. The ones we don’t know, we don’t know.” 

Because of the way it was presented, and the seeming incongruity of it, it instantly became fodder for the comedians on late night TV.

But it is one of those things that makes sense if you stop to think about it for even more than a moment. As Rumsfeld stated, unknown unknowns are those things that we don’t know that we don’t know. But here we’re talking about something a little bit different. These are things that we know we don’t know.

More like the known unknowns that Rumsfeld talked about back then. But rather than rushing out and finding out what it’s all about immediately, we hold off for a little bit longer in order to get our own thoughts down. This is an act of nescience, and when it comes to the nostalgia curve that we talked about last episode, I had to hold off for a little while, but now it’s time to fill in those gaps in this episode of The Implausipod.

Welcome to The Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And early on, when I began looking at Nostalgia in the beginning of August, it became very clear that there were some key authors that had written on Nostalgia. Authors that I was aware of, but authors I’d never engaged with yet.

So in order to get my own thoughts down and kind of get everything together, I had to engage in that act of nescience, of not looking at what those authors had written until I had everything down that I wanted to say for myself. And this act of nescience comes from having a pretty good idea of what the limits of my knowledge are and where the things that I know come from.

Now, this may be a side effect of working on a PhD, of developing that body of knowledge and intensely studying things, but it also comes from some reflective practice of looking at what you know, citing the information and keeping track of everything. So when it came to looking at nostalgia, I knew that Fredric Jameson had written on nostalgia in a work called “Nostalgia for the Present.”

I’d seen the title before, but I had never engaged with it directly. So I had to put that aside as a TBR: to be read. So, nescience. Now, nescience is a lack of knowledge, full stop. It’s contrasted with something like ignorance, which is the act of not knowing. And you might be saying, well, isn’t my intentional act of not engaging with Jameson an act of ignorance?

Well, kinda. The popular, or, you know, lay understanding of ignorance is generally that wilful stupidity that happens. And here we’re trying to describe an intentional act of delayed learning, and I wanted to dissociate it from all the negative connotations that ignorance has. Nescience is the unknown, in this case both the known unknown and the unknown unknown that Rumsfeld spoke of.

The thing that we don’t know that we don’t even know. Many of the mysteries of the universe would fall within this category, for we are tiny and small creatures on a little rock far off in a distant galaxy. Besides, nescience sounds better, and we’ll lean towards the poetic where we can. There might be lots of things we’re all nescient about.

Often this comes up in the terms of, like, media titles, like books we haven’t read, TV shows we haven’t seen, movies we haven’t watched yet, games we haven’t played. We might know of them, and given the way modern marketing works, it might be impossible to escape them, but there could be things out there that we’ve never ever seen.

Even though we’ve seen so many clips and memes and spoofs and parodies that it feels like we’ve seen the whole movie. For me, this includes things like Titanic and Schindler’s List, Frozen, American Psycho, Sopranos, Lost, and the list goes on and on and on. Some of the titles that I haven’t seen might surprise you, but there’s a lot of stuff out there, and we’re all constrained with respect to time and resources.

Our time on this planet is finite, after all, and there are more videos uploaded to YouTube every single minute than can be watched in a human lifetime, so we’ve gotta pick and choose, right? And sometimes what we pick and choose is dependent on what we’ve seen in the past, which reminds me of that Rumsfeld bit from the beginning.

Now I’ve put a copy of the Rumsfeld Matrix up on the blog because describing something that’s inherently visual often seems like a fruitless task, but there’s many copies of it floating around. So a quick trip to the old Bing there should find you some results. Remember we don’t Google in 2024. But within that matrix, we end up with four categories, the known-knowns, the stuff that we know that we know, stuff we can recall readily and state with confidence.

We have the known unknowns, and these are things that we know we don’t know; we’re aware they might be out there. It could be a book or a movie or whatever, as we mentioned before. This also includes things like weather, travel, external events that happen while you’re not paying attention, that kind of stuff.

And you might not know about it yet, but you’ll find out soon. And then there’s the unknown unknowns, things we don’t know that we don’t know. These are outside of context problems. They’re outside our ability to even imagine in some cases. And we’ll get into the details of these in just a moment. And there’s a fourth category that Rumsfeld left out that’s rather obvious.

It’s the unknown knowns. Philosopher Slavoj Žižek sniffed these out, and these are the things that we are unaware that we know. These could be tacit knowledge, or instinctual knowledge that we would struggle to explain, or things that we’ve forgotten that were part of our memory. And according to Žižek, they’re also items which one intentionally refuses to acknowledge.

Like, I can’t know that. These include disavowed beliefs and other things we pretend not to know about, even though they’re probably part of our public values. This can be hazardous in some cases. But Žižek has somewhat of a narrow focus here. In the unknown knowns, one of the key elements is that of memory, and memory ties directly into nostalgia.

Memories can be with us constantly, but they often can lay dormant and come rushing back to us in a flood if they’re triggered by something. And those groups that are trying to operationalize the nostalgia curve, and often for monetary gain, are doing a whole lot to bounce up and down on those triggers.

Trying to evoke or elicit long forgotten memories of childhood, of toys or cartoons, of lazy Saturday mornings and long summer days, and market them or re market them to an older, more mature, and gainfully employed audience that’s been carefully diagnosed and segmented. And this is where a lot of the literature on nostalgia resides.

And why I had to engage in an act of nescience. Fredric Jameson is a literary critic and philosopher who, as of the recording of this episode in 2024, is the director of the Institute for Critical Theory at Duke University. He’s written a lot in a lot of fields, most notably on things like postmodernism and capitalism, and “Nostalgia for the Present” was one of his key works.

Originally published in the South Atlantic Quarterly in 1989, it’s been reprinted in various books and collections of his since, such as 1991’s Postmodernism, or, The Cultural Logic of Late Capitalism, which, given some of the topics that we’ve talked about here on this podcast, you might be surprised I haven’t read either.

But, as we said, time is finite, and we come to these things as we’re meant to. So for me, that intentional act of not engaging with it, that act of nescience, was me understanding that, yes, he’s written a lot on it, but I wanted to get my own thoughts on nostalgia down as best I could, which we’ve seen in the previous episode of the podcast, as well as in a number of blog posts over on the blog. Getting those down helped me to get a sense of where I am and how that would be in relation to what Jameson has written. So to quickly summarize our last episode: for us, nostalgia is representational in a mimetic way. You might say that nostalgia is an assemblage that puts various parts together, and that the perceived value of the nostalgia of a property can impact the financing and development of that property.

This value is subjective and also relative, so different producers might value it differently. Nostalgia is often subjective and can be constraining, because you’re limited by what’s gone before. Nostalgia can be contrasted with novelty, or that idea of something new. And real nostalgia can be the audience longing for something that was actually produced, whereas imagined nostalgia is something the audience thinks they’ve seen before. And nostalgia can be organic, coming from the audience, or manufactured by the producer. Finally, we could say that nostalgia is also substrate neutral, meaning it can happen in almost any field, especially with respect to the arts.

But it’s also transferable. It’s a transmedia property. That is, if I have nostalgia for Pokemon, for instance, I might be interested in a Pokemon video game, even though I only really watched the cartoons when I was young. I don’t know why I’m referencing Pokemon specifically; it’s clearly after my time. But in any event, what does Jameson have to say about nostalgia?

Nostalgia for the Present is a piece of media criticism where Jameson looks at the role of nostalgia in three works: Philip K. Dick’s novel Time Out of Joint from 1959, Jonathan Demme’s Something Wild from 1986, and David Lynch’s Blue Velvet, also from 1986. The three titles comprise a unique selection of content, or at least as diverse a one as one might choose to analyze on any given topic, I suppose, though given the breadth of what we cover here on this channel, I shouldn’t be one to criticize, or throw stones in glass houses and all that.

Time Out of Joint is a faux time travel story where a man who is apparently trapped in the 1950s notices small differences and errors in reality, which leads him to suspect that something weird is going on. Kind of like the déjà vu moment in The Matrix. These themes are typical of Philip K. Dick.

They’re what we’ve come to expect: the representations of reality, the notion that there’s something behind the scenes, the wavering nature of it all. The false consciousness that often pervades his work. Looking at it in 2024, we’ve seen so many of those elements in the adaptations of his other works: Blade Runner, A Scanner Darkly, Total Recall, Minority Report, and more.

Time Out of Joint seems almost unique among Philip K. Dick’s works in that it hasn’t been adapted for film or television yet. Truth be told, it has been copied many, many times before. In Time Out of Joint, the protagonist’s sense that there’s something else going on behind the reality is quite astute: he is captured in a Potemkin village of the 1950s, rebuilt in 1997 during an interstellar civil war.

It’s not quite like the 1997 in our reality, of course; we’re obviously nowhere near to interstellar capabilities. And like a lot of older science fiction, it’s now firmly rooted in our past, in a future that will not come to be. At times, Time Out of Joint feels more like a rough draft of The Truman Show, the 1998 movie starring Jim Carrey, where the apparatus moves around to ensure the world is static for this one particular man, and this feeds into our various narcissistic main character desires.

And while The Truman Show isn’t quite a direct copy, the film clip that best describes Time Out of Joint would be the epilogue to Captain America: The First Avenger, where he wakes in a room and recognizes from the radio broadcasts that things are not quite what they seem. If there was a CliffsNotes version of this 220 page novel, that would probably be it.

But, there’s more. Jameson notes how Time Out of Joint is set up to be a model of the 1950s, as something that the protagonist will accept, which again echoes The Matrix, in the machines’ creation of the late 1990s as their virtual world in order to pacify the humans that are kept in the endless rows of creches.

So aside from elements from Time Out of Joint appearing in at least three major motion pictures, it’s pretty much like many of the works of Philip K. Dick, which have been copied so many times, at least six by our count, that it’s hard to recognize the original source. Maybe that speaks to why it hasn’t been adapted anywhere else, or at least not directly.

As Jameson states, Time Out of Joint, quote, is a collective wish fulfillment and the expression of a deep unconscious yearning for a simpler and more human social system, a small town utopia very much in the North American frontier tradition, end quote. And this is where that nostalgia comes in. We mentioned last episode how you can have cultural and social and political nostalgia for those simpler times when things were kind of more manageable.

And that yearning can be felt by a lot of people, which means it could be operationalized and mobilized and directed to various purposes. But again, this is nothing new. Jameson was writing in 1989 about something from 1959, and this cycles back much, much further. Jameson wrote about two other titles, too, of course, Demme’s Something Wild and Lynch’s Blue Velvet, and while they’re fantastic films, they’re here mostly to bolster Jameson’s case and provide further evidence that allowed him to triangulate towards the element of nostalgia that he’s looking for, as our familiarity and focus is more towards the science fiction side of things here on the ImplausiPod.

We’ll stick towards that and see what Jameson has to say about science fiction.

For Jameson, science fiction is a “category.” And if you’re hearing that with me making bunny ear signs, then you’re hearing correctly. Nowadays, we might just want to call it a genre. One that came about during that Eisenhower period, a period of the U.S. conquering space and battling communists, and all the ideology that’s inherently bound within the literature from that era.

The category might be bigger, going large to include some real lit, like More’s Utopia and others. Or it might be more tightly bound to the pulp novels. Personally, I like the expansive view of sci fi for our point of view, one that loops in Shelley’s Frankenstein by definition and intent, and starts maybe with Jules Verne writing Journey to the Center of the Earth in 1864, because that scoops up H. G. Wells’s stuff as well and gives us a really strong foundation for what science fiction is. The classic era of science fiction is probably that 1950s era, the golden age of rocket ships and the like. A particular vision of the future, both technologically and aesthetically. An aspirational view of the future that helps us come to terms with and process our own history, and understand how we fit within the current era. Basically, how did we get to now? Jameson contrasts sci fi with the historical novel, a cultural form that, along with costume films and period dramas on TV, reflected the ideology of the feudal classes and had fallen off throughout the late 20th century as the then-new middle class sought something different, something alien that amped up their own achievements.

Sci fi came on the scene and said, hold my ray gun, I got this. The historical novel failed not simply due to the feudalist ideals, but because according to Jameson, quote In the postmodern age, we no longer tell ourselves our history in that fashion, but also because we no longer experience it that way, and indeed, perhaps no longer experience it at all.

End quote. For Jameson, at least at the time, our mediated nature meant that we were living in an ahistorical age. And while this may have been true in 1989, I don’t know if that’s any longer the case. The recent rise of historicism and historicity in its various forms in the 21st century may suggest that the various authors talking about the rise of technofeudalism might be more right than we suppose.

But there’s another question there. Did the return to those historical feudal ideals, the types of stories you tell about kings and queens, become more popular because we are living in that type of age? Or did they help bring it about? Which came first: Shakespeare in Love and The Lord of the Rings, or technofeudalism? Hard to say, but this feels like something we should save for the ongoing debate about fantasy versus sci fi, and we’ll touch on that at a later point in time. For Jameson, science fiction is an aspirational vehicle for the masses who are rejecting the previous historical viewpoint.

Compared to the historical novel, quote, “Science fiction equally corresponds to the waning or the blockage of that historicity, and particularly in our own time in the postmodern era, to its crisis and paralysis, its enfeeblement and repression.” End quote. There are a lot of reasons why this occurs, and they have less to do with the content, though there are parts of that too, to be sure, or at least particular aesthetic choices that are made, and more to do with the socio-economic conditions of the day in post-World War II USA, North America, and the United Kingdom. And again, this is another place where nostalgia starts to come in, because both historical novels and sci fi have a tie to the imagination, an imagined past or an imagined future. They can use representation in their relationship with the past or future, but they are really a perception of the present as history, a way that we can look at our own situation from a few steps removed.

This is the conceit that’s seen throughout the Star Treks, the Star Wars, the Warhammers, the Aliens: the other is but an aspect of ourselves, our society, and our culture that we are trying to take a closer look at. And in Time Out of Joint, that society that we’re trying to take a closer look at is the 1950s.

Philip K. Dick was writing Time Out of Joint in 1959, or at least it was published then; he was probably writing it a little bit earlier. He was looking at the decade that had just passed and choosing what the essential elements might look like from the perspective of someone in 1997, the year of the fictional interstellar war in the novel, and for the most part, he got it right.

Jameson presents us with a list of things that evoke the 1950s from Time Out of Joint: Eisenhower, Marilyn Monroe, PTAs, and the like. And if the list that Jameson gives us reads like a certain Billy Joel song, that’s probably not by accident. Though “We Didn’t Start the Fire” also being released in 1989 is almost certainly coincidental.

Nostalgia can often look like a collection of stuff in some hoarder’s back room. The items are referents to that era, not facts per se, but ideas about those facts. The question Jameson asks, the thesis for his whole paper, is did the period see itself this way? And Philip K. Dick’s choices seem to suggest that the answer is yes.

There’s a realistic feel to how PKD describes the 1950s, a feel that arises from the cultural references that are used. And Jameson notes that if there is a quote unquote realism in the 50s, in other words, it is presumably to be found there in mass cultural representation, the only kind of art willing and able to deal with the stifling Eisenhower realities of the happy family in the small town, of normalcy and non-deviant everyday life, end quote.

So for a spectator looking back from the 1980s, the image of the 1950s comes from the pop culture artifacts that the people in the 1950s understood themselves by as well. We’re looking at them from a distance, through a scanner, darkly. And one that’s getting darker over time.

What this whole process accomplishes is a process of reification. The reality gets blurred by the nostalgic elements, and this ends up becoming the signifier that represents the whole. So our sense of ourselves and of any moment in history may have little or nothing to do with reality. The objective reality, that is.

Which is the biggest Philip K. Dick style head trip that you’ve ever felt before. It’s hard to put into words, though all the works of Philip K. Dick, and all the Philip K. Dickensian-inspired media out there, keep trying to show us and tell us over and over again. It’s tricky, though. There’s a lot of speculation that’s required, and Time Out of Joint is ultimately a piece of speculative fiction. Quote: “It is a speculation which presupposes the possibility that at an outer limit, the sense people have of themselves and their own moment of history may ultimately have nothing whatsoever to do with its reality.” End quote. How we think of ourselves, our histories, and our generations is only tied to a fraction of the things that are out there.

And much of it may be that imagined nostalgia we talked about a little while ago. There’s a whole lot of unknowns out there, and all of us are privy to only a small fraction of what’s available. And this brings us back to what we were talking about near the beginning. Now, what did Fredric Jameson have to say about nostalgia in total, and how does that connect with the concept of the nostalgia curve that we introduced last episode?

Are there elements of the Jamesonian idea of nostalgia and what he was talking about that at least connect with us? We can kind of see that connection in at least three or four of our categories. We can see how our idea of nostalgia being a representation of a thing, rather than being the thing itself, is fundamental to Jameson’s work and carries on throughout it.

The idea of a thing, not the thing itself. And for Jameson, those mediated examples coming from pop culture versions, then informing the quote unquote generational logic for successive viewers, is important too. It connects with our idea of imagined nostalgia, the kind that the audience thinks that they are remembering rather than what they actually experienced.

Jameson himself doesn’t really distinguish between different kinds of nostalgia, at least not in the ways that we do. He doesn’t look at the source of where it is produced, but looks at what the nostalgia is for, hence the title, obviously. A 1980s audience looking for the imagined view of the 1950s or an interstellar warrior in the text longing for their imagined view of the same decade, or a writer from that decade of the 1950s constructing a longing for the decade while it is still happening.

These are all nostalgia writ large to Jameson, whereas we’ve increased the granularity a little bit to fine tune our analysis in the nostalgia curve last episode. Jameson looks at the construction of nostalgia in various media, novels and film in this case, though there could be others, and this ties in with our idea of substrate neutrality, that the nostalgia curve could be a transmedia property and not particularly tied to any one kind or another.

So whether we’re looking at Pokemon or action figures or whatever, we can see it across the various realms. The elements of nostalgia that we looked at that were focused on value are largely absent from Jameson’s work. They’re not completely absent, but he was looking for reification of ideology that takes place via nostalgia and not necessarily at the production culture, political economy elements that we’re looking at that tie back directly to the development of new titles in Hollywood or beyond.

Now, there’s more to nostalgia than just the mimetic aspects, though, and we’ll need to take a look at the connection that nostalgia has with memory. The other place that nostalgia is showing up is as part of our Soylent Culture, which we mentioned earlier: the various bits and pieces of past properties that show up, or are dredged back up, by the cultural saves that are our generative AI tools and the platforms that encourage their use as spreadable media.

Media theorist Marshall McLuhan talked about how new media is built out of the pieces of the old, and nowhere is that more true than our current online culture. So we’ll have to take a deeper look at this next episode. I hope you join us then, on the Implausipod.

Once again, thank you for joining us on the Implausipod. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 share-alike license. You may have noticed at the beginning of the show that we described the show as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated. You may have also noted that there was no advertising during the program, and there’s no cost associated with the show.

But it does grow from word of mouth of the community, so if you enjoy the show, please share it with a friend or two, and pass it along. There’s also a Buy Me a Coffee link on each show at implausipod.com, which will go to any hosting costs associated with the show. Over on the blog, we’ve started up a monthly newsletter.

There will likely be some overlap with future podcast episodes, and newsletter subscribers can get a hint of what’s to come ahead of time, so consider signing up and I’ll leave a link in the show notes. Until then, take care and have fun.