TikTok Tribulations

(this was originally published as Implausipod E0033 on June 10th, 2024)

What happens if your community disappears? How do online groups deal with the challenges of maintaining their social ties across fickle and fleeting platforms? And are there lessons to be learned by the TikTok creators from the online MMO communities that were shut down in the early 2000s?

https://www.implausipod.com/1935232/episodes/15146242-e0033-tiktok-tribulations


[00:00:00] DrI: On the last episode of the ImplausiPod, we asked what happens if you build an app and the community is still toxic, like, whoops, what do you do next? But there’s a darker side to that question: what if you built a successful community and then it disappeared? On April 24th, 2024, US President Joe Biden signed a foreign aid package bill that included legislation demanding that ByteDance, the parent company of TikTok, divest itself of those holdings to an American-owned firm or face a ban in the United States. If the sale doesn’t happen within 270 days, TikTok would be prevented from appearing in app stores, as well as on certain internet hosting services. Now, of course, the story isn’t over, as this will be contested and appealed, but for those individuals who had developed or participated in communities on TikTok, it can be a significant loss.

A loss that we’re going to look at in episode 33 of the Implausipod.

Welcome to the Implausipod, an academic podcast about the intersection of art, technology and popular culture. I’m your host, Dr. Implausible. And today we’re talking about the closure of online communities. It’s rare that a thriving online community is shut down, or explicitly banned. Often what happens is that a new competing service opens up and the user base dwindles until all that is left is a shell of the former community.

Other times, the service gets sold off, changing hands, and the community gets parceled off, the data being sold, the policy changes making the community lose interest and find alternatives. The latter can be seen in services like Yahoo Groups, Tumblr, Google Groups, Google Wave, Google Plus. There might be a bit of a trend there, is what I’m saying.

Examples of services actively shutting down can be seen more often in the video game market, especially in MMOs. The glut of MMOs in the early 21st century, all built on the assumption of online play and needing an engaged community to drive the operation, led to the abandonment of that community when the service shut down, the game was canceled, or the servers were closed.

Now, in some cases, the community was strong and was able to keep things going after a fashion, but in most cases, closure of the servers meant the end of the game, and the dispersal of the members of the community. Sometimes the community knew it was coming and were able to go out with a blaze of glory, as seen on the Matrix Online or the original City of Heroes, but sometimes the community just ended.

The servers turned off, and the lights are no longer on. And this closure, with a looming deadline, is what communities and creators on TikTok are now facing. The announcement on April 24th started a ticking clock, a 270-day countdown timer with a date for divestment of the app by its parent company. And in late April and early May following the announcement, a number of creators on the app, some recognizable figures, some longtime lurkers and first-time posters, made heartfelt appeals to the communities that they built or discovered during their time on TikTok. I’d like to share a couple of those with you right now. They’re short because, well, it is TikTok after all, but if there is a video version of this podcast, I’ll try my best to splice them in. The first is by a creator by the name of Vegas Starfish, an events planner in Las Vegas, Nevada, USA.

At the time of recording of this episode, her post had received a quarter of a million views, garnering 40,000 likes and several thousand comments. Here’s her post, in her own words.

[00:03:39] Vegas: This is my farewell to TikTok. As you know, TikTok was just banned in the United States. This app changed my life. This is me before TikTok, and this is me after.

I was a miserable, mid-level casino executive. I started making content about my city and how much I loved it, and then I started living life. I have never made this platform about me. It was always about the city, but I want to show you a glimpse of the creator behind the videos. I’ve always been socially awkward.

And it was through this app that I was able to meet other creators and most importantly, meet so many of you, every single one of you changed my life. Suddenly my voice mattered and I had a purpose and I started living boldly. I began traveling all over the world. As my self worth and self confidence grew, I became a better parent, a better friend, and I’ve never been great at making friends, but the best ones I’ve ever had came through this app.

I’ve had the opportunity to work with incredible artists and creators, people that I would have never had access to otherwise, and together by creating dynamic content, we’ve been able to change the paths for thousands of small businesses by directly highlighting great people doing great things. We’ve done so much good.

I know that the loss of this app will hurt creators and businesses financially, but I’m afraid of losing the human connection. We’ve been able to take you along for amazing resorts opening and iconic ones closing. Together we were among the first to discover a massive corporate hack last fall. You were with me when the Sphere opened, and we saw F1 cars race down the Las Vegas Strip together.

I have shared thousands of moments with millions of people. It has fundamentally changed my life and the lives of so many others. I am eternally grateful for every experience and every interaction. It has been a whirlwind. And I appreciate you more than you know. I hope to see some of you on IG. And thank you for following me for all the Vegas.

A special shout out to the feral cat from the Rio who helped me go viral in the beginning. You’re the real MVP. 

[00:05:40] DrI: Here we can see how a person was able to change their career, find and build a community, and increase their personal happiness by becoming more engaged with the job they were doing, sharing that, and then reaching out and taking a more active role within the community, to the extent that they experienced better mental and physical health, career growth, and wellbeing.

Pretty awesome all around. And while her story was specific to TikTok, there are similar stories like hers on many other platforms. During the same week that Vegas Starfish posted, there was another post made that also went somewhat viral, and it went into the benefits of TikTok for that person.

This was a first time post by a long time lurker, who felt compelled to reach out to her community for the first time because of the impending ban. I’ll play a portion of that post here, as the full post is over four and a half minutes long. 

[00:06:36] Katy: Hi, my name’s Katie. And I’ve never posted on TikTok before, and I probably never will again, but I was watching the live vote today on TikTok, um, for Congress to ban it.

And I just started really reflecting on the past four years that I’ve been watching TikTok. I’ve been just a lurker. I don’t post. I just watch. Um, but it’s meant a lot to me and I wanted to maybe record my first and only video as a thank you. It’s going to be pretty rough because I had to look up how to do all of this.

So I apologize for that. I found TikTok in 2020 during COVID, when my children with disabilities came home from school, and instead of just mother, I was mother and teacher. And it was overwhelming. And I lived in a pretty homogenous suburban neighborhood where there was very much one way to be. And I had a mental breakdown.

I know I’m not the only one, and I was prescribed more antidepressants, or maybe a stay in a treatment facility for an eating disorder. But instead, the thing that really helped me was discovering TikTok and all of you. I learned a new parenting language toward my children that was very different from the one that I was taught, from Mama Cusses.

Um, I was diagnosed with ADHD, as were we all, and I learned how to manage it and do struggle care, closing duties, and reset to functional with Casey Davis. Um, I learned how to normalize being normal from Emily Jean. I, um, watched TV shows and movies and pieces that I never would have watched before, because with ADHD and anxiety, comfort is always, like, watching the same thing. I learned that it’s, um, normal and okay to cosplay, to, um, treat your fandoms like old friends, to like to read spicy fiction. Um, I learned more about my neurodivergent, or neurospicy, children in the last four years on TikTok than I did in almost all of their earlier childhood.

[00:08:49] DrI: And from there, Katie goes on to thank some of the specific creators that she followed and whose content she enjoyed. And we can see within her post some of the challenges that she was facing, both as a mother and a teacher, dealing with a mental breakdown and parenting children with special needs, learning concepts like struggle care and normalizing, and being exposed to new media, new hobbies, new fandoms; basically, learning in all of these instances.

And in her post, we can see how much community contributed to that. And this is the power of community for the audience. Now, audience members are sometimes derogatorily referred to as lurkers, and the involvement and investment that they perceive themselves to have with relation to the community can often be described as parasocial relationships, and this can be true. Parasocial relationships are one-sided relationships where someone develops a sense of connection or familiarity with someone they don’t know, like a celebrity or a media figure. With the rise of social media creating more media figures than ever before, people have observed the rise of these relationships, but the term has been around since the 1950s, when Horton and Wohl observed it in television audiences.

These relationships may look fake to the outside observer, but we can also see the power that these invisible social ties have. This is a demonstration of a well-known phenomenon in the social sciences. In 1973, Mark Granovetter wrote a famous paper called The Strength of Weak Ties. You might not have heard of the paper, but judging by the nearly 40,000 times it’s been cited, perhaps what was in the paper has filtered out to become common knowledge.

In this paper, Granovetter was looking at job hunting specifically, and how people use their connections when searching for a job, and found that it was the secondary social ties, not your best friends, but your more casual acquaintances, that were more likely to come through in something like a job search.

Because your best friends, your strong ties, are more likely to run in similar social circles, they would be aware of similar opportunities. But those more distant ties allow for further reach, and can be helpful when one is looking for a career change, for example. We can see the effects of both of these in the posts I included above.

Both creators spoke of new connections they made, the knowledge they gained, and how they both benefited from those social connections. There was another benefit that both creators had as well, though it isn’t as obvious. In the second post, Katie’s post, we can see how easy it was for a first-time creator to reach out and make a post that was able to reach a million people.

This has been one of the strengths of TikTok as a platform. As a tool, it democratized content production, turning users into creators able to produce fully edited videos, along with effects, captions, and connections to other content, at the push of a button. And I cannot stress this enough: comparing something like TikTok to what needs to be done to produce this podcast or a YouTube video, for instance, is night and day.

As the saying goes, the purpose of a system is what it does, a well-known systems theory quote from Stafford Beer. And this is what TikTok succeeds at more than most: it isn’t just the algorithmic content delivery and sorting mechanisms that go on behind the scenes, but also turning more and more people into content creators.

To this end, TikTok democratizes the opportunity to create. It removes gatekeepers from the products and allows users to make the materials that they want to see. Often, when we talk about democratization, we’re talking about material things, but here we’re seeing it with informational objects as well.

People can create exactly what they want to see and then share it with everybody, and perhaps find an audience for those kinds of things, whether they knew one existed or not. And as Eric von Hippel points out in his 2005 book on innovation, it’s more than just the products. Quote, it’s the joy and the learning associated with creativity and membership in creative communities that are also important. These experiences too are made more widely available as innovation is democratized. End quote. And I really want to stress this, because this is the fact that pretty much every article I’ve seen on TikTok misses. Everybody points towards the algorithm or the social network and those elements of it, but the true secret sauce of TikTok is the ease of use of the content creation tools.

It can literally, with the push of a button, turn anybody and everybody into a television producer, or director, or actor, or creative of some form. If TikTok is the new television, which I argued four years ago or so now, then everybody who posts on TikTok is a TV content creator of some kind. And I’m gonna let that sit for a second.

To expand further on that idea of democratization, I’m gonna return to Eric von Hippel and quote at length: User firms, and increasingly even individual hobbyists, have access to sophisticated design tools for fields ranging from software to electronics to musical composition. All these information-based tools can be run on a personal computer and are rapidly coming down in price.

With relatively little training and practice, they enable users to design new products and services, and music and art, at a satisfyingly sophisticated level. Then, if what has been created is an information product, such as software or music, the design is the actual product: software you can use, or music you can play. End quote.

Now, that was published in 2005, so we’re seeing him capture in writing the effects of both the dot-com revolution and the wide-scale rollout of new computing in advance of the Y2K issue, which saw a massive expansion in computing products as everybody was purchasing new machines that were Y2K compatible.

But let’s go back to von Hippel’s quote there. So, individual hobbyists having access to sophisticated design tools? Check. Allowing musical composition, video editing, all at the touch of a button? Absolutely, that’s what TikTok does. They could run on a personal computer at the time, or now just on the phone that is readily available to pretty much everybody? Check. Rapidly coming down in price? Check; basically free, with an app, or several apps in some cases. With relatively little training and practice? Yes. New products and services and music and art? All these things, and we see some of this with AI tools, even though that’s not what we’re talking about right now. And at a satisfyingly sophisticated level? Good enough to show on the internet, and a lot of people are obviously engaged with it. And then software you can use, music you can play? Yes, the design is the product.

The thing that gets put out gets shared with everybody, and that is the thing. And, as he said in the previous quote, this builds and allows access to creative communities, which ties directly to the quotes from the two TikTok users that we saw. There’s also another side effect of this democratization of content, and that is an increase in media literacy.

If we posit that literacy is not just being an informed reader, but also having the ability to write, so both input and output, upstream and downstream, then being more aware of content production, of the difference between what gets recorded, what gets seen, and how the audience reacts, makes everybody involved more media literate.

Or at least it would if they’re paying attention, and I think to a large degree people are becoming more aware. However, more than just being examples of democratizing content production and enhancing media literacy, both posts from the users that I shared are evidence of the positive benefits of community.

We’ve referred to Howard Rheingold’s work on the virtual community earlier, and he quotes at length from M. Scott Peck’s The Different Drum at the start of his book, where Peck writes, quote: We know the rules of community. We know the healing effect of community in terms of individual lives. If we could somehow find a way across the bridge of our knowledge, would not these same rules have a healing effect upon our world?

We human beings have often been referred to as social animals, but we are not yet community creatures. We are impelled to relate with each other for our survival, but we do not yet relate with the inclusivity, realism, self awareness, vulnerability, commitment, openness, freedom, equality, and love of genuine community.

It is clearly no longer enough to be simply social animals babbling together at cocktail parties and brawling with each other in business and over boundaries. It is our task, our essential, central, crucial task, to transform ourselves from mere social creatures into community creatures. It is the only way that human evolution will be able to proceed.

It’s a rather lengthy list that Peck has there in the middle of that quote: inclusivity, realism, self-awareness, vulnerability, commitment, openness, freedom, equality, and love of genuine community. But I think it’s an essential one. When we think of the world around us, those are all things that we could use a little bit more of.

And as sociologist Richard Sennett notes in his book Together, this community can be vocational as well: working towards building the community can have such significant effects that it’s beneficial to all those involved, even the bystanders. As we saw with the lurker in our second quote, the audience gains benefits from the community as well.

The communities described by both creators are both meaningful and real, despite being online. As we mentioned last episode, and probably often, there is no difference between online and offline communities save for the annihilation of distance and time. The distinction made between cyberspace and, quote, “meatspace” is often a false dichotomy.

Within academic writing on online communities, social networks, and the like, this difference was sometimes highlighted early in the literature, though more recent critical or reflective writing may no longer make that distinction. And that happens because in the 30 years or so since the publication of Rheingold’s Virtual Community, we have some fantastic real-world examples of what happens in online communities, especially when they go away.

And the reason there are so many online communities that went away is that in the early 2000s, having an online community was part of the business model of a number of companies, including companies that were developing online games, and specifically those developing MMOs: the wave of massively multiplayer online roleplaying games that relied on a monthly subscription model.

This largely paralleled the shift to Web 2.0 that was occurring at that time, around 1999 to 2004. But as we’ve been seeing with a lot of things gaming-related during the course of this podcast, the gaming community somewhat preceded it, acting as a harbinger of things to come. Web 2.0 is of course the change in the web from static web pages to user-generated content, or UGC.

The MMO boom started in 1997 with the release of Ultima Online, where the term was coined, but it really took off beginning in 1999 with the release of EverQuest, and then headed straight to the moon with the release of World of Warcraft in 2004, and not 2001’s Shadows of Luclin expansion, as maybe three people listening to this podcast might have been guessing.

Within the window of the MMO boom, numerous MMOs were launched based on a wide variety of intellectual property, some licensed, some original, and all developed a community of some fashion around them. Even though the subscription-based model that most used during this initial period represented a kind of Software as a Service, or SaaS, they were really more like a community in a box.

The games relied on the volunteer labor provided by the community in terms of guides, maps, strategies, and communication hubs, external to the games themselves. In many cases, the games would be extremely difficult without the shared knowledge bases that the communities provided. It was the epitome of participatory culture that we discussed back in episode 16 on Spreadable Media.

And the communities built around these games were based in part on the shared labour and collective action that was put into their creation. MMOs lived and died by the communities that existed around them. Alas, in a very dense and competitive marketplace, not every MMO succeeded, even if the community was there.

So I’d like to take a look at three that had high aspirations but ended up shutting down. These three were Sony Online Entertainment’s Star Wars Galaxies, released in 2003, Cryptic Studios slash NCSoft’s City of Heroes, launched in 2004, and Monolith Productions’ 2005 release of The Matrix Online. Each of these was a big-budget MMO with a large fanbase.

Some had tie-ins with existing popular media licenses, and in City of Heroes’ case, being a generic superhero simulator in the era prior to the rise of the MCU wasn’t a bad thing. It emphasized team play, with groups of heroes working together to complete missions and fight larger threats, emulating the fiction of superhero comics in general.

Star Wars Galaxies was developed by Sony Online, with a rich user-driven in-game economy developed by Raph Koster, one of the more notable MMO designers from his work on Ultima Online, who pushed for a simulationist view where players would be crafting all the gear and materials used in the game. At least, initially.

And the Matrix Online provided a rich narrative experience, providing what is called transmedia storytelling, as the events taking place in the game are part of the larger continuity of stories told about the Matrix, coexisting with the events of the movies and other properties like the Animatrix. Each of these games managed to develop a dedicated community of players, active participants in engaging and extending the world.

But despite this active community, each of these properties failed, and the MMOs were closed. The Matrix Online was shut down in 2009 due to low player numbers, as competition was tough, and honestly, the 2008 crash saw a number of properties struggle with their business model. For Star Wars Galaxies, when it closed in 2011, the stated reason was the loss of the license for Star Wars gaming, which is a risk for any media property as well. For City of Heroes, without the licensing issues of the other two, a change in the focus of the publisher was the stated reason for its closure in 2012. At least, for a little while. The interesting thing is how these communities reacted to the closing of the servers, of knowing that the community that they had lovingly built was going to disappear at a specified point in the future.

Each of the games had a massive farewell event, with the community coming online to celebrate the last moments. The Matrix Online turned it into a story event, and you can check out the link to the videos of that storyline in the show notes. The fans of Star Wars Galaxies created a similar event, and I’ll link that one too, culminating in a massive battle between the Empire and the Rebel Alliance that was live streamed on the internet.

City of Heroes had a number of player-run events leading up to the servers being shut down. When they went dark, all three of these MMOs saw their communities dispersed, a virtual diaspora drifting out to other online places and virtual spaces.

But for both Star Wars Galaxies and City of Heroes, the game lived on. Fans of each game had started private servers using emulation software, allowing the members of the community to meet up again and play the game, after a fashion, much the same as they had before. Not every member of the old community signed up for the emulator servers, of course, and they did skirt the bounds of legality, but it allowed the games to continue.

It allowed the community to continue. And for City of Heroes, the under the radar private server launched in 2019 became an officially licensed private server in 2024, free to play but funded via donations for server costs and the like. The online community was able to rebuild and bring it back to an audience 12 years after it closed, at least officially.

Since the private server relaunched in 2019, the devs working on the game have added new material, new missions, and new features, showing that an active community can still support a game enough to allow future development. The gaming community may be showing the TikTok community a path forward if the proposed legislation goes through in the United States.

While there are current alternatives to the short-form video that TikTok popularized, like Instagram Reels, YouTube Shorts, Clapper, and others, each of those has appealed to a different community, and none has seen the wholesale move of the TikTok user base. It may happen, as often users will move to whatever site or page or app they find most appealing, but this isn’t always the case.

There may be an opportunity for users to build their own. Tools like loops.video, which is currently in alpha testing at the time of this show’s publication, allow a very similar short-video format, built on the ActivityPub protocol that we’ve discussed last episode and several times before. And much like Meta’s Threads was built in record time to capture disaffected Twitter users, we may see other options spring up if TikTok is truly banned in the United States.
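For the curious, here is a rough sketch of what a short-form video post looks like under the hood on an ActivityPub service: ultimately just a structured ActivityStreams 2.0 object passed between servers. The actor name and URLs below are hypothetical placeholders, not real endpoints, and this is a minimal illustration rather than the exact payload any particular app emits.

```python
import json

# A minimal ActivityStreams 2.0 "Create" activity wrapping a Video object,
# the general shape of data that ActivityPub servers exchange when federating
# a video post. Actor and media URLs are invented for illustration.
video_post = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/somecreator",  # hypothetical account
    "object": {
        "type": "Video",
        "name": "A short clip",
        "url": "https://example.social/media/clip.mp4",  # hypothetical file
        "duration": "PT45S",  # ISO 8601 duration: a 45-second clip
    },
}

print(json.dumps(video_post, indent=2))
```

Because the format is an open standard rather than a proprietary feed, any compatible server that understands the Video type could in principle render a post like this, which is part of what makes migrating a community between federated platforms easier than between closed apps.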

We’ll keep an eye on this story as it develops, and come back to it in a few months to see what the results are, and where the community goes.

Once again, thank you for joining us on the Implausipod. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 share-alike license. No AI tools were used in the production of this podcast, save for the transcription software, which I believe is just machine learning. You may have noticed at the beginning of the show that we described the show as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated.

You may have also noted that there was no advertising during the program, and there’s no cost associated with the show, but it does grow through the word of mouth of the community. So if you enjoy the show, please share it with a friend or two and pass it along. There’s also a Buy Me a Coffee link on each show at implausipod.com, which would go to any hosting costs associated with the show. Over on the blog, we’ve started up a monthly newsletter. There will likely be some overlap with future podcast episodes, and newsletter subscribers can get a hint of what’s to come ahead of time, so consider signing up, and I’ll leave a link in the show notes.

Coming soon on the Implausipod, we already have some episodes in the pipeline, though I’m not quite sure of their release order yet. We have a two-part discussion on the first season of the Fallout TV series, as well as a recap of the most recent season of Doctor Who. And we’ll be looking at a few other online activities, including the emergence of the dial-up pastoral and the commodification of curation.

I hope you join us for them, they’re going to be fantastic. Until then, take care, and have fun.


Bibliography:

Bartle, R. (2003). Designing Virtual Worlds. New Riders Press.

Granovetter, M. (1973). The Strength of Weak Ties. American Journal of Sociology, 78(6), 1360–1380.

Jenkins, H. (2006). Convergence Culture: Where Old and New Media Collide. NYU Press.

Koster, R. (2004). A Theory of Fun for Game Design. Paraglyph Press.

Rheingold, H. (2000). The Virtual Community: Homesteading on the Electronic Frontier. MIT Press.

Sennett, R. (2012). Together: The Rituals, Pleasures and Politics of Cooperation. Yale University Press.

The Matrix Online Videos—Giant Bomb. (2012, July 12). https://web.archive.org/web/20120712062536/http://www.giantbomb.com/the-matrix-online/61-9124/videos/

There Is Another: The End Of Star Wars Galaxies – Part 01 – Giant Bomb. (2012, January 7). https://web.archive.org/web/20120107150559/http://www.giantbomb.com/there-is-another-the-end-of-star-wars-galaxies-part-01/17-5439/

von Hippel, E. (2005). Democratizing Innovation. The MIT Press.

Links:

City of Heroes: Homecoming

Implausipod Episode 16 – Spreadable Media

The Implausi.blog Newsletter

Black Boxes and AI

(this was originally published as Implausipod E0028 on February 26, 2024)

https://www.implausipod.com/1935232/episodes/14575421-e0028-black-boxes-and-ai

How does your technology work? Do you have a deep understanding of the tech, or is it effectively a “black box”? And does this even matter? We do a deep dive into the history of the black box, how it’s understood when it comes to science and technology, and what that means for AI-assisted science.


On January 9th, 2024, Rabbit Inc. introduced the R1, their handheld device that would let you get away from using apps on your phone by connecting them all together through the power of AI. The handheld device is aimed at consumers and is about half the size of an iPhone, and as the CEO claims, it is, quote, the beginning of a new era in human-machine interaction, end quote. By using what they call a Large Action Model, or LAM, it’s supposed to interpret the user’s intention and behavior and allow them to use their apps quicker. It’s acceleration in a box. But what exactly does that box do? When you look at a new tool from the outside, it may seem odd to trust all your actions to something when you barely know how it works.

But let me ask you, can you tell me how anything you own works? Your car, your phone, your laptop, your furnace, your fridge, anything at all. What makes it run? I mean, we might have some grade-school ideas from a Richard Scarry book or a past episode of How It’s Made, but what makes any of those things that we think we know different from an AI device that nobody’s ever seen before?

They’re all effectively black boxes. And we’re going to explore what that means in this episode of the Implausipod.

Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in all this discussion of black boxes, you might have already formed a particular mental image. The most common one is probably that of the airline flight recorder, the device that’s embedded in every modern airplane and becomes the subject of a frantic search in case of an accident.

Now, the thing is, they’re no longer black, they’re rather a bright orange, much like the Rabbit R1 that was demoed. But associating black boxes with the flight recorder isn’t that far off, because its origin was tied to that of the airline industry, specifically in World War II, when the massive amount of flights generated a need to find out what was going on with the planes that were flying continual missions across the English Channel.

Following World War II, the use of black boxes expanded as the industry shifted from military to commercial applications. I mean, the military still used them too; it was useful to find out what was going on with the flights. But the fact that they became embedded within commercial aircraft, and were used to test conditions and find out what happened so that things could be fixed and made safer and more reliable overall,

meant that their existence and use became widely known. And by using them to figure out the cause of accidents and increase reliability, they were able to increase trust, to the point that air travel was less dangerous than the drive to the airport in your car, and few, if any, passengers had many qualms left about the safety of the flight.

And while this is the origin of the black box, in other areas it can have a different meaning. In fields like science or engineering or systems theory, it can be something complex that’s just judged by its inputs and outputs. Now, that could be anything from something as simple as an integrated circuit or a guitar pedal, to something complex like a computer or your car or furnace or any of those devices we talked about before, but it could also be something super complex like an institution or an organization or the human brain or an AI.

I think the best way to describe it is an old New Yorker cartoon that had a couple of scientists in front of a blackboard filled with equations, and in the middle of it it says, Then a miracle occurs. It’s a good joke. Everyone thinks it’s a Far Side cartoon, but it was actually done by Sidney Harris. The point being that right now, in 2024, it looks like that miracle.

It’s called AI.

So how did we get to thinking that AI is a miracle product? I mean, aside from using the LLMs and generative art tools, things like DALL-E and Sora, and seeing the results, well, as we’ve spent the last couple episodes kinda setting up, a lot of this can occur through the mythic representations of it that we often have in pop culture.

And we have lots of choices to choose from; there have been lots of representations of AI in media in the first nearly two and a half decades of the 21st century. We can look at movies like Her from 2013, where Joaquin Phoenix’s virtual assistant becomes a romantic liaison. Or how Tony Stark’s supercomputer Jarvis is represented in the first Iron Man film in 2008.

Or, for a longer, more intimate look, the growth and development of Samaritan through the five seasons of the CBS show Person of Interest, from 2011 through 2016. And I’d be remiss if I didn’t mention their granddaddy, HAL, from 2001: A Space Odyssey, by Kubrick in 1968. But I think we’ll have to return to that one a little bit more in the next episode.

The point being that we have lots of representations of AI or artificial intelligences that are not ambulatory machines, but are actually just embedded within a box. And this is why I’m mentioning these examples specifically, because they’re more directly relevant to our current AI tools that we have access to.

The way that these ones are presented not only shapes the cultural form of them, but our expected patterns of use. And that shaping of technology is key: by treating AI as a black box, something that can take almost anything from us and output something magical, we project a lot of our hopes and fears onto what it might actually be capable of accomplishing.

What we’re seeing with extended use is that the capabilities might be a little bit more limited than originally anticipated. But every time something new gets shown off, like Sora or the rabbit or what have you, then that expectation grows again, and the fears and hopes and dreams return. So because of these different interpretations, we end up effectively putting another black box around the AI technology itself, which to reiterate is still pretty opaque, but it means our interpretation of it is going to be very contextual.

Our interpretation of the technology is going to be very different based on our particular position or our goals, what we’re hoping to do with the technology or what problems we’re looking for it to solve. That’s something we might call interpretive flexibility, and that leads us into another black box, the black box of the social construction of technology, or SCOT.

So SCOT is one of a cluster of theories or models within the field of science and technology studies that aims at a sociological understanding of technology. Originally presented in 1987 by Wiebe Bijker and Trevor Pinch, a lot of work was being done within the field throughout the 80s, 90s, and early 2000s, when I entered grad school.

So if you were studying technology, as I was, you’d have to grapple with SCOT and the STS field in general. One of the arguments that Pinch and Bijker were making was that science and technology were both often treated as black boxes within their field of study. Now, they were drawing on earlier scholarship for this.

One of their key sources was Layton, who in 1977 wrote, quote, What is needed is an understanding of technology from inside, both as a body of knowledge and as a social system. Instead, technology is often treated as a black box whose contents and behavior may be assumed to be common knowledge. End quote. So where the object of study was the field of science, the science itself was

irrelevant; it didn’t have to be known, it could just be treated as a black box and the theory applied to whatever particular thing was being studied. The same goes for people studying innovation, who had all the interest in the inputs to innovation but no particular interest or insight into the technology on its own.

So obviously the studies up to 1987 had a bit of a blind spot in what they were looking at. And Pinch and Bijker argue that it’s more than just the users and producers: any relevant social group that might be involved with a particular artifact needs to be examined when we’re trying to understand what’s going on.

Now, their arguments about interpretive flexibility and relevant social groups are just another way of saying “the street finds its own uses for things,” the quote from William Gibson. But their main point is that even during the early stages, all these technologies have different groups that are using them in different ways, according to their own needs.

Over time, it kind of becomes rationalized. It’s something that they call closure. And that the technology becomes, you know, what we all think of it. We could look at, say, an iPhone, to use one recent example, as being pretty much static now. There’s some small innovations, incremental innovations, that happen on a regular basis.

But, by and large, the smartphone as it stands is kind of closed. It’s just the thing that it is now. And there isn’t a lot of innovation happening there anymore. But perhaps I’ve said too much; we’ll get to the iPhone and the details of that at a later date. The thing is that once the technology becomes closed like that, it returns to being a black box.

It is what we thought it is, you know? And so if you ask somebody what a smartphone is and how does it work, those are kind of irrelevant questions. A smartphone is what a smartphone is, and it doesn’t really matter how the insides work, its product is its output. It’s what it’s used for. Now, this model of a black box with respect to technology isn’t without its critiques.

Six years after its publication, in 1993, the academic Langdon Winner wrote a critique of SCOT and the works of Pinch and Bijker, called Upon Opening the Black Box and Finding it Empty. Now, Langdon Winner is well known for his 1980 article Do Artifacts Have Politics?, and I think that text in particular is, like, required reading.

So let’s bust that out in a future episode and take a deep dive on it. But in the meantime, the critique that he had with respect to social constructivism is in four main areas. The first one is the consequences; this is from page 368 of his article. The problem there, he says, is that they’re so focused on what shapes the technology, what brings it into being, that they don’t look at anything that happens afterwards: the consequences.

And we can see that with respect to AI, where there’s a lot of work on the development, but now people are actually going: hey, what are the impacts of this getting introduced large-scale throughout our society? So we can see how our own discourse about technology is actually looking at the impacts, and this is something that was kind of missing from the theoretical point of view back in 1987.

Now, I’ll argue that there’s value in understanding how we came up with a particular technology, how it’s formed, so that you can see those signs again when they happen. And one of the challenges whenever you’re studying technology is looking at something that’s incipient or under development and being able to pick the next big one.

Well, with AI, we’re already past that point. We know it’s going to have a massive impact. The question is, what are going to be the consequences of that impact? How big of a crater is that meteorite going to leave? Now for Winner, a second critique is that SCOT looks at all the people that are involved in the production of a technology, but not necessarily at the groups that are excluded from that production.

For AI, we can look at the tech giants and the CEOs, the people doing a lot to promote and roll out this technology, as well as those companies that are adopting it. But we’re often not seeing the impacts on those who are going to be directly affected by the large-scale introduction of AI into our economy.

We saw it a little bit with the Hollywood strikes of 2023, but again, those are the high-profile cases and not the vast majority of people that will be impacted by the deployment of a new technology. And this feeds right into the third critique: that SCOT focuses on certain social groups but misses the larger impact, or even the dynamics of what’s going on.

How technological change may reverberate much more widely across our, you know, civilization. And by ignoring these larger-scale social processes, the deeper, as Langdon Winner says, the deeper cultural, intellectual, or economic regions of social choices about technology, these things remain hidden, they remain obfuscated, they remain part of the black box and closed off.

And this ties directly into Winner’s fourth critique as well: that when SCOT is looking at a particular technology, it doesn’t necessarily make a claim about what it all means. Now, in some cases that’s fine, because it’s happening in the moment; the technology is dynamic and it’s currently under development, like what we’re seeing with AI.

But if you’re looking at something historical that’s been going on for decades and decades, like, oh, the black boxes we mentioned at the beginning, the flight recorders that we started the episode with. That’s pretty much a set thing now, and the only question arises when, say, a new accident happens and we have a search for one.

But by and large, that’s a set technology. Can’t we make an evaluative claim about that, what that means for us as a society? I mean, there’s value in an analysis of maintaining some objectivity and distance, but at some point you have to be able to make a claim. Because if you don’t, you may just end up providing some cover by saying that the construction of a given technology is value neutral, which is what that interpretive flexibility is basically saying.

Near the end of the paper, in his critique of another scholar by the name of Stephen Woolgar, Langdon Winner states, Quote, power holders who have technological megaprojects in mind could well find comfort in a vision like that now offered by the social constructivists. Unlike the inquiries of previous generations of critical social thinkers, social constructivism provides no solid systematic standpoint or core of moral concerns from which to criticize or oppose any particular patterns of technical development.

end quote. And to be absolutely clear, the current development of AI tools around the globe are absolutely technological mega projects. We discussed this back in episode 12 when we looked at Nick Bostrom’s work on superintelligence. So as this global race to develop AI or AGI is taking place, it would serve us well to have a theory of technology that allows us to provide some critique.

Now, that Steve Woolgar guy that Winner was critiquing had a writing partner back in the seventies, and they started looking at science from an anthropological perspective in their study of laboratory life; he wrote that with Bruno Latour. And Bruno Latour was working with another set of theorists who studied technology as a black box, in what was called Actor-Network Theory.

And that had a couple key components that might help us out. Now, the other people involved were like John Law and Michel Callon, and I think we might have mentioned both of them before. But one of the basic things about actor network theory is that it looks at things involved in a given technology symmetrically.

That means it doesn’t matter whether it’s an artifact, or a creature, or a set of documents, or a person, they’re all actors, and they can be looked at through the actions that they have. Latour calls it a sociology of translation. It’s more about the relationships between the various elements within the network rather than the attributes of any one given thing.

So it’s the study of power relationships between various types of things. It’s what some theorists would call a flat ontology, but I know as I’m saying those words out loud I’m probably losing, you know, listeners by the droves here. So we’ll just keep it simple and state that a person using a tool is going to have normative expectations about how it works.

Like, they’re gonna have some basic assumptions, right? If you grab a hammer, it’s gonna have a handle and a head, and depending on its size or its shape or material, that might, you know, determine its use. It might also have some affordances that suggest how it could be used. But generally that assemblage, that conjunction of the hammer and the user,

I don’t know, we’ll call him Hammer Guy, is going to be different than a guy without a hammer, right? We’re going to say, hey, Hammer Guy, put some nails in that board there, put that thing together, rather than, you know, please hammer, don’t hurt him, or whatever. All right, I might be recording this too late at night, but the point being is that people with tools will have expectations about how those tools get used. Some of that goes into how those tools are constructed, and that can be shaped by the construction of the technology, but it can also be shaped by our relation to that technology.

And that’s what we’re seeing with AI. As we argued way back in episode 12, you know, AI is an assistive technology. It does allow us to do certain things and extends our reach in certain areas. But here’s the problem: generally, we can see what kind of condition the hammer’s in, and we can have a good idea of how it’s going to work for us, right?

But we can’t say that with AI. We can maybe trust the hammer, or the tools that we’ve become accustomed to using through practice and trial and error. But AI is both too new and too opaque; the black box is too dark, and we really don’t know what’s going on. And while we might put in inputs, we can’t trust the output.

And that brings us to the last part of our story.

In the previous section, the authors that we were mentioning, Latour and Woolgar, like Winner, Pinch, and Bijker, are key figures, not just in the study of technology, but also in the philosophy of science. Latour and Woolgar’s Laboratory Life from 1979 is a key text; it really sent shockwaves through the whole study of science, and it is a foundational text within that field.

And part of what you recognize, even from a cursory glance, once you start looking at science from an anthropological point of view, is the unique relationship that scientists have with their instruments. The author Inkeri Koskinen sums up a lot of this in an article from 2023, and they termed the relationship that scientists have with their tools the necessary trust view.

Quote: Trust is necessary because collective knowledge production is characterized by relationships of epistemic dependence. Not everything scientists do can be double checked. Scientific collaborations are in practice possible only if its members accept each other’s contributions without such checks.

Not only does a scientist have to rely on the skills of their colleagues, but they must also trust that the colleagues are honest and will not betray them. For instance, by intentionally or recklessly breaching the standards of practices accepted in the field or by plagiarizing them or someone else.

End quote. And we could probably all think of examples where this relationship of trust is breached. But the point being that science, as it normally operates, relies on relative levels of trust between the actors that are involved, in this case scientists and their tools as well. And that’s embedded in practice throughout science: that idea of peer review, or of reproducibility or verifiability.

It’s part of the whole process. But the challenge is, especially for large projects, you can’t know how everything works. So you’re dependent in some way on the materials or products or tools that you’re using having been verified or checked by at least somebody else that you have that trust with. And this trust is the same that a mountain climber might have in their tools, or an airline pilot might have in their instruments.

You know, trust, but verify, because your life might depend on it. And that brings us all the way around to our black boxes that we started the discussion with. Now, scientists lives might not depend on that trust the same way that it would with airline pilots and mountain climbers, but, you know, if they’re working with dangerous materials, it absolutely does, because, you know, chemicals being what they are, we’ve all seen some Mythbusters episodes where things go foosh rather rapidly.

But for most scientists, what Koskinen notes is that this trust in their instruments is really kind of a quasi-trust, in that they have normative expectations about how the tools they use are going to function. And moreover, this quasi-trust is based on rational expectations. They’re rationally grounded.

And this brings us back full circle. How does your AI work? Can you trust it? Is that trust rationally grounded? Now, this has been an ongoing issue in the study of science for a while, as computer simulations and related tools have become a bigger and bigger part of the way science is conducted, especially in certain disciplines.

Now, the philosopher Paul Humphreys’ argument is that, quote, computational processes have already become so fast and complex that it was beyond our human cognitive capabilities to understand their details, end quote. Basically, computationally intensive science is more reliant on its tools than ever before. And those tools are what he calls epistemically opaque.

That means it’s impossible to know all the elements of the process that go into the knowledge production. So this is becoming a challenge for the way science is conducted. And this goes back to well before the release of ChatGPT; much of the research that Koskinen is quoting comes from the 2010s. Fields whose research is heavily reliant on machine learning or on, say, automatic image classifiers, fields like astronomy or biology, have been finding challenges in the use of these tools.

Now, some are arguing that even though those tools are opaque, they’re black-boxed, they can be relied on, and their use is justified because we can work on the processes surrounding them. They can be tested, verified, and validated, and thus a chain of reliability can be established. This is something that some authors call computational reliabilism, which is a bit of a mouthful for me to say, but it’s basically saying that the use of the tools is grounded through validation.

Basically, it’s performing within acceptable boundaries for whatever that field is. And this gets at the idea of thinking of the scientist as not just the person themselves, but also their tools. So they’re an extended agent, the same as, you know, the dude with the hammer that we discussed earlier. Or Chainsaw Man.

You can think about how they’re one and the same. One of the challenges there is that when a scientist is familiar with the tool, they might not be checking it constantly, you know, so again, it might start pushing out some weird results. So it’s hard to reconcile that trust we have in the combined scientist using AI.

They become, effectively, a black box. And this issue is by no means resolved. It’s still early days, and it’s changing constantly. Weekly, it seems, sometimes. And to show what some of the impacts of AI might be, I’ll take you to a 1998 paper by Martin Weitzman. Now, this is in economics, but it’s a paper that’s titled Recombinant Growth.

And this isn’t the last paper in my database that mentions black boxes, but it is one of them. What Weitzman is arguing is that when we’re looking at innovation, R&D, or knowledge production, it is often treated as a black box. And if we look at how new ideas are generated, one way is through the combination of various elements that already exist.

If AI tools can take a much larger set of existing knowledge, far more than any one person or even teams of people can bring together at any one point in time, and put those together in new ways, then the ability to come up with new ideas far exceeds what exists today. This directly challenges a lot of the current arguments going on about AI and creativity; those arguments completely miss the point of what creativity is and how it operates.

Weitzman states that new ideas arise out of existing ideas and some kind of cumulative interactive process. And we know that there’s a lot of stuff out there that we’ve never tried before. So the field of possibilities is exceedingly vast. And the future of AI assisted science could potentially lead to some fantastic discoveries.
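That combinatorial point can be made concrete with a toy calculation. This is only an illustration of the intuition, not Weitzman’s actual model (his paper develops a formal growth model): even counting just pairwise combinations, the space of untried pairings grows far faster than the stock of ideas itself.

```python
from math import comb

# The intuition behind recombinant growth: if new ideas come from
# recombining existing ones, the number of possible two-idea pairings
# grows quadratically while the stock of ideas grows linearly.
for n in (10, 100, 1000):
    pairs = comb(n, 2)  # distinct unordered pairs of ideas
    print(f"{n} ideas -> {pairs} possible pairings")
```

Going from 10 ideas to 1,000 multiplies the raw stock by 100, but multiplies the pairings (45 versus 499,500) by over 10,000, and that’s before considering triples or larger combinations.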

But we’re going to need to come to terms with how we relate to the black box of the scientist and their AI tool. And when it comes to AI, our relationship to our tools has not always been cordial. In our imagination, in everything from The Terminator to The Matrix to Dune, it always seems to come down to violence.

So in our next episode, we’re going to look into that, into why it always comes down to a Butlerian Jihad.

Once again, thanks for joining us here on the Implausipod. I’m your host, Dr. Implausible, and the research and editing, mixing, and writing has been by me. If you have any questions, comments, or there’s elements you’d like us to go into additional detail on, please feel free to contact the show at drimplausible at implausipod dot com. And if you made it this far, you’re awesome. Thank you. A brief request: there’s no advertisement, no cost for this show, but it only grows through word of mouth. So, if you like this show, share it with a friend, or mention it elsewhere on social media. We’d appreciate that so much. Until next time, it’s been fantastic.

Take care, have fun.

Bibliography:
Bijker, W., Hughes, T., & Pinch, T. (Eds.). (1987). The Social Construction of Technological Systems. The MIT Press. 

Koskinen, I. (2023). We Have No Satisfactory Social Epistemology of AI-Based Science. Social Epistemology, 0(0), 1–18. https://doi.org/10.1080/02691728.2023.2286253 

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press. 

Latour, B., & Woolgar, S. (1979). Laboratory Life: The construction of scientific facts. Sage Publications. 

Pierce, D. (2024, January 9). The Rabbit R1 is an AI-powered gadget that can use your apps for you. The Verge. https://www.theverge.com/2024/1/9/24030667/rabbit-r1-ai-action-model-price-release-date 

rabbit—Keynote. (n.d.). Retrieved February 25, 2024, from https://www.rabbit.tech/keynote 

Sutter, P. (2023, October 4). AI is already helping astronomers make incredible discoveries. Here’s how. Space.Com. https://www.space.com/astronomy-research-ai-future 

Weitzman, M. L. (1998). Recombinant Growth. The Quarterly Journal of Economics, 113(2), 331–360. https://doi.org/10.1162/003355398555595 

Winner, L. (1993). Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology. Science, Technology, & Human Values, 18(3), 362–378. 

Episode catch-up

Looks like the last episode we published here was at the start of 2024, with Episode 25 – Echanger. We’ll take the time to link one episode a day, getting caught up, starting from then.


The indie version is also getting up to speed. Not quite to the point where I’m publishing simultaneously to both, but the archives are coming along nicely. We have full episodes of the newsletter available, and we’re working on a couple different feeds too. I’m excited to get those going. 🙂

The big job, of course, will be moving all the previous blog posts over. Still looking at a way to automate that effectively, as it would be way easier than doing it all by hand.

I’m also going to try and post some of the content that feeds the sections of the newsletter here first, things like the Current Reading and Multi-Melting sections, as well as podcast episodes and other feed info. We’ll still have something unique for each issue, so feel free to subscribe here.

Soylent Culture

In 1964, Marshall McLuhan described how the content of any new medium is that of an older medium. This can make it stronger and more intense:

The content of a movie is a novel or a play or an opera. The effect of the movie form is not related to its program content. The “content” of writing or print is speech, but the reader is almost entirely unaware either of print or of speech.

Marshall McLuhan, Understanding Media (1964).

In 2024, this is the promise of the generative AI tools that we currently have access to, tools like ChatGPT, DALL-E, Claude, Midjourney, and a proliferation of others. But this is also the end result of 30 years of new media, of the digitalization of anything and everything that can be used as some form of content on the internet.

Our culture has been built on these successive waves of media, but what happens when there is nothing left to feed the next wave?

It feeds on itself, and we come to live in an era of Soylent Culture.


Of course, this has been a long time coming. The atomization of culture into its component parts; the reduction of clips to soundbites, to TikToks, to Vines; the memeification of culture in general were all evidence of this happening. This isn’t inherently a bad thing; it was just a reduction to the bare essentials as ever smaller bits of attention were carved off of the mass audience.

Culture is inherently memetic. This is more than just Dawkins’ formulation of the idea of the meme to describe a unit of cultural transmission while the whole field of anthropology was right over there. The recombination of various cultural components in the pursuit of novelty leads to innovation in the arts and the aesthetic dimension. And when a new medium presents itself, due to changing technology, the first forays into that new medium will often be adaptations or translations of work done in an earlier form, as noted by McLuhan (above).

It can take a while for that new media to come into its own. Often, it’ll be grasped by the masses as ‘popular’ entertainment, and derided by the ‘high’ arts. It can often feel derivative as it copies those stories, retelling them in a new way. But over time, fresh stories start to be told by those familiar with the medium, with its strengths and weaknesses, tales told that reflect the experiences and lives of the people living in the current age and not just reflections of earlier tales.

How long does it take for a new media to be accepted as art?

First they said radio wasn’t art, and then we got War of the Worlds
They said comic books weren’t art, then we got Maus
They said rock and roll wasn’t art, then we got Dark Side of the Moon (and Pet Sounds, and Sgt. Pepper’s, and many others)
They said films weren’t art, then we got Citizen Kane
They said video games weren’t art, and we got Final Fantasy 7
They said TV wasn’t art, and we got Breaking Bad
And now they’re telling us that AI Generated Art isn’t art, and I’m wondering how long it will take until they admit they were wrong here too.

But this can often happen relatively ‘early’ in the life-cycle of a new media, once creators become accustomed to the cultural form. As newer creators began working with the media, they can take it further, but there is a risk. Creators that have grown up with the media may be too familiar with the source material, drawing on the representations from within itself.

F’rex: writers on police procedurals, having grown up watching police procedurals, simply endlessly repeat the tropes that are foundational to the genre. The works become pastiches, parodies of themselves, often unintentionally, unable to escape from the weight of the tropes they carry.

Soylent culture is this, the self-referential culture that has fed on itself, an Ouroboros of references that always point at something else. The rapid-fire quips coming at the audience faster than a Dennis Miller-era Saturday Night Live “Weekend Update” or the speed of a Weird Al Yankovic polka medley. Throw in a few decades’ worth of Simpsons Halloween episodes, and hyper-referential and meta-commentative titles like Family Guy and Deadpool (print or film) seem like the inevitable results of the form.

And that’s not to suggest that the above works aren’t creative; they’re high examples of the form. But the endless demand for fresh material in the era of consumption culture means that the hyper-referentiality will soon exhaust itself, and turn inward. This is where the nostalgia that we’ve been discussing comes into play: a resource for mining, providing variations of previous works to spark a glimmer in the audience’s eyes of “Hey, I recognize that!”

But they’re limited, bound as they are to previous, more popular titles, art that was more widely accessible, more widely known. They are derivative works. They can’t come up with anything new.

Perhaps.

This is where we come back to the generative art tools, the LLMs and GenAIs we spoke of earlier. Because while soylent culture existed before the AI Art tools came onto the scene, it has become increasingly obvious that they facilitate it, drive it forward, and feed off it even more. The AI art tools are voracious, continually wanting more, needing fresh new stuff in order to increase the fidelity of the model, that hallowed heart driving the beast that continually hungers.

But the model is weak, it is vulnerable.

Model Collapse

And the one thing the model can’t take too much of is itself. Model collapse is the very real risk of a GPT being trained on LLM-generated text. Identified by Shumailov et al. (2024), and “ubiquit(ous) among all learned generative models”, model collapse is a risk that creators of AI tools face in further developing them. In an era of model collapse, the human-generated content of the earlier, pre-AI web becomes a much more valuable resource: the digital equivalent of the low-background steel sought after for the creation of precision instruments in the era of atmospheric nuclear testing, when background levels of radiation made newly mined ore unsuitable for use.

(The irony that we were living in an era when the iron was unusable should not go un-noted.)

“Model collapse is a degenerative process affecting generations of learned generative models, in which the data they generate end up polluting the training set of the next generation. Being trained on polluted data, they then mis-perceive reality.”

(Shumailov et al., 2024).

Model collapse can result in the models “forgetting” (Shumailov et al., 2023). It is a cybernetic prion disease. Like the cattle that developed BSE after being fed feed containing parts of other ground-up cows sick with the disease, the burgeoning electronic “minds” of the AI tools cannot digest other generated content.

Soylent culture.

But despite the incredible velocity at which all this is happening, it is still early days. There is an incredible amount of research being done on the effects of model collapse and its long-term ramifications for the industry. There may yet be a way out from culture continually eating itself.
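The degenerative loop described above can be illustrated with a toy sketch (this is an illustrative simulation, not the actual experiment from the Shumailov et al. paper): a trivial “model” that only learns the mean and spread of its training data, then generates the next generation’s training data from itself. Over successive generations, the estimate drifts away from the original human-made distribution, and the tails get forgotten.

```python
import random
import statistics

def fit(data):
    # "Train" a toy model: estimate the mean and spread of the data.
    return statistics.mean(data), statistics.stdev(data)

def sample(mu, sigma, n, rng):
    # "Generate" new content from the fitted model.
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(42)

# Generation 0: "human" data, rich in variance.
data = [rng.gauss(0.0, 1.0) for _ in range(500)]

sigmas = []
for generation in range(10):
    mu, sigma = fit(data)
    sigmas.append(sigma)
    # Each new model trains ONLY on the previous model's output --
    # the polluted training set described above.
    data = sample(mu, sigma, 500, rng)

# The estimated spread wanders with each generation rather than
# staying anchored to the original distribution; run long enough,
# the tails of the original data tend to be progressively lost.
print(sigmas)
```

The key point the sketch captures is that nothing re-anchors the model to the original data: each generation inherits only what the previous generation managed to reproduce, so estimation error compounds instead of averaging out.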

We’ll explore some of those possible solutions next.

The Nostalgia Curve

Watching Deadpool & Wolverine, and engaging with the discourse surrounding it afterwards (I notoriously skip trailers, spoilers, and all but the most superficial reviews, preferring to walk into movies relatively open-minded), one of the recurring themes in those discussions is how much the movie trades on nostalgia.

And with the recent release of Deadpool & Wolverine, there’s a renewed look at how nostalgia is driving Hollywood (or, if not behind the wheel, is definitely tucked in with the seatbelt on). To a degree, this is understandable, as Hollywood is fairly risk-averse (seriously, this is the reason why you’ll see 100 sequels or adaptations in a given year, and only rarely does an original property break through). Of course, there is more than just track record that nostalgia trades on. Witness how it was deployed in the recent Twin Peaks: The Return.

I think they’re right, insofar as nostalgia can act as a balm: often people want more of the thing that they liked. But this isn’t necessarily a point of critique. There’s nothing wrong with liking what you like, and asking for (and maybe even getting) more of it, when it is available.

Three Fandoms

I’m thinking the best way to illustrate this would be by looking at three enduring fandoms: Star Trek, pro wrestling, and comic books, and how each relates to and engages with new material produced for it.

These fandoms aren’t exactly equivalent, but they’re more alike beneath the surface than is usually acknowledged. All three cater to niche audiences, and have persisted long enough that most of the population has had the opportunity to engage with them at some point in their lives. The slipping in and out of the zeitgeist that comes with successive waves of popularity is a critical part of that, as nostalgic parents can introduce their children to the media (and by extension the fandoms) that they enjoyed when they were younger.

Both comic books and pro wrestling live in this weird kind of Eternal Now, one that can acknowledge (and play off) its history (often as a means of generating credibility or cachet), but that must continually, inexorably put out new product. Sometimes they’ll re-introduce old characters in a new way to play off that, either through legacy characters or the children (or relatives) of past performers, but the trends are largely the same.

Star Trek is different (for the most part), as it has to continually create new stuff that is kinda like the old stuff, but still new and distinct enough that the fans will enjoy it. Witness the titles it has put out during the streaming era, with the contrast between Discovery, Picard, Lower Decks, Prodigy, and Strange New Worlds, all coming out during roughly the same time period, and all engendering different reactions as they touch down on different points along that “nostalgia curve”.

Obviously, other properties play with the nostalgia curve at times too, especially long-running ones: Star Wars and Doctor Who come to mind, and some gaming titles like Dungeons & Dragons, Magic: The Gathering, Pokémon, and Warhammer 40,000 are getting old enough to test the waters as well.

So perhaps we should get to the point:

What is the Nostalgia Curve?

Maybe it’s best to think of the amount of nostalgia a given property evokes as existing along a gradient (maybe it could be a continuum, but we use that a lot; this time, we’re grading on the curve). When something appears in a long-running piece of media, one with an inherent fandom, it can be a challenge to separate an element that appears for nostalgia purposes (i.e. marketing, or whatever) from one that exists simply because it’s part of the setting.

Where you go, “Hey look, it’s a wookie! They last showed up in Season 1, Episode 8 of The Acolyte! It’s been 20 years!” (says the viewer from the grimdark future of 2044).

(As unlikely as that scenario may be: Wookies Will Never Die; they’re the number one furry beast in my heart, behind Cookie Monster and maybe Snuffleupagus. Wookies are top 5, is what I’m getting at.)

But back to the point I think I’m making: the real issue is the commodification of nostalgia, where whether or not a given movie or project even gets made depends on how much the perceived nostalgia factor is worth.

If the perceived value is enough, if you’re far enough along the nostalgia curve, then the movie can get made. And Hollywood being a place where money talks, it may be worth trying to create nostalgia for something that never existed in the first place. If you can create (or incept?) a “fake-thing-which-evokes-real-nostalgia” (actual name pending some focus groups), then you can commodify that in the same way that Deadpool did with Wolverine and the “comic book accurate costume” that still isn’t 100%.

Nostalgic Memes

Nostalgia is representational (in a memetic way). Take the moment earlier in the flick where Deadpool explicitly calls out the montage during a fourth-wall break: each scene in the montage is iconic within the comics, and instantly recognizable to a long-time fan, even though none of them had occurred on screen at any point prior.

Every point of nostalgia is an assemblage (or container, or docker) for all the associations that accompany it. And these are all “shorthand” for everything else that is associated with those books. The time they were published, the creators (writers, artists, and editors), the events that they occurred during (“Age of Apocalypse” “Fall of the Mutants”, etc.).

Thus each and every nostalgic element packs in more and more, until a meta-textual movie like Deadpool & Wolverine can’t help but burst at the seams.

But in this case, it’s in a way that feels deserved. A recent IGN review of D&W lumped it in with the adaptation of Ready Player One, a film similarly stuffed to the brim with “Hey, I recognize that!” moments that is often counted among Steven Spielberg’s weakest. Now, Senor Spielbergo may have forgotten more about making fantastic movies than most will ever know, so were the failures of RP1 Spielberg’s fault, or was he simply being faithful to the source material?

(I’m asking as I found RP1 (the book) execrable and punted it at around the 20-page mark; I declined to watch RP1 (the movie).)

What we’re getting at here is that nostalgia is a hot commodity. It isn’t going away any time soon, and even though we all yearn for something fresh and new, endlessly scrolling on our apps of choice to find it, we end up finding community and joy in our shared nostalgia for things we’re pretty sure we never saw, at least not the way we imagined them to be.