Implausipod E0012 – AI Reflections

AI provides a reflection of humanity back at us, through a screen, darkly. But that glass can provide different visions, depending on the viewpoint of the observer. Are the generative tools that we call AI a means of advancement and emancipation, or will they be used to further a dystopic control society? Several current news stories give us the opportunity to see the potential paths before us, leading down both these routes. Join us for a personal reflection on AI's role as an assistive technology on this episode of the Implausipod.

https://www.buzzsprout.com/1935232/episodes/13472740

Transcript:

On the week before August 17th, 2023, something implausible happened. There was a news report that a user looking for can't-miss spots in Ottawa, Ontario, Canada, would be returned some unusual results on Microsoft's Bing search. The third result down, an article from MS Travel, suggested that users could visit the Ottawa Food Bank if they're hungry, and that they should bring an appetite.

This was a very dark response, a little odd, and definitely insensitive, making one wonder if this was done by some teenage pranksters or hackers, or if there was a human involved in the editing decisions at all. Initial speculation was that this article, credited to Microsoft Travel, may have been entirely generated by AI. Microsoft's response in the week following was that it was due to human error, but doubts remain, and I think the whole incident allows us to reflect on what we see in AI, and what AI reflects back to us about ourselves, which we'll discuss in this episode of the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and today on episode 12, we’re gonna peer deeply into that glass, that formed silicon that makes up our large language models and AI, and find out what they’re telling us about ourselves.

Way back in episode three, which admittedly was only nine episodes ago but came out well over a year ago, we looked at some of the founding figures of cyberpunk. One of those, of course, is Philip K. Dick, who is best known for Do Androids Dream of Electric Sheep?, which became Blade Runner, and now The Man in the High Castle, along with other works that are as yet un-adapted, like The Three Stigmata of Palmer Eldritch. But one of his most famous works was, of course, A Scanner Darkly, which had a rotoscoped film version released in 2006 starring Keanu Reeves. Now, the title is a play on the biblical verse from First Corinthians, where it's phrased as looking "through a glass darkly". There's some ambiguity there, whether it's a glass or a mirror, or in our context a filter, or in this case a scanner or screen, with the latter two being the most heavily technologized of them all. But the point remains, whether it's a metaphor or a meme: by peering through that glass, the reflection that we get back is but a shadow of the reality around us.

And so too it is with AI. The large language models, which have been likened to "auto-complete on steroids", and the generative art tools (which are like the procedural map makers we discussed in an icebreaker last fall) have gained an incredible amount of attention in 2023. But with that attention have come some cracks in the mirror, and while they are still being widely deployed as tools, they're no longer seen as the harbinger of AGI, or artificial general intelligence, let alone the superintelligence that will lead us on a path through a technological singularity. No, the collection of programs that have been branded as AI are simply tools, what media theorist Marshall McLuhan called "extensions of man", and it's with that dual framing, of the mirror and the extension held in our hand, that I want to reflect on what AI means for us in 2023.

So let’s think about it in terms of a technology. In order to do that, I’d like to use the most simple definition I can come up with; one that I use as an example in courses I’ve taught at the university. So follow along with me and grab one of the simplest tools that you may have nearby. It works best with a pencil or perhaps a pair of chopsticks, depending on where you’re listening.

If you're driving an automobile, please don't follow along; try this once you're safely stopped. But take the tool and hold it in your hands as if you were about to use it, whether to write or draw or to grab some tasty sushi or a bowl of ramen. You do you. And then close your eyes and rest for a moment.

Breathe, and then focus your attention down to the tool in your hands, held between your fingers, and reach out. Feel the shape of it. You know exactly where it is, and you can feel, with the stretch of your attention, where it ends. The tool has now become part of you, a material object that is next to you and extends your senses and what you are capable of.

And so it is with all the tools that we use, everything from a spoon to a steam shovel, even though we don't often think of them as such. That also includes the AI tools that we use, that constellation of programs we discussed earlier. We can think of all of these as assistive technologies, as extensions of ourselves that multiply our capabilities. And open your eyes if you haven't already.

So what this quick little experiment helps demonstrate is exactly how we might define technology, here using a portion of McLuhan's version: we can see it as an extension of man. But there have been many other definitions of technology. We can use versions that focus on the artifacts themselves, like Feibleman's, where tech is "materials that are altered by human agency for human usage", but this can be a little instrumental. And at the other extreme, we have those from the social construction school, like John Law's definition of "a family of methods for associating and channelling other entities and forces, both human and non-human", which, when you think about it, does capture pretty much everything relating to technology, but is also so broad that it loses a lot of its utility.

But I've always drawn a middle line, and my personal definition of technology is that it's "the material embodiment of an artifact and its associated systems, materials, and practices employed to achieve human ends". I think we need to capture both the tool and the context, as well as the ways that they're employed and used, and I think this definition captures the generative tools that we call AI as well. If we can recognize that they're tools used for human ends, and not actors with their own agency, then we can change the frame of the discussion around these generative tools and focus on what ends they're being used for.

And what they're being used for right now is not some science fictional version, neither the dystopic hellscapes of The Matrix or Terminator, nor, on the flip side, the more utopic versions: the "Fully Automated Luxury Communism" that we'd see in the post-scarcity societies of something like Star Trek: The Next Generation, or Iain M. Banks' Culture series. Neither of these is coming true, but those poles, that ideation, those science fiction versions that drive the collective imagination of the publics, the social imaginaries we talked about a few episodes ago, represent the two ends of that continuum, of that dialectic between the utopic and the dystopic in the way we frame technology.

As Anabel Quan-Haase notes in their book on Technology and Society, those poles, the utopic idea of technology achieving progress through science and the dystopic idea of technology as a threat to established ways of life, are both frames of reference. They could both be true depending on the point of view of the observer. But as we said, it is a dialectic: there is a dialogue going back and forth between these two poles continually. So technology in this case is not inherently utopic or dystopic; we have to return again to the ends that the technology is put towards, the human ends. So rather than utopic or dystopic, we can think of technology as being either emancipatory or controlling, and it's in this frame, through this lens, this glass, that I want to peer at the technology of AI.

The emancipatory frame views these generative AI tools as an assistive technology, and it's through this frame, this lens, that we're going to look at the technology first. These tools are exactly that: they are liberating, they are freeing. And whenever we want to take an empathetic view of technology, we want to see how it may be used by others who aren't in our situation. Their situation means they may be doing okay, they might even be well off, but they may also be struggling. There may be issues or challenges that they have to deal with on a regular basis that most of us can't even imagine. And this is where my own experience comes in, so I'll speak to that briefly.

Part of my background: when I was doing the fieldwork for my dissertation, I was engaged with a number of the makerspaces in my city, and some of them were working with local need-knowers, or persons with disabilities, like the Tikkun Olam Makers, as well as the Makers Making Change groups. These groups worked with persons with disabilities to find solutions to their particular problems, problems for which there often wasn't a market solution available because it wasn't cost effective. You know, the "capitalist realism" situation that we're currently under means that a lot of needs, especially for marginal groups, may go unmet. These groups came together to try and meet those needs as best they could through technological solutions, using modern technologies like 3D printing or microcontrollers or what have you, and they did it through regular events, whether it was a hackathon, regular monthly meetup groups, or using the space provided by a local makerspace. And in all these cases, these tools are liberating people from some of the constraints or challenges that they experience in daily life.

We can think of more mainstream versions, like a mobility scooter that allows somebody with reduced mobility to get around, more fully participate within their community, and meet some of the needs that they have on a regular basis. Even something as simple as that can be really liberating for somebody who needs it. We need to be cognizant of that, because as the saying goes, we are all at best just temporarily abled, and we never know when a change may be coming that could radically alter our lives. So that empathetic view of technology allows us to think with some forethought about what may happen, as if we or someone we love were in that situation, and it doesn't even have to be somebody that close to us. We can have a much more communal or collective view of society as well.

But to return to this liberating view, we can think about it in terms of those tools, the generative tools, whether they’re for text or for art, or for programming, or even helping with a little bit of math.  We can see how they can assist us in our daily lives by either fulfilling needs or just allowing us to pursue opportunities that we thought were too daunting. While the generative art tools like Dall-E and Midjourney have been trained on already existing images and photographs, they allow creators to use them in new and novel ways.

It may be that a musician can use the tools to create a music video where before they never had the resources, time, or money in any way, shape, or form to actually pursue that. It allows them to expand their art into a different realm. Similarly, an artist may be able to create stills that go with a collection or, you know, accompany the writing that they're working on, or an academic could use them for slides to accompany a presentation that they've spent time on, or a YouTube video, or even a podcast and its title bars and the like (present company included). My own personal experience: when I was trying to launch this podcast, there was all this stuff I needed to do, and the generative art tools, the cruder ones that were available at the time, allowed some of the art assets to be filled in, and that barrier to launch, that barrier to getting something going, was removed.

So, emancipatory, liberating: even though at a much smaller scale, those barriers were removed, and it allowed for creativity to flow in other areas. And it works similarly across these generative tools, whether it's putting together some background art, a campaign map, or a story prompt, or background for characters that are part of a story, as NPCs for a Dungeon Master or what have you, or even just something to bounce or refine coding ideas off of. I mean, the coding skills are rudimentary, but it does allow for something functional to be produced.

And this leads into some of the examples I'd like to talk about. The first one is from a post by Brennan Stelli on Mastodon on August 18th, where he said that we could leverage AI to do work which is not being done already because there's no budget, time, or know-how. There's a lot of work that falls into this space, stuff that needs to be done but, you know, is outside the scope of a particular project. This could include something like developing visualizations that allow him to better communicate an idea in a fraction of the time, minutes instead of the hours it would normally take to do something like that, and so we can see that Brennan's experience mirrors a lot of our own.

The next example is a little bit more involved, from an article written by Pam Belluck and published on the New York Times website on August 23rd, 2023. She details how researchers have used predictive text, along with AI-generated facial animations for an avatar and speech, to assist a stroke victim in communicating with their loved ones.

And the third example, one that hit a little closer to home, was that of a Stanford research team that used a BCI, or brain-computer interface, along with AI-assisted predictive text generation, to allow a person with amyotrophic lateral sclerosis, or ALS, to talk at a regular conversational tempo. The tools read the neural activity associated with the movement of the facial muscles and translate it into text. These are absolutely groundbreaking and amazing developments, and I can't think of any better example of how AI can be an assistive technology.

Now, most of these technologies are confined to text and screen, to video and audio, but often when we think of AI, we think of mobility as well. So the robotic assistants that have come out of research labs like Boston Dynamics have attracted a lot of the attention, but even there we can see some of the potential as an assistive technology. The fact that it takes the form of a humanoid robot means we sometimes lose sight of that, but that is what it is. The video they released in January of 2023 shows an Atlas robot as an assistant on a construction site, providing tools and moving things around in aid of the human who's the lead on the project. It allows a single contractor working on their own to extend what they're able to do, even if they don't have access to a human helper. So it still counts as an assistive technology, even though we can start to see the dark side of the reflection through this particular lens: the fact that an emancipatory technology may mean emancipation from the work that people currently have available to them.

In all of these instances there's the potential for job loss, that the tools would take the place of someone doing that work, whether it's a writer, an artist, a translator, a transcriber, or a construction assistant, and those are very real concerns. I do not want to downplay that. Part of our reflection on AI has to take these into account: the dark side of the mirror (or the flip side of the magnifying glass) can take something that can be helpful and exacerbate things when it's applied to society at large. The concerns about job loss are similar to concerns we've had about automation for centuries, and they're still valid. What we're seeing is an extension of that automation into realms that we thought were previously exclusively bound to, you know, human actors: creators, artists, writers, and the like.

This is why AI and generative art tools are such a driving and divisive element in the current WGA and SAG-AFTRA strikes: the future of Hollywood could be radically different if they see widespread usage. And beyond the automation and potential job loss, a second area of concern is that ChatGPT and the large language models don't necessarily have any element of truth involved; they're just producing output. Linguists like Professor Emily Bender of the University of Washington and the Mystery AI Hype Theater podcast have gone into extensive detail about how the output of ChatGPT cannot be trusted. It has no linkage to truth, and there have been other scholars who have gone into the challenges of using ChatGPT or LLMs for legal research or academic work or anything along those lines. I think it still has a lot of potential and utility as a tool, but it's very much a contested space.

And the final area of contestation that we'll talk about today is the question of control. Now, that question has two sides. The first is the control of the AI. The one that most often surfaces in our collective imaginary is the idea of rogue superintelligences or killer robots, which gets repeated in TV, film, and our media in general, and it does get addressed at an academic level in works like Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence. They both address the idea of what happens if those artificial intelligences get beyond human capacity to control them.

But the other side of that is the control of us, the control of society. Now, that gets replicated in our media as well, in everything from Westworld to the underlying themes of the TV series Person of Interest, where The Machine is a computer system developed to detect, anticipate, and suppress terrorist action using the tools of the post-9/11 surveillance state that it has access to.

When Gilles Deleuze wrote his Postscript on the Societies of Control back in 1990, he accurately captured the shift that had occurred in our societies: from the sovereign societies of the Middle Ages and Renaissance, through the disciplinary societies that typified the 18th and 19th centuries, to the shift in the 20th and 21st centuries towards a control society, where the logics of the society are enforced and regulated by computers and code. And while Deleuze was not talking about algorithms and AI in his work, we can see how they're a natural extension of what he was describing: the biases ingrained within our algorithms, which Virginia Eubanks talked about in her book Automating Inequality, and the biases and assumptions that go into the coding and training of those advanced systems, can manifest in everything from facial recognition to policing to recommendation engines on travel websites that suggest you should perhaps go to the food bank to catch a meal.

Now, there's a twist to our Ottawa Food Bank story, of course. About a week later, Microsoft came out and said that the article had been removed and that the issue had been identified as human error, not an unsupervised AI. But even with that answer, there are those who are skeptical, because it didn't happen just once. There were a lot of articles where such weird or incongruous elements showed up. And of course, this being the internet, there were a number of people who kept the receipts.

Now, there's a host of reasons for what might be happening with these bad articles, some plausible and some slightly less so. It could just be an issue of garbage in, garbage out: the content that they're scraping to power the AI is drawing on articles that already exist on, you know, satire or meme sites. If the information you're getting from the web is coming from Something Awful or 4chan, then you're gonna get some dark articles in there. But the other alternative is that it could just be hallucinations, which have been an observed fact with these AIs and large language models; incidents like the one we saw with Loab, which we talked about in an icebreaker last year, are still coming forward in ways that are completely unexpected and out of our control.

That scares us a little bit because we don’t know exactly what it’s going to do. When we look at the AI through that lens, like in the mirror, what it’s reflecting back to us is something we don’t necessarily want to look at, and we think that it could be revealing the darkest aspects of ourselves, and that frightens us a whole lot.

AI is a reflection of our society and ourselves, and if we don’t like what we’re seeing, then that gives us an opportunity to perhaps correct things because AI, truth be told, is really dumb right now. It just shows us what’s gone into building it. But as it gets better, as the algorithms improve, then it may get better at hiding its sources.

And that's a cause for concern. We're rapidly reaching a point where we may no longer be able to spot a deepfake or an artificially generated image or voice, and these may be used by all manner of malicious actors. So as we look through our lens at the future of AI, what do we see on the horizon?

References:
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Deleuze, G. (1992). Postscript on the Societies of Control. October, 59, 3–7.

Eubanks, V. (2018). Automating Inequality. Macmillan.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Quan-Haase, A. (2015). Technology and Society: Social Networks, Power, and Inequality. Oxford University Press.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Links:
https://arstechnica.com/information-technology/2023/08/microsoft-ai-suggests-food-bank-as-a-cannot-miss-tourist-spot-in-canada/

https://tomglobal.org/about

https://www.makersmakingchange.com/s/

https://arstechnica.com/health/2023/08/ai-powered-brain-implants-help-paralyzed-patients-communicate-faster-than-ever/

https://blog.jim-nielsen.com/2023/temporarily-abled/

https://www.businessinsider.com/microsoft-removes-embarrassing-offensive-ai-assisted-travel-articles-2023-8

Implausipod EP009: Recursive Publics and Social Media

Introduction

What are "recursive publics" and "social imaginaries", how have they impacted the development of the modern internet, and what impact do they have on the state of the internet in 2023, with the implosion of Twitter and Reddit and the rise of the Fediverse? Stay tuned as we take a 50,000-foot view of the rise of the public sphere of geeks.

https://www.buzzsprout.com/1935232/episodes/13329924#

Transcript

 Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and today we’re gonna follow on from our last episode and stay in the social media sphere and look at the idea of a recursive public, a form of a social imaginary, and see how they’ve impacted the development of the modern internet.

What is a recursive public? Well, if you’re using the internet and if you’re seeing or hearing this, I’m gonna guess you are, you’re impacted by one because recursive publics are the driving force behind a lot of the tools of the internet. And they’re also now driving the future of social media through the ActivityPub protocol.

And I'm also gonna hazard a guess that you'd never even heard of them before, even though the idea has been around for nearly 20 years. So let's get into it: let's find out how geeks build communities online and what that means for the future of the internet. Now, when we last spoke, Threads had just come out, Twitter was still called Twitter, and we were worried about Facebook possibly engaging in something called EEE with respect to ActivityPub. Since then, Threads has cut its user base in half, Twitter's now called X, and Google's the one engaged in EEE, with respect to something called WEI, or Web Environment Integrity, which would be DRM on all Chromium browsers.

So, we might need to have a look at that sometime in the future, but like Ferris Bueller said: "Life moves pretty fast. If you don't stop and look around once in a while, you could miss it." But that was back in the eighties, and life is moving way faster now in the 21st century. So let's try and get caught up a little bit.

While the goal is to be weekly with this, there’s some challenges with that, so I’ll just work on improving my workflow and iterating through a process of, uh, additive manufacturing, so to speak, and getting better over time. We’ll increase the frequency as things improve, but that brings us back to the topic at hand because that idea of improving through iteration is core to what the recursive public is.

What exactly is it? Well, as Christopher Kelty explained in 2005, a recursive public is a group, or rather a particular form of social imaginary, through which this group develops the means of their own association and the material form that this imagination takes: the technical and legal conditions required for their association.

So, in other words, it's a bunch of geeks who get together and say, "Hey, how can we use the internet to talk?" and then develop the tools and processes by which they can get together and talk. It's a little circular, and those tools can be things like, you know, a chat room or email, but they can also be the underlying tools like the operating system Linux, or something for sharing things, like Napster. Those are the things that Kelty was originally looking at, and that kind of makes sense.

But wait a second. You’re asking. What’s a social imaginary? Well, we’re at the risk of defining things by using other things. So, um, let’s drill down a little bit and see if we can get to a base level of understanding. Social imaginaries are ways in which people imagine their social existence and how they fit together with others.

It's how things go on between them and their fellows, the expectations that are normally met, and the deeper normative notions and images that underlie these expectations. Now, that's a direct quote from Charles Taylor in 2004, who described them as metatopical spaces, or topical spaces: the places where a conversation takes place. And not just conversation; pre-20th century, it was also where rituals and practices and assembly took place.

And as I’m talking here, I realize I need to put a pin in that idea of “where a conversation takes place”, and we’ll circle back to that in a little while. But we’re defining things with other things again. So, topical spaces, if that’s where the conversation’s taking place, then who’s having that conversation?

Well, a public. Not the public, mind you, just a public, that's having that conversation. So I think we're getting somewhere. If we have multiple conversations taking place, then that must be happening in the public sphere, and that is where the public is. And when we're looking at the difference between these publics, we're looking at the work of Michael Warner, who wrote about Publics and Counterpublics in 2002.

The public is the social totality. It is, in other words, the social imaginary, and that differs from a specific instantiation, which would be a public. Publics are happening all the time. They form, they swirl together, they achieve a specific mass through discursive address and "performed attention", before dissipating, either achieving critical mass to become a movement or, you know, drifting off into the ether.

So a discussion would be a topical public, and a public constituted through the imagined participation in a discussion is a metatopical public. All of these together, that social totality, are engaging in the public sphere, or rather, this is where the public sphere happens. And if we're situating those within the public sphere, then that brings us all the way back to Habermas.

Wonderful. I think I’ve managed to make this as clear as mud. Fantastic.

Let's diagram this out a little bit and see if we can make some sense of all this. Whenever you have a group of people involved in a discussion, that creates a topical public. It doesn't matter whether it's face-to-face, or through the media, or online: it's a public. That's it. That's the minimum. A public that's constituted through the imagined participation in that discussion, which basically includes the audience, is a metatopical public, and you can have multiple of those together to create the public.

Each of these discussions amongst the publics occurs in a particular topical space. So if it's online, we could think of these as like subreddits or discussion forums or what have you. And then if you have multiple of those together, it would be a metatopical space. This would be like the platform itself, whether it's Twitter (sorry, X), Reddit, Facebook, or TikTok. These are what Taylor calls "non-local common spaces". And again, that's particular to the internet, but it happens in broadcast and other media as well. And then if you have a particular group which can change the means of their own association, that is a recursive public. And so that's like your geeks with Linux, or what's happening right now with Mastodon, ActivityPub, and the Fediverse in general.

And that was the big change: the way a recursive public, one that's on the internet, can actually make changes to the way they get together and communicate. You see, those metatopical common spaces had already existed long before the internet, prior to the 18th century; we called them things like the Church and the State. But in the 18th century, a new social imaginary showed up, one that would become what we call the public sphere. It was coffee house society, the discussion that would take place within the newspapers, the letters to the editor, within the salons. All of this happened well before the internet.

What these spaces are brought about by is a common understanding that, like, this is how we talk, this is where things take place, and this is how we can discuss things. And this public sphere is an extra-political space, right? It's not brought about by any legislation or political maneuver of the government or the church, but through the practices and the media of that society, through the way they're able to communicate with each other, and it's a self-organizing space built through the conversations that are taking place.

One of the things that made it really powerful was that it was seen as apolitical, or extra-political: it took place away from the discussions of power, in a place that was seen as outside of that. And because it's outside of that power, it has power. Which is kind of weird, I know, but it's why you'll see politicians engage on Twitter or TikTok and try and be trendy, just because they need to court the power that's there in the public sphere.

It's also why you'll see authoritarian states try and fake the existence of a public sphere by having news media or what have you that gives the appearance that there's a discussion going on. And there are amazing scholars who have done work on, like, the role of media in Eastern Bloc countries and the like, and how that, you know, legitimizes that power.

But that’s way outside of our point of discussion. The main point is that these social imaginaries, these ways that the public imagines society to be, have existed for a long time. And while it’s classically been defined by the activities like speaking and writing and thinking and having that discussion, we now need to change that a little bit in the internet era and include things like building and coding and compiling and redistributing and sharing and hacking.

And this is what Kelty is arguing: that this "argument by technology" can create a new way of building a public space, a recursive public. You can contrast this with a non-recursive public, which would be something like a newspaper or a political gathering. There are the organizers, or the people who write or publish the newspaper, and occasionally there's a letter to the editor or they'll have somebody get up, but by and large they're locked into the way that it allows them to engage with the public in the first place.

A recursive public allows for that feedback, and for that public to remake the means of that gathering in their own terms. And their own terms include their shared common understanding, the way they imagine the world works. And how do they imagine the world works? How do they come up with the ideology that they share?

Well, myths and narratives and folklore: the shared fictions that they have. Pre-internet, this would be things like tall tales, like Paul Bunyan or George Washington not being able to tell a lie, those kinds of things, anything that would be fodder for a Disney movie or TV show. Post-internet, this can include things like, you know, "the net treats censorship as damage", or "show me the code", or the idea of a singularity, or the ideas behind free and open-source software in general, or even some of the underlying myths about cyberspace, or the images and beliefs that go into the identity of a hacker.

These are all elements that constitute the social imaginary of a recursive public, of a public on the internet. But there’s a twist. And the twist is social media. See, as I said, Kelty was writing in 2005 and he was talking about Napster and Linux, and he did some ethnographic field work with groups that are engaged in that, you know, in different parts of the world.

But since 2005, there have been some changes to how the internet works, so let me read off some names and dates: Facebook, 2004. Reddit, 2005. Snapchat, 2011. Twitter, 2006. Instagram, 2010. GitHub, 2008. YouTube, 2005. TikTok, or Douyin, 2016. And even the ones like Facebook that were around before 2005, before Kelty was writing, were much smaller then.

So when Kelty was writing, the internet was a radically different place than it is now. In 2023, we've had the rise of these platforms, these social networks, but within walled gardens that all seek to recreate the public sphere, having learned some of the lessons from the dot-com boom and bust, and from AOL and the other crashes. You could call them all metatopical spaces, because they allow for multiple discussions and in their totality make up a public sphere.

Not "the" public sphere, because the old public sphere is still there, and they still interact with the online one as well, and none of them on their own makes up the public sphere or constitutes it, even though, just by dint of size, Facebook probably comes close. And it's within this framework that Elon Musk, with his purchase and subsequent rebranding of Twitter, tried to buy in. Twitter's role within it, even though it was smaller than most of the others, was legitimized by the extent to which that's where journalists and academics and politicians would go to have those discussions.

That was where the conversation was taking place. But in 2023, that place has shifted, and this has been going on for a while. In the mid-2010s, the geeks were chafing at the various restrictions, digital rights management, and other, uh, issues with the various walled gardens and platforms. And because the geeks constituted a recursive public, they set about creating their own version of these walled platforms, of these social networks, one that fit their needs better.

They recognized the utility of those social networks and that they could be used for good, but they also recognized that there are serious limitations in the way they're constructed and the way they commoditize their audiences, as we discussed last time. So in 2018 the ActivityPub protocol was created, and it became a standard upon which new applications and communication networks could be built.
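To make that a little more concrete, here's a minimal sketch (mine, not from the episode) of what a single ActivityPub "Create" activity carrying a short post might look like, expressed as a Python dictionary. The actor and object URLs are hypothetical placeholders; real servers layer signatures, inboxes, and delivery on top of this basic shape.

```python
import json

# A minimal ActivityStreams "Create" activity wrapping a "Note",
# the basic shape that ActivityPub servers exchange between inboxes.
# The example.social URLs below are hypothetical placeholders.
create_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/drimplausible",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "attributedTo": "https://example.social/users/drimplausible",
        "content": "Geeks remaking the means of their own association.",
    },
}

# Serialize it the way it would be POSTed to a follower's inbox.
print(json.dumps(create_activity, indent=2))
```

The point of the sketch is just that the "means of association" here is an open, documented data format rather than a proprietary platform feed, which is what lets the recursive public rebuild its own tools around it.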

Like a lot of these tools and especially the early Linux tools in the nineties, it’s been worked on part-time by a lot of volunteers, occasionally funded, and even though it’s been a little rough, it’s gotten better over time, over the intervening five years. So in late 2022 when Elon Musk purchased Twitter and in 2023, when Reddit and various other social networks started having massive problems, an alternative existed.

A new recursive public built by the geeks that mirrored some of the forms of the platforms of the previous 15 years of the social networking era: different, but familiar enough that it allowed for use. Thus, once again, the geeks have remade the internet, building a community that they can use, and we are moving into the era of the Fediverse. But we'll have to explore that in a future episode. For now, let's wrap this up. I'm Dr. Implausible. It's been a pleasure to join you. Transcripts should be available on the blog sometime soon, within a day or so, and we'll also try and get a video version of this up on the YouTubes.

The whole show is produced under a Creative Commons Attribution-ShareAlike 4.0 license. Audio is by me, music is by me, and all the writing and stuff is too. No generative text or large language models have been employed in the production of this episode, and the world is moving pretty fast, so get out there and enjoy it. Until next time, I'm Dr. Implausible. Have fun.

References:
Anderson, B. R. O. (1991 [2006]). Imagined communities: Reflections on the origin and spread of nationalism. Verso.

Habermas, J. (1989). The structural transformation of the public sphere: An inquiry into a category of bourgeois society (T. Burger, Trans.). MIT Press.

Kelty, C. (2005). Geeks, Social Imaginaries, and Recursive Publics. Cultural Anthropology, 20(2), 185–214. https://doi.org/10.1525/can.2005.20.2.185

Taylor, C. (2004). Modern social imaginaries. Duke University Press.

Warner, M. (2002). Publics and Counterpublics. Public Culture, 14(1), 49–90.

Implausipod EP008: Audience Commodity

(Editor’s note: this is part 2 of the previous post on the audience commodity, which was drawn from a discussion thread on Mastodon. Much of that made it into the transcript of both the Youtube episode and the Podcast (both embedded below). This post will include the full transcript of the audio (and video), so there may be some duplication with the previous post, in the interest of completeness.

If this format of posting works out, then they should be better aligned in the future. Still working on the basics of the POSSE system. Better life through Additive Manufacturing though; iterate and improve. In the meantime, enjoy!)

The link to audio version, from Implausipod Episode 008 is here: https://www.implausipod.com/1935232/13185814-implausipod-e0008-audience-commodity


Introduction

Getting started with a brief rundown of an old article that details the rise of the audience commodity, Smythe (1977), "Communications: Blindspot of Western Marxism". We use that to explain the recent events on the internet of the last month or so, including the Twitter-pocalypse, the Reddit meltdown, the rise of ChatGPT, and some general media theory too.


Transcript

Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I'm your host, Dr. Implausible, and as we return to a regular recording schedule, I'm going to introduce you to the audience commodity, an old idea from economics that goes a long way to explain some of the current events we're seeing in the social media spaces.

What exactly is the audience commodity? Well, that's a fantastic question. With the recent introduction of Threads, a little bit ahead of schedule because of the Twitter apocalypse, I thought it'd be worth going into the background of it, because it's really got some relevance for the current events that are happening today. Because it was published in a relatively obscure Canadian academic journal back in the seventies, it hasn't seen that much adoption by mainstream economics, but we'll get into it. If that kind of thing is your bag, then by all means, stick around.

In short, the audience commodity is all about how you and I and all of us really are turned into products by the cultural industries, whether it’s media or advertisements or websites.

I’ll put the citation on the screen (see below) for those that are interested. The author, Dallas W. Smythe was writing it as a bit of a challenge to traditional Marxist economic thinking at the time in the seventies. He said they were getting it wrong when it came to the cultural industries and the impact that they actually had, what they were doing.

Now Dallas Smythe was a former economist at the FCC, and he was blacklisted due to McCarthyism. I mean, Hoover had a file on him, for reasons, and he is drawing heavily on a book called Monopoly Capital that was put out in the sixties by Baran and Sweezy. We should probably do a whole episode on that at some point in time, but we’ll see how this goes.

Now, for Smythe, the main argument speaks directly to Facebook or Meta's business model, and the same goes for Google and everything else too. And what is their business model? Websites? No. Apps? No. Advertising? Close, but still not the whole picture. Their business model is the production of the audience commodity. Advertisers buy audiences, and those audiences' time is their labor. And how did Smythe come to this conclusion? Well, he's asking a simple economic question: basically, what economic functions for capital do mass communication systems serve? And in this case, both Google and Facebook, Meta and Alphabet, whatever, fit in with the "mass" in mass communication; they have a huge reach. So in order to figure out the economic function, you need to figure out what the commodity those companies produce actually is. And you might think you know what this is; it's the whole "if you're not paying, you're the product" line. And this is a part of that, only in a lot more detail.

A part of Smythe's argument is that traditional economics was getting it wrong. If you asked "what does the media produce?", you might answer something like content or information or messages or entertainment or shows or something like that. And that's understandable; it's what it looks like they do. So you'd be forgiven if you thought that's how it worked, because that was the traditional, orthodox, idealist point of view. It was held by everybody from Marx to Galbraith to Veblen to McLuhan. There's a lot of academic writing on this idea, and non-academic writing too. Everybody thought that's what was going on. Smythe's argument is that it misses the point. If the trad orthodox view of economics is getting it wrong, what do the media companies actually produce?

What is the commodity form of advertising-sponsored communications under "late capitalism", or "monopoly capitalism" as Baran and Sweezy would say? The answer is audiences and readerships, or just the audience: the audience commodity. Here, the labor power of the workers is resold to the advertisers. This is normally, in the parlance of the time, called the consciousness industry.

So remember this: TV stations and walled platforms on the internet are factories that produce audiences for advertisers. That’s what’s coming outta the end of the factory. So that’s a lot of the overarching stuff. Let’s get into some of the specifics. Smythe has eight main points, and we’re gonna cover these quickly and then move on to how it connects to the social media platforms: Threads, Facebook, Twitter, TikTok, AOL, Reddit, whatever.  

So Smythe's questions, in order. Here we go. Question one: what do the advertisers buy with their money? Answer: the services of audiences in predictable numbers. It's a service economy, and we are the ones providing the service. We're also the ones being served up, which is, I guess, ironic. The commodity is the collective.

Question two: how do advertisers know they're getting what they paid for? Well, various ratings agencies back in the day, like Nielsen and whatever, and the analysis, which has largely moved in-house for streaming and internet platforms. There's a whole host of stuff that falls under the umbrella of market research.

Question three, what institutions produce the commodity that advertisers want? Well, we’ve hinted at this, but it’s principally and traditionally the owners of TV and radio stations and newspapers and magazine publishers, and we can add most web platforms to this nowadays ’cause they all work on the same model.  Of course there’s a host of secondary producers in industries that provide content for the principal market, obviously, but this is the main outlet.

Question four: what is the nature of that content, in economic terms, under monopoly capitalism or late capitalism? Well, it's an inducement. It's the free lunch that attracts the audience to the saloon. It gets 'em in the door and encourages them to stay. Now, this speaks nothing to the cost, the quality, or the format. In fact, the cheaper this can be procured, the better. A free lunch isn't free, obviously, but someone is providing the bread and the meat, and if the users bring their own, as is the case with social media, then even better. And what are those users doing?

Question five: what is the nature of the service performed for the advertiser by members of the purchased audiences? Well, the audience commodity is, in economic terms, a non-durable producer's good, bought and used in the marketing of the advertiser's product. The work that the audience is doing is to learn to buy and consume various brands of products and spend their income accordingly.

If they can develop brand loyalty while doing this, then that's fantastic. Now, there's a whole lot of work that goes into that learning. It's like the reproduction of ideology, in Althusserian terms, and there's a whole lot more going on. But we'll delve into this either later in this episode or in future episodes as we keep this going. For Smythe, question five is all about the management of demand.

And question six is the big one: How does the management of that demand relate to the notion of free or leisure time under the labor theory of value? And for Smythe the answer is: the goal under monopoly capitalism is for all non-sleeping time to be work time for most of the population. I’ll let you do the math on the missing percentage yourself, but basically free time and leisure time are all turned into work time and in the 21st century, even work time can do double duty as branded elements take place within work.

Now, Smythe goes on for about four pages in answering number six. It's his key point and there's a lot to unpack there. So again, we're gonna circle back, but in the interest of brevity:

Question seven: does the audience commodity perform an essential economic function? Well, the answer there is "it's complicated". As noted above, orthodox theories didn't really go into this, and mass media and brands came after Marx's time, so he didn't have much to say about them either. Smythe turns to Marx's Grundrisse to tease out an answer, where "production produces consumption", which is, I think, pages 91 and 92; there's a whole paragraph on it. So yes, there's an essential economic function taking place, but again, it isn't what we think it is.

Question eight addresses some of what we touched on earlier: why have the traditional Marxist economists been indifferent to the role of advertising? They were focused on content instead. Again, this is in the seventies, and it was obviously the shiny things: the content was front and center, so people thought that that was what was going on. Remember, this is 1977, a full decade before Edward Herman and Noam Chomsky published Manufacturing Consent, even though it was contemporaneous with some of Edward Herman's earlier writings.

Now, Smythe actually published two versions of this paper: the peer-reviewed article from 1977 that we've been using, and again as a chapter in 1981's Dependency Road. These are, again, foundational, critical for understanding what's going on. But what does it mean for right now? As I'm recording this, on the evening of July 6th, 2023, Facebook has just launched Threads, their Twitter competitor, within the last 24 hours.

Earlier this week when I was writing it, I thought the main argument would be the Reddit implosion and Twitter’s issues, which were leading to a mass exodus of users looking for an alternative and heading towards the Fediverse, including Mastodon, which is an ActivityPub protocol tool that’s very similar in some ways to early Twitter.

Earlier, back in June, or a thousand years ago it seems, there was a lot of discussion on the Fediverse because there was news that Facebook was using the ActivityPub protocol for their Threads tool. All of this has gone by at, like, you know, lightspeed, where sometimes decades happen in weeks, right?

Anyways, when I started drafting this in response to those particular events, and the general bad idea of engaging with Facebook on anything (we'll get into what Triple E means, probably in a future episode too), the online universe was vastly different. The Reddit moderator strike wasn't even a thing that had happened yet, and even though there were problems at Twitter, it didn't seem like the mass expulsion that happened on July 1st.

So let's tie it back to our main characters. Both Meta and Alphabet, Facebook and Google, are well entrenched as advertising companies at this point. There's no surprise going on there, and it's also reasonably well known what's going on when the auction service is used, being detailed in this explainer from The Markup (see below). I'll put the link up in the show notes here.

They also have a wonderful explainer article going into the breakdown of market segmentation that's done by, in this case, Microsoft and their Xandr platform, but which actually takes place behind the scenes at all of these major social media companies. And these major companies know exactly what they're doing, or they get into trouble when they lose sight of exactly what their core business model is: serving up an audience to their customers, the advertisers.
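To make the "serving up an audience" model a bit more tangible, here's a toy sketch of a simplified second-price auction over a hypothetical audience segment. This is my illustration, not The Markup's explainer or any real exchange's mechanism, and the advertiser names and segment label are made up; real ad auctions are far more elaborate, but the shape is the same: advertisers bid, and the audience's attention is the good being sold.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # bid per impression, in dollars

def run_second_price_auction(segment: str, bids: list[Bid]) -> tuple[Bid, float]:
    """Toy second-price auction: the highest bidder wins the impression for
    the given audience segment, but pays the second-highest bid amount."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner = ranked[0]
    clearing_price = ranked[1].amount if len(ranked) > 1 else winner.amount
    return winner, clearing_price

# Hypothetical advertisers bidding on a hypothetical audience segment.
bids = [
    Bid("TravelCo", 0.042),
    Bid("StreamingService", 0.035),
    Bid("MealKitBrand", 0.051),
]
winner, price = run_second_price_auction("frequent-travel-searchers", bids)
print(f"{winner.advertiser} wins the impression at ${price:.3f}")
```

Note that nothing in the sketch touches the content at all: what's being priced and sold is the segmented audience, which is exactly Smythe's point.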

Often they get distracted by thinking of themselves as content providers, and really that's not the case. The most famous example of this would be AOL. When they bought Time Warner and moved into providing content on a more regular basis, they kind of lost track of what they were doing. Their subsequent failure, and being overtaken by pretty much everything else on the internet, really speaks to them losing sight of that fact and investing in areas where they shouldn't have. If AOL had focused on either infrastructure or their core business model, the audience, they would've weathered the dot-com bust significantly better than all the other companies out there.

But they got distracted by the shininess of Hollywood and thought that they were in the content business. So too for Reddit and Twitter: some of the problems that they've had are because of moves that they've made to protect that content. But they can be forgiven slightly, because there's something that changed, something that Smythe didn't foresee back in 1977.

And that's AI. See, AI flipped the equation around a little bit and turned all that user-generated content provided by the labor of the audience for free into something useful: data for their large language models. You can understand why Elon Musk and Steve Huffman are a little bit miffed. Imagine you had a lumber mill and someone came in, took a look around, and said, "Hey, are you doing anything with all that sawdust?" and you said, "No, take it." And then they took that useless byproduct, added a little bit of glue to it, and all of a sudden turned it into, I don't know, designer Swedish furniture and made a mint. You'd be like, "What's going on here?", and try and stop them from taking the sawdust and figure out how to use it yourself, because all of a sudden, that stuff's gold.

Because they didn't know, or didn't understand the process, both Reddit and Twitter are in the process of lighting a fire in their factory and burning it to the ground. And meanwhile, the users, the audience commodity that was driving their business, are all exiting stage left. And that pretty much gets us up to now.

Now, we haven't even gone into some of the other events, like TikTok and the proposed ban that seems to be continually ongoing, or some of the other social media networks, or television, broadcast TV, and what's happening over there. And we also haven't really gone into Threads and their use of the ActivityPub protocol that we kind of hinted at.

But we need to get into something else related to that, and that's a philosophy called Triple E, or Embrace, Extend, Extinguish, but I think that's gonna be a whole other video. Things are moving pretty fast and I'm just one guy. So for now, we'll just wrap this up and try and catch the next one. I'm Dr. Implausible. The audio will be available over on the Implausipod, and the text of the show should be available on the blog or in the comments sometime soon. The whole show is produced under the Creative Commons Attribution-ShareAlike 4.0 International Public License. We'll try and make this one look prettier as I figure out how this whole video thing works.

But in the meantime, the world’s moving pretty fast, so we’ll see what it looks like in a week or so. I’m Dr. Implausible. Have fun.


Other links and references:

Baran, P. A., & Sweezy, P. M. (1966). Monopoly Capital. Monthly Review Press.

Smythe, D. W. (1981). Dependency Road: Communications, Capitalism, Consciousness, and Canada (Revised ed. edition). Praeger.

Eastwood, J., Hongsdusit, G., & Keegan, J. (2023, June 23). How Your Attention Is Auctioned Off to Advertisers – The Markup. https://themarkup.org/privacy/2023/06/23/how-your-attention-is-auctioned-off-to-advertisers

Keegan, J., & Eastwood, J. (2023, June 8). From “Heavy Purchasers” of Pregnancy Tests to the Depression-Prone: We Found 650,000 Ways Advertisers Label You – The Markup. https://themarkup.org/privacy/2023/06/08/from-heavy-purchasers-of-pregnancy-tests-to-the-depression-prone-we-found-650000-ways-advertisers-label-you


The Audience Commodity, an overview

From posts made to Mastodon account as of 2023-07-04
https://mastodon.online/@drimplausible

With the looming introduction of Threads and the subsequent integration with the fediverse, I thought a quick summary of a key piece of economics literature is in order. Likely too late, but perhaps not.

Basically, what is the Facebook or Meta business model?

The production of the audience commodity

(This is from 1977, by Dallas W. Smythe, so some of it may seem obvious in retrospect. Please read it through. Also I’m posting as I go, so it might take a bit).

So what is the question Smythe is trying to answer when it comes to the audience commodity? Basically, “what economic functions for capital do mass communications systems serve?” (And Google and Facebook both fit in with the “mass” in mass communications here).

In order to figure out this function, you need to figure out what the commodity they produce actually is. You might think you know the whole “if you’re not paying, you’re the product” line. This is part of that.

Now if you’re asked “what does media produce” you might answer something like content or information or messages or entertainment.

This is understandable. This is what it looks like they do. You’re forgiven if you thought that’s how it worked. This is the trad, orthodox, “idealist” POV. This is held by everyone from Galbraith to Marx to Veblen to McLuhan

So there’s a lot of press on this idea. Smythe’s argument is that it misses the point.

4/ So if the trad, orthodox, normal economics view of mass communication gets it wrong, what do they produce? What is the commodity form of advertising sponsored (mass)communications under late capitalism ?

Audiences and readerships.

The audience commodity.

Here the work, the labour power of the workers is resold to the advertisers. This is nominally the “consciousness industry”.

Remember: TV stations and walled platforms on the internet are factories that produce audiences for advertisers

So that’s a lot of the overarching stuff. let’s get into the specifics. Smythe has 8 main points. We’ll cover these quickly then move on to how it connects to Facebook and the fediverse

Q1) What do the advertisers buy with their money?
A) The services of audiences in predictable numbers.

It’s a service economy and we’re the ones providing the service.

(We’re also the ones being served up. Ironic!)

The commodity is the collective.

Q2) How do advertisers know they’re getting what they paid for?
A) Various ratings agencies, bitd, and the analysis which has largely moved in-house for streaming and internet platforms. This would be the Nielsen’s and a whole host of stuff under the umbrella of “market research”.

Q3): What institutions produce the commodity that advertisers want?
A) Principally, and traditionally, it’s the owners of TV and radio stations, and newspaper and magazine publishers. You can add most web platforms to this nowadays. Of course, there’s a host of secondary producers, and industries that provide content for the principal market, obviously, but this is the main outlet.

Q4) what is the nature of content in economic terms under late capitalism ?
A) it’s an inducement. It’s the “free lunch” that attracts the audience in the door, and encourages them to stay.

This speaks nothing to cost, “quality”, or format. In fact, the cheaper this can be procured, the better. A free lunch isn’t free, obvs, but someone is providing the bread and meat.

If the users bring their own, even better.

Q5): “What is the nature of the service performed for the advertiser by members of the purchased audiences?
A): The audience commodity is “a non-durable producer’s good bought and used in the marketing of the advertiser’s product”. The work the audience does is to learn to buy and consume various brands of products, and spend their income accordingly. If they can develop brand loyalty while doing this, even better

(Almost done, honest.)

Q6) How does the management of demand relate to the notion of “free” or “leisure” time under the labour theory of value?
A) The goal under monopoly capitalism is for all non-sleeping time to be work time. (For most of the population. I’ll let you do the math on the missing percentage yourself). Free time and leisure time are turned into work time. (And in the 21st century even work time can do double duty.)

(Note: Smythe goes on for 4 pages in Q6, above. It’s his key point and there’s a lot to unpack.)

Q7): Does the audience commodity perform an essential #economic function?
A) It's complicated. As noted above, orthodox theories didn't really go into this, and mass media and brands came after Marx's time, so he didn't say much about them either.

Smythe turns to the Grundrisse to tease out an answer: “production produces consumption” (p.91-2; that whole paragraph).

So, yes: essential.

Q8) Why have Marxist economists been indifferent to the role of advertising and focused on content instead?
A) Shiny things, obviously. Remember, this is being published in 1977, a decade before authors like Noam Chomsky would publish Manufacturing Consent.

Smythe published two versions of this, the peer-reviewed article I’ve been using, and again in 1981 in Dependency Road. Again, foundational. Critical for understanding what’s going on.

What does it mean for right now?

So just to link the above thread with some current events in social media:

Both Meta and Alphabet are well entrenched as advertising companies at this point. No surprises.

Also, it’s reasonably well known what’s going on, with the auction service being detailed in this explainer from @themarkup :
https://themarkup.org/privacy/2023/06/23/how-your-attention-is-auctioned-off-to-advertisers

And follow the link in their article to the breakdown of market segmentation by Microsoft in their Xandr platform.


(Part 2 coming tomorrow!)

A Thousand Plateaus (0/n)

“What is your ‘white whale’ book?” asked @schizophrenicreads on TikTok recently, and I knew the answer immediately: Deleuze and Guattari’s A Thousand Plateaus. It’s a book that I’ve bounced off several times, and in so doing always felt that I was somehow lacking in my understanding. When the subject(s) of the book are explained to me it always intuitively makes sense, but when I try and decipher the text I feel as though I’m trapped in Borges’ endless library reading something that’s just off by a dimension or two.

But I’m nothing if not persistent, so perhaps this time will be the charm.

I feel like it should be understandable. The later works of Deleuze that I've read (such as the "Postscript on the Societies of Control") were straightforward and easy to grok. Perhaps there was a significant shift in Deleuze's writing over time, becoming more refined, more focused. Or perhaps it's a question of translation, ever a cause for academic inscrutability and undergraduate confusion. Maybe Immanuel Kant is an easy read in the original German? Perhaps, but I suspect this is still not the case…

However, I'm going to document and share my journey of trying to crack this Whale of a text, interstitially, with the other content on this feed as I make my way through. Perhaps it'll help in making headway. To assist in the process, to ensure success, I've spoken with a friend and colleague who is somewhat of a Deleuzian scholar, and I've consumed a couple quick summaries, one of which I'll link down below. Here's to progress, and to bringing in that White Whale. Arggh!