An archive of positivity

Been thinking about this one for a few days*, so I created a page to collect links to stories about positive use cases for AI. I feel like this will be an evergreen document, and something I can refer back to in the future, as well as send a link to someone who denies the existence of potential positive impacts of AI.

Yes, there are some risks, and there is the potential that some of the stories are simply marketing. Part of the challenge will be sifting the hype-rbole from the actual positive uses. And of course, there are more stories out there than I can ever find, so if you come across this blog post in the future, feel free to send me any examples you find.


*: Maybe more than a few; it has kinda been lingering since I did the AI Reflections episode almost a year ago, last August.

Échanger

(This was originally released as Implausipod Episode 25, on January 2, 2024)

https://www.implausipod.com/1935232/14232183-implausipod-e0025-echanger



Échanger

Bonjour. J’ai une question à vous poser. Voulez-vous échanger avec moi? Really? Are you sure? That’s fantastic! Because sometimes the English language doesn’t have the right word that does exactly what you need it to do, that expresses the entirety of what you’re looking for. And in this case, that word, échanger, is what we’re going to use when we’re talking about automation.

I’ll explain more in this episode of The Implausipod.

Welcome to The Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in this episode, we’re going to take a look at part three of our two-part series on the Sphere in Las Vegas. Yeah, things got out of hand. And follow through on an observation that dominated the discourse in 2023 and deserves to be at the forefront of our discussion about technology in 2024 and beyond.

And that concept is échanger.

So I mentioned this the other episode when we were looking at the Sphere in Las Vegas and how it had a lot of workers that were doing fairly regular rote tasks, like holding up signs and directing traffic. And as they funneled everybody into the entrance of the Sphere, into the first floor of that massive auditorium, we met the robots, the Auras, that were doing almost exactly the same thing:

responding to the crowd, answering questions of the audience, and directing them. But responding to them personally. And it struck me at the time, especially as we were kind of going through and looking at five different Auras, the sisters, that were explaining what we saw in each of these stations, that each of them could do the job of the others, their human chaperones, without too much more training.

It was job replacement made real. And this is where I started to look for a term that can kind of encompass that. Now, it’s something that’s been discussed a whole lot, that idea of job loss through automation, and it’s accelerated in the last year since the release of ChatGPT and the other AI assisted art tools or large language models, as people are worried that that’s going to directly lead to job loss.

But that’s only one part of the story, as there’s also things like the development of the Boston Dynamics robots, and other robotic assisted tools that are taking the roles of persons, and dogs, and mules within various environments. And so we have this assemblage of different things that are all connected to this job loss.

And in order to encompass these factors, I found myself stumbling for a word. I recalled some of my training in grad school, where we were looking at the idea of actor-network theory and the author Michel Callon. In 1986, in his work titled Some Elements of a Sociology of Translation, he came up with the idea of interessement. He was French, of course, and he was using the French language to describe that shift that took place, a specific instance.

So I thought I’d reach out and draw on that inspiration, and see if perhaps a verb in French could encompass what we are seeing within the world at large. Hence, Échanger. And I like it. It works. I know there’s been some other authors who have used other verbs to describe different processes within the tech sphere lately, and sometimes those will get caught by language filters and sometimes they won’t, but I think Échanger, with all its multiplicity of meanings, adequately captures the breadth of what we’re looking for here when we’re talking about automation, agentrification via AI tools, and virtualization,

and what they might mean for workers that are working alongside machines that will replace them. That’s what the term means, or what it means now in the context of this episode, and in my reference to technological replacement. And speaking from a personal perspective, I have more than just an academic interest in échanger.

I’ve been automated out of jobs on at least a couple different occasions over the last 30 years, and I’ve experienced outsourcing from a worker perspective on a couple occasions as well. And in some cases, both at the same time. For example, in one of those instances, I was working for a local tech company that was manufacturing phone handsets.

And there were seven people working on the assembly line, and after a few months, they brought in one machine that could replace the role of one of the persons on the line. And our duty was to feed material into the machine. And then after that was tested and worked out, within a month, they brought in another one.

And slowly, that team of seven was whittled down to two, as we’d just really need somebody at the front end to load the parts, and at the back end to take out the manufactured ones and test them. And it ran pretty much 24/7. And after they had fine-tuned that, they packed up the whole factory and shipped it down to Mexico.

So we had both replacement, échanger, and outsourcing happening within the same instance. Now, obviously, this isn’t anything new, it’s been happening for years. The term technological unemployment was originally proposed by Keynes and included in his Essays in Persuasion from 1931, and has been returned to many times since, including by Nobel Prize winner Wassily Leontief in his paper titled Is Technological Unemployment Inevitable?

Daniel Susskind writes in his 2020 book, A World Without Work, that there can be two kinds of technological unemployment, frictional and structural. Frictional tech unemployment is the kind that is imposed by switching costs and not all workers being able to transition to the new jobs available in the changed economy.

The friction prevents the workers from moving as freely as needed. And this is what was happening in my experience with the jobs where échanger occurred. I want to be clear, a lot of those jobs that I was automated out of were not great. It was hard, demanding work, or physical work that was replaced by labor saving devices, in this case, machines.

But it still meant a job loss, and there was one less role, or entry level role, for a high school student, or college student, or casual worker, or whatever I was at the time.

Échanger (part 2)

And that’s part of the problem. On March 27th, 2023, the Economics Research Department at Goldman Sachs released a report titled The Potentially Large Effects of Artificial Intelligence on Economic Growth, otherwise known as the Briggs-Kodnani Report. The report was published several months after the release of ChatGPT to the general public and captures the fear that was seen during its initial wave of use.

The report focuses on the economic impacts of generative AI and its ability to create content that is, quote, indistinguishable from human created outputs and breaks down communication barriers, end quote, and speculates what the macroeconomic effects of a large scale rollout of such technology would be.

Now, the authors state that this large-scale introduction of AI tools could be a significant disruption to the labor market. The authors take a look at the occupational tasks within jobs, and using standard industry classifications, they find that approximately two thirds of current jobs are exposed to some degree of AI automation.

And generative AI could, quote, substitute up to one fourth of current work, end quote. Now, if you take those estimates, like they did, it means it could expose something like 300 million full-time jobs to automation through AI, or what I like to call agentrification. And that’s over a 10-year period. This would create an incredible amount of churn in the workforce, and whenever we hear about churn, we need to consider the human costs behind those terms.
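As a rough back-of-envelope illustration (not the report’s actual task-level methodology), the headline arithmetic can be sketched like this. Note that the total-jobs figure below is a hypothetical assumption, chosen only so that the report’s published ratios reproduce its roughly 300 million number:

```python
# Back-of-envelope sketch of the Briggs-Kodnani headline estimate.
# TOTAL_FTE is a hypothetical global full-time-equivalent job pool,
# picked so the published ratios reproduce the ~300M figure; the
# actual report works from detailed occupational task data instead.

def exposed_jobs(total_jobs: float, exposure_share: float) -> float:
    """Jobs exposed to some degree of AI automation."""
    return total_jobs * exposure_share

def substitutable_jobs(exposed: float, substitution_share: float) -> float:
    """Portion of exposed work that could be substituted outright."""
    return exposed * substitution_share

TOTAL_FTE = 1_800_000_000   # assumption, not from the report
EXPOSURE = 2 / 3            # "two thirds of current jobs are exposed"
SUBSTITUTION = 1 / 4        # "substitute up to one fourth of current work"

exposed = exposed_jobs(TOTAL_FTE, EXPOSURE)
at_risk = substitutable_jobs(exposed, SUBSTITUTION)
print(f"{at_risk / 1e6:.0f} million full-time jobs")  # 300 million full-time jobs
```

The point of the sketch is only that the 300 million number is the product of two ratios applied to a very large base, which is why small changes in either assumed share move the headline figure by tens of millions of jobs.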

A lot of people will lose their jobs, and well, Schumpeterian creative destruction generally means that people get new jobs, or that old workers that haven’t moved become more productive. A study by David Autor and others from 2022, looking at U.S. census data from 1940 to 2018, found that 60 percent of workers in 2018 were working at jobs that did not exist in 1940, and that most of this growth is fueled by technology-driven job creation.

But there’s usually a lag between the two, between losing one job and having tech create new positions, the frictional tech unemployment we mentioned earlier. But there could also be more, the second kind mentioned above, structural technological unemployment. As stated by Briggs and Kodnani, there could very well be just some permanent job losses, and that can be a challenge for us to address as a society.

Now, with the productivity growth, Briggs and Kodnani argue we could see a 1.5 percent growth over a 10-year period following widespread adoption, so the timing for all of this is actually quite distant. Everybody’s thinking everything’s going to end immediately, and that’s not necessarily the case. But it sure can feel like it’s coming around the corner right away.

The authors also estimated that GDP globally could increase by 7%, but that would depend on a whole lot of factors, so I’d like to bracket off that prediction, as there’s too many variables involved. The two things I really found interesting about their report were (a) the timescale that they’re looking at, and (b) the specific jobs that they’re looking at.

So, as I said, the ability to predict specific GDP effects on something as large-scale as this, across the whole economy, on a 10-year timeframe? Let’s not do that. You can put numbers to it, but I think there’s just far too much speculation involved to actually get to that level of precision with anything.

The interesting thing in the paper was their estimate of the work tasks that could be automated in the industries that could be more significantly affected. There’s two key charts for this. It’s Exhibit 5, which is the share of industry employment exposed to automation, and Exhibit 8, which is the share of industry employment by relative exposure to automation by AI.

And there are some of these where you’re not going to see any automation improvements; some industries are just not really going to take a hit. But some of them could have AI as a complement, and some of them will have AI as a replacement. And this is in Exhibit 8, and I think this is probably the most interesting thing in the whole article.

The thing the Briggs and Kodnani report captures is a lot of the public’s initial impressions of OpenAI, and of ChatGPT as well. This drove some of the furor, because as people were able to access the tool and use it, one of the things they’d naturally do is ask: well, does this help me? Can I use this for my own job?

And how well does this do my own job? So a lot of the initial uproar and the impacts from ChatGPT was people using it to see how it would do their job and being concerned with what they saw. So I think a lot of their concerns and fears are well founded. If you’re doing basic coding tasks, and the tool is able to replicate some of those tasks fairly simply, you’re like, oh my god, what’s going on?

If you’re doing copywriting or any of those roles that see a significant amount of replacement, as in Exhibit 8 of the report, like office and administrative support, and legal, you know, traditionally one of those things we didn’t really think would be automated, you’re going to have some serious concerns.

Martin Ford’s book, The Rise of the Robots, talks about that white-collar replacement, where we’re seeing job loss and automation in roles that traditionally hadn’t seen it before. When we think of échanger, when we think of automation, we think of it as, like, large industrial machinery. We’re thinking of things like factory machines, able to produce something that a craftsman might have had to work at for long hours, but able to do that at an industrial scale

or rapid scale. And this change has us going all the way back to the era of the Luddites in the early industrial revolution in England. Now, when ChatGPT launched, we’re starting to see the process of what I like to call agentrification, tech replacement through AI tools. And basically, we’re having automation of white collar work in things like the legal field.

I mean, this might fly under the radar for a lot of academic analysis, but if you’re paying attention to what gets advertised, there were signs. Tools like LegalZoom were continually advertised on the Jim Rome sports talk show over a decade ago, and the fact that such work could be centralized and outsourced indicates that there are, you know, some risks of échanger involved in those particular fields.

Now, there’s other fields where this white-collar work is at the risk of échanger as well. The Hollywood strikes of 2023 had similar motivations. Though their industries were moving quicker to roll out the tools, being on the forefront of their use, the Actors Guild and the Writers Guild were much more proactive against the tools, because they saw the role the tools would play in their replacement.

Given the role of the cultural industries, like movie production, being at the leading edge of soft innovation, we were already seeing digital de-aging tech and reinsertion in major motion pictures, notably from Disney properties like Star Wars with both Peter Cushing and Carrie Fisher, whose likenesses were used in films after they had passed away, and the de-aging of Harrison Ford in Indiana Jones 5.

This leads to an interesting question. Can Échanger lead to a replacement of you with your younger self? I don’t know. Let’s explore that a bit more, next.

Échanger (part 3)

On December 2nd, 2023, the rock band KISS played their final show at Madison Square Garden. Now, this may have not been newsworthy, as they had been doing “last show ever” tours since late last century, but as the members were now in their 70s, there was a feeling that they really meant it this time. However, at the end of the show, they revealed that they weren’t quite done just yet, and they unveiled their quote unquote immortal digital avatars that will represent the band on stage in the future.

Now, KISS aren’t the first in doing this by any means. The Swedish pop band ABBA has been doing this for a while, and KISS contacted the same company, Pophouse Entertainment, to work on their avatars. Now, Bloomberg News reports that the ABBA shows are pulling in 2 million a week. Yes, you heard that correctly.

Clearly, I’m in the wrong business. But this trend to virtual entertainers has been happening for a while. When a hologram Tupac appeared with Snoop Dogg and Dr. Dre at Coachella in 2012, it was something that had already been in the works. Bands like Gorillaz and Dethklok had long used virtual or animated avatars, and within countries like South Korea, virtual avatars are growing in popularity as well, like Mave:, the four-member virtual K-pop group that’s been moving up the charts in 2023.

We noted a few episodes ago that one of the challenges for 21st-century entertainment complexes like the Sphere is providing enough continuous content, and virtualized groups like this may well be able to fill that role, and allow the Sphere to provide content worldwide by having virtual avatars that can fill the entire space in ways that Bono and the Edge on a small stage in front of a massive screen can’t quite do. And more than just this, the shift to remote that’s happened as part of the pandemic response could mean this technology could be rolled out in education and other fields as well.

So we’re just seeing the thin edge of the wedge of this virtualization component of Échanger. With large companies like Apple and Meta continually pushing the Metaverse, we’re going to see more and more of it in the coming years. So 2024 may well be the year of virtualization. We’ll dive further into virtualization and the Metaverse in upcoming weeks here on the Implausipod.

Why échanger? (part 4)

Well, basically it covers three things. We’ve kind of discovered it covers automation, which is the industrial process that we’ve been seeing for centuries now. It covers virtualization, the shift to digital in entertainment, education, conferences, and distribution. And the third thing it covers is agentrification, the replacement of workers or roles or jobs by AI.

So, this is different than outsourcing, as outsourcing may work in conjunction with some of the above, as noted in my own personal experience earlier, and these are all metaprocesses of the trends towards technological unemployment. If we look at any of these, automation, virtualization, and agentrification, they’re all metaprocesses of translation.

Now, the work I mentioned earlier by Michel Callon, Some Elements of a Sociology of Translation from 1986, is basically talking about that, describing what we call a flat ontology. An ontology, in this case, is a way of describing the world. And what a flat ontology does is it treats the actors in the world as similar.

So, normally, when we talk about an ontology, we’re talking about like with like, right? We’re talking about people, or we’re talking about things, or we’re talking about institutions, firms, we’re looking at things on the same level. When we flatten the ontology, we treat all the actors or agents in the system equally, and we can look at the power relations between them.

We use the same terms for the actors, so in this case, it would mean human and non human actors are treated in the same way. We treat the things the same as the people. That doesn’t necessarily mean we treat the people as things, but we say that everything here has to be described with the same terms when it comes to their agency.

This is what interessement means: that in-between state, the interposition. When Michel Callon is talking about translation between asymmetrical actors, it’s that moment where we connect dissimilar things. And so this is where we come into the idea of échanger as a metaprocess for these three trends of replacement.

And that’s why we chose échanger for this process of translation as well. Échanger is a process of translation of a different kind. Échanger is the metaprocess of having something different do the job or being a replacement for the task. So if échanger means in French, literally a trade and exchange, a swap, then we’re extending or exapting the term a little bit in this case, where to us échanger means replacement in place.

So if we return to our example from the Sphere in Las Vegas, we can see this happening with the Auras and the workers. The role is similar, but it’s a different agent, different actor that is taking that place. This is what we see with virtualization as well, or automation, the agentrification that’s taking place due to AI.

And sometimes those machines, those tools, those devices, means the job of many can be done by one. But it also means that the one still occupies the same place within the network of tasks and associations within the process around it. Think of those machines embedded in the assembly line I mentioned earlier.

Where the staff went down from 7 to 2, and the production line was turned into a black box with inputs and outputs. But what’s actually going on in that black box? We might have some questions. With some automated processes, we can tell. But with AI tools, we don’t necessarily know. And that can be a significant problem, especially when we’re facing échanger.


Bibliography:

Autor, D., Chin, C., Salomons, A. M., & Seegmiller, B. (2022). New Frontiers: The Origins and Content of New Work, 1940–2018 (Working Paper 30389). National Bureau of Economic Research. https://doi.org/10.3386/w30389

Ford, M. (2016). The Rise of the Robots: Technology and the Threat of Mass Unemployment. Oneworld Publications.

Hatzius, J., et al. (2023). The Potentially Large Effects of Artificial Intelligence on Economic Growth (the Briggs-Kodnani report). Goldman Sachs Economics Research. Retrieved December 5, 2023.

Leontief, W. (1979). Is Technological Unemployment Inevitable? Challenge, 22(4), 48–50.

Susskind, D. (2020). A World Without Work: Technology, Automation, and How We Should Respond. Metropolitan Books.

They’re not human? AI-powered K-pop girl group Mave: eye global success. (2023, March 17). South China Morning Post.

Tupac Coachella hologram: Behind the technology. (2012, November 9). CBS News.

Implausipod E0012 – AI Reflections

AI provides a reflection of humanity back at us, through a screen, darkly. But that glass can provide different visions, depending on the viewpoint of the observer. Are the generative tools that we call AI a tool for advancement and emancipation, or will they be used to further a dystopic control society? Several current news stories give us the opportunity to see the potential path before us leading down both these routes. Join us for a personal reflection on AI’s role as an assistive technology on this episode of the Implausipod.

https://www.buzzsprout.com/1935232/episodes/13472740

Transcript:

In the week before August 17th, 2023, something implausible happened. There was a news report that a user looking for can’t-miss spots in Ottawa, Ontario, Canada, would be returned some unusual results on Microsoft’s Bing search. The third result down, an article from MS Travel, suggested that users could visit the Ottawa Food Bank if they’re hungry, and that they should bring an appetite.

This was a very dark response, a little odd, and definitely insensitive, making one wonder if this was done by some teenage pranksters or hackers, or if there was a human involved in the editing decisions at all. Because initial speculation was that this article – credited to Microsoft Travel – may have been entirely generated by AI. Microsoft’s subsequent response in the week following was that it was due to human error, but doubts remain, and I think the whole incident allows us to reflect on what we see in AI, and what AI reflects back to us… about ourselves, which we’ll discuss in this episode of the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and today on episode 12, we’re gonna peer deeply into that glass, that formed silicon that makes up our large language models and AI, and find out what they’re telling us about ourselves.

Way back in episode three, which admittedly is only nine episodes but came out well over a year ago, we looked at some of the founding figures of cyberpunk, and of course one of those is Philip K. Dick, who’s most known for Do Androids Dream of Electric Sheep?, which became Blade Runner, and now The Man in the High Castle, and other works which are yet unadapted, like The Three Stigmata of Palmer Eldritch. But one of his most famous works was of course A Scanner Darkly, which had a rotoscoped film version released in 2006 starring Keanu Reeves. Now, the title is a play on the biblical verse from 1 Corinthians, where it’s phrased as looking “through a glass darkly”. And even though there’s some ambiguity there, whether it’s a glass or a mirror, or in our context a filter, or in this case a scanner or screen, with the latter two being the most heavily technologized of them all, the point remains, whether it’s a metaphor or a meme, that by peering through the mirror, the reflection that we get back is but a shadow of the reality around us.

And so too, it is with AI. The large language models, which have been likened to “auto-complete on steroids”, and the generative art tools (which are like the procedural map makers that we discussed in an icebreaker last fall) have gained an incredible amount of attention in 2023. But with that attention has come some cracks in the mirror, and while there is still a lot of deployment of them as tools, they’re no longer seen as the harbinger of AGI or (artificial) general intelligence, let alone superintelligence that will lead us on a path through a technological singularity. No, the collection of programs that have been branded as AI are simply tools, what media theorist Marshall McLuhan called “Extensions of Man”, and it’s with that dual framing of the mirror extended in our hand that I wanna reflect on what AI means for us in 2023.

So let’s think about it in terms of a technology. In order to do that, I’d like to use the most simple definition I can come up with; one that I use as an example in courses I’ve taught at the university. So follow along with me and grab one of the simplest tools that you may have nearby. It works best with a pencil or perhaps a pair of chopsticks, depending on where you’re listening.

If you’re driving an automobile, please don’t follow along; try this later, when you’re safely stopped. But take the tool and hold it in your hands as if you were about to use it, whether to write or draw or to grab some tasty sushi or a bowl of ramen. You do you. And then close your eyes and rest for a moment.

Breathe, and then focus your attention down to the tool in your hands, held between your fingers, and reach out. Feel the shape of it: you know exactly where it is, and with the stretch of your attention you can kind of feel where the end of it actually exists. The tool has now become part of you, a material object that is next to you and extends your senses and what you are capable of.

And so it is with all tools that we use, everything from a spoon to a steam shovel, even though we don’t often think of that as such. It also includes the AI tools that we use, that constellation of programs we discussed earlier. We can think of all of these as assistive technologies, as extensions of ourselves that multiply our capabilities. And open your eyes if you haven’t already.

So what this quick little experiment is helpful in demonstrating is just exactly how we may define technology. Here, using a portion of McLuhan’s version, we can see it as an extension of man, but there have been many other definitions of technology before. We can use other versions that focus on the artifacts themselves, like Feibleman’s, where tech is “materials that are altered by human agency for human usage”, but this can be a little instrumental. And at the other extreme, we have those from the social construction school, like John Law’s definition of “a family of methods for associating and channelling other entities and forces, both human and non-human”. Which, when you think about it, does capture pretty much everything relating to technology, but it’s also so broad that it loses a lot of the utility.

But I’ve always drawn a middle line and my personal definition of technology is it’s “the material embodiment of an artifact and its associated systems, materials, and practices employed to achieve human ends”. I think we need to capture both the tool and the context, as well as the ways that they’re employed and used, and I think this definition captures the generative tools that we call AI as well. If we can recognize that they’re tools used for human ends and not actors with their own agency, then we can change the frame of the discussion around these generative tools and focus on what ends they’re being used for.

And what they’re being used for right now is not some science fictional version, either the dystopic hellscapes of the Matrix or Terminator, or on the flip side, the more utopic versions, the “Fully Automated Luxury Communism” that we’d see in the post-scarcity societies of, like, Star Trek: The Next Generation, or even Iain M. Banks’s Culture series. Neither of these is coming true, but those poles – that ideation, these science fiction versions that kind of drive our collective imagination of the publics, the social imaginaries that we talked about a few episodes ago – these poles represent the two ends of that continuum, of that discussion, that dialectic between utopic and dystopic and the way we frame technology.

As Anabel Quan-Haase notes in their book on Technology and Society, those poles, the utopic idea of technology as achieving progress through science, and the dystopic idea of technology as a threat to established ways of life, are both frames of reference. They could both be true depending on the point of view of the referrer. But as we said, it is a dialectic; there is a dialogue going back and forth between these two poles continually. So technology in this case is not inherently utopic or dystopic, but we have to return again to the ends that the technology is put towards: the human ends. So rather than utopic or dystopic, we can think of technology as being either emancipatory or controlling, and it’s in this frame, through this lens, this glass, that I want to peer at the technology of AI.

The emancipatory frame for viewing these generative AI tools sees them as an assistive technology, and it’s through this frame, this lens, that we’re going to look at the technology first. These tools are exactly that: they are liberating, they are freeing. And whenever we want to take an empathetic view of technology, we wanna see how they may be used by others who aren’t in our situation. And that situation means they may be doing okay, they might even be well off, but they may also be struggling. There may be issues or challenges that they have to deal with on a regular basis that most of us can’t even imagine. And this is where my own experience comes from. So I’ll speak to that briefly.

Part of my background: when I was doing my fieldwork for my dissertation, I was engaged with a number of the makerspaces in my city, and some of them were working with local need-knowers, or persons with disabilities, through groups like the Tikkun Olam Makers, as well as the Makers Making Change groups. These groups worked with persons with disabilities to find solutions to their particular problems: problems for which there often wasn’t a market solution available, because it wasn’t cost-effective. You know, the “capitalist realism” situation that we currently are under means that a lot of needs, especially for marginal groups, may go unmet. And these groups came together to try and meet those needs as best they could through technological solutions, using modern technologies like 3D printing or microcontrollers or what have you, and they did it through regular events, whether it was a hackathon or regular monthly meetup groups, or using the space provided by a local makerspace. And in all these cases, what these tools do is liberate people from some of the constraints or challenges that are experienced in daily life.

We can think of more mainstream versions, like a mobility scooter that allows somebody with reduced mobility to get around and more fully participate within their community, to meet some of their needs on a regular basis; even something as simple as that can be really liberating for somebody who needs it. We need to be cognizant of that, because as the saying goes, we are all at best just temporarily able, and we never know when a change may be coming that could radically change our lives. So that empathetic view of technology allows us to think with some forethought about what may happen, as if we or someone we love were in that situation. And it doesn’t even have to be somebody that close to us; we can have a much more communal or collective view of society as well.

But to return to this liberating view, we can think about it in terms of those generative tools, whether they're for text or art or programming, or even helping with a little bit of math. We can see how they can assist us in our daily lives, either by fulfilling needs or by allowing us to pursue opportunities that we thought were too daunting. While the generative art tools like DALL-E and Midjourney have been trained on already existing images and photographs, they allow creators to use them in new and novel ways.

It may be that a musician can use the tools to create a music video where before they never had the resources, time, or money in any way, shape, or form to actually pursue that; it allows them to expand their art into a different realm. Similarly, an artist may be able to create stills that go with a collection or accompany the writing that they're working on, or an academic could use them for slides to accompany a presentation, a YouTube video, or even a podcast and its title bars and the like (present company included). My own personal experience when I was trying to launch this podcast was that there was all this stuff I needed to do, and the generative art tools, the cruder ones that were available at that time, allowed some of the art assets to be filled in, and that barrier to launch, that barrier to getting something going, was removed.

So: emancipatory, liberating, even if at a much smaller scale. Those barriers were removed, and that allowed creativity to flow in other areas. And it works similarly across these generative tools, whether it's putting together some background art, a campaign map, or a story prompt. You might need some background for characters that are part of a story, as an NPC, as a Dungeon Master, or what have you, or even just something to bounce coding ideas off of and refine them. I mean, the coding skills are rudimentary, but they do allow for something functional to be produced.

And this leads into some of the examples I'd like to talk about. The first one is from a post by Brennan Stelli on Mastodon on August 18th, where he said that we could leverage AI to do work which is not being done already because there's no budget, time, or know-how. There's a lot of work that falls into this space: stuff that needs to be done but is outside the scope of a particular project. This could include something like developing visualizations that allow him to better communicate an idea in a fraction of the time, you know, minutes instead of the hours it would normally take, and so we can see that Brennan's experience mirrors a lot of our own.

The next example’s a little bit more involved in an article written by Pam Belluck and published on the New York Times website on August 23rd, 2023. She details how researchers have used predictive text as well as AI generated facial animations that help with an avatar and speech that assist the stroke victim in communicating with their loved ones.

And the third example, one that hit a little closer to home, was that of a Stanford research team that used a BCI, or brain-computer interface, along with AI-assisted predictive text generation to allow a person with amyotrophic lateral sclerosis, or ALS, to talk at a regular conversational tempo. The tools read the neural activity associated with the movement of the facial muscles and translate it into text. These are absolutely groundbreaking and amazing developments, and I can't think of any better example of how AI can be an assistive technology.

Now, most of these technologies are confined to text and screen, to video and audio, but when we think of AI, we often think of mobility as well. The robotic assistants that have come out of research labs like that of Boston Dynamics have attracted a lot of attention, but even there we can see some of the potential for assistive technology; the fact that it's embodied in a humanoid robot means we sometimes lose sight of that. In the video they released in January of 2023, an Atlas robot acts as an assistant on a construction site, fetching tools and moving things around in aid of the human who's the lead on the project. It allows a single contractor working on their own to extend what they're able to do, even without access to a human helper. So it still counts as an assistive technology, even though here we start to see the dark side of the reflection through this particular lens: an emancipatory technology may also mean emancipation from the work that people currently have available to them.

In all of these instances, there's the potential for job loss, that the tools will take the place of someone doing that work, whether as a writer, an artist, a translator, a transcriber, or a construction assistant, and those are very real concerns. I do not want to downplay them. Part of our reflection on AI has to take into account that the dark side of the mirror (or the flip side of the magnifying glass) can take something helpful and exacerbate its harms when it's applied to society at large. The concerns about job loss are similar to concerns we've had about automation for centuries, and they're still valid. What we're seeing is an extension of that automation into realms we thought were previously the exclusive domain of human actors: creators, artists, writers, and the like.

This is why AI and generative art tools are such a driving and divisive element in the current WGA and SAG-AFTRA strikes: the future of Hollywood could be radically different if they see widespread usage. And beyond automation and potential job loss, a second area of concern is that ChatGPT and the large language models don't necessarily have any element of truth involved; they're just producing output. Linguists like Professor Emily Bender of the University of Washington, co-host of the Mystery AI Hype Theater 3000 podcast, have gone into extensive detail about how the output of ChatGPT cannot be trusted: it has no linkage to truth. And other scholars have gone into the challenges of using ChatGPT or LLMs for legal research, academic work, or anything along those lines. I think it still has a lot of potential and utility as a tool, but it's very much a contested space.

And the final area of contestation that we'll talk about today is the question of control. That question has two sides. The first is the control of the AI itself. The version that most often surfaces in our collective imaginary is the idea of rogue superintelligences or killer robots, repeated in TV, film, and our media in general, and it does get addressed at an academic level in works like Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence. Both address what happens if artificial intelligences get beyond human capacity to control them.

But the other side is the control of us, the control of society. That gets replicated in our media as well, in everything from Westworld to the underlying themes of the TV series Person of Interest, where The Machine is a computer system developed to detect, anticipate, and suppress terrorist action using the tools of the post-9/11 surveillance state it has access to.

Gilles Deleuze, in his Postscript on the Societies of Control back in 1990, accurately captured the shifts that have occurred in our societies: from the sovereign societies of the Middle Ages and Renaissance, through the disciplinary societies that typified the 18th and 19th centuries, to the shift in the 20th and 21st centuries toward a control society, where the logic of society is enforced and regulated by computers and code. And while Deleuze was not talking about algorithms and AI in his work, we can see how they're a natural extension of it: how the biases ingrained within our algorithms, what Virginia Eubanks talked about in her book Automating Inequality, and the biases and assumptions that go into the coding and training of those advanced systems can manifest in everything from facial recognition, to policing, to recommendation engines on travel websites that suggest that perhaps you should go to the food bank to catch a meal.

Now there's a twist to our Ottawa food bank story, of course. About a week later, Microsoft came out and said that the article had been removed and that the issue had been identified as human error, not an unsupervised AI. But even with that answer, there are those who are skeptical, because it didn't happen just once: there were a lot of articles where such weird or incongruous elements showed up. And of course, this being the internet, there were a number of people who kept the receipts.

Now there's a host of reasons for what might be happening with these bad reviews, some plausible and some slightly less so. It could just be an issue of garbage in, garbage out: the content being scraped to power the AI may be drawing on existing articles from satire or meme sites. If the information you're getting on the web is coming from Something Awful or 4chan, then you're going to get some dark articles in there. But the other alternative is that it could just be hallucinations, which have been an observed phenomenon with these AIs and large language models. Incidents like the one we saw with Loab, which we talked about in an icebreaker last year, are still coming forward in ways that are completely unexpected and out of our control.

That scares us a little bit because we don’t know exactly what it’s going to do. When we look at the AI through that lens, like in the mirror, what it’s reflecting back to us is something we don’t necessarily want to look at, and we think that it could be revealing the darkest aspects of ourselves, and that frightens us a whole lot.

AI is a reflection of our society and ourselves, and if we don’t like what we’re seeing, then that gives us an opportunity to perhaps correct things because AI, truth be told, is really dumb right now. It just shows us what’s gone into building it. But as it gets better, as the algorithms improve, then it may get better at hiding its sources.

And that's a cause for concern. We're rapidly reaching a point where we may no longer be able to tell or spot a deepfake or artificially generated image or voice, and this may be used by all manner of malicious actors. So as we look through our lens at the future of AI, what do we see on our horizon?

References:
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Deleuze, G. (1992). Postscript on the Societies of Control. October, 59, 3–7.

Eubanks, V. (2018). Automating Inequality. Macmillan.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Quan-Haase, A. (2015). Technology and Society: Social Networks, Power, and Inequality. Oxford University Press.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Links:
https://arstechnica.com/information-technology/2023/08/microsoft-ai-suggests-food-bank-as-a-cannot-miss-tourist-spot-in-canada/

https://tomglobal.org/about

https://www.makersmakingchange.com/s/

https://arstechnica.com/health/2023/08/ai-powered-brain-implants-help-paralyzed-patients-communicate-faster-than-ever/

https://blog.jim-nielsen.com/2023/temporarily-abled/

https://www.businessinsider.com/microsoft-removes-embarrassing-offensive-ai-assisted-travel-articles-2023-8