AI Refractions

(this was originally published as Implausipod Episode 38 on October 5th, 2024)

https://www.implausipod.com/1935232/episodes/15804659-e0038-ai-refractions

Looking back on the year since the publication of our AI Reflections episode, we take a look at the state of the AI discourse at large, where recent controversies, including those surrounding NaNoWriMo, whether AI counts as art, and whether it can assist with science, bring the challenges of studying the new medium to the forefront.


In 2024, AI is still all the rage, but some are starting to question what it’s good for. There are even a few who will claim that there’s no good use for AI whatsoever, though this denialist argument takes it a little too far. We took a look at some of the positive uses of AI a little over a year ago in an episode titled AI Reflections.

But it’s time to check out the current state of the art, take another look into the mirror and see if it’s cracked. So welcome to AI Refractions, this episode of ImplausiPod.

Welcome to The ImplausiPod, an academic podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in this episode, we’ve got a lot to catch up on with respect to AI. So we’re going to look at some of the positive uses that have come up, how AI relates to creativity, and how statements from NaNoWriMo caused a bit of controversy.

And how that leads into AI’s use in science. But it’s not all sunny over in AI land. We’ve looked at some of the concerns before with things like échanger, and we’ll look at some of the current critiques as well. Then we’ll look at the value proposition for AI, and how recent shakeups with OpenAI in September of 2024 might relate to that.

So we’ve got a lot to cover here on our near one year anniversary of that AI Reflections episode, so let’s get into it. We’ve mentioned AI a few other times since that episode aired in August of 2023. It came up in episode 28, our discussion on black boxes and the role of AI handhelds, as well as episode 31 when we looked at AI as a general purpose technology.

And it also came up a little bit in our discussion about the arts, things like échanger and the Sphere, and how AI might be used to assist in higher fidelity productions. So it’s been an underlying theme across a lot of our episodes, and I think that’s just the nature of where we sit in relation to culture and technology.

When you spend your academic career studying the emergence of high technology and how it’s created and developed, when a new one comes on the scene, or at least becomes widely commercially available, you’re going to spend a lot of time talking about it. And we’ve been obviously talking about it for a while.

So if you’ve been with us for a while, first off, you’re Thank you, and this may be familiar to you, and if you just started listening recently, welcome, and feel free to check out those episodes that we mentioned earlier. I’ll put links to the specific ones in the text. And looking back at episode 12, we started by laying down a definition of technology.

We looked at how it functioned as an extension of man, to borrow from Marshall McLuhan, but the working definition of technology that I use, the one that I published in my PhD, is that “Technology is the material embodiment of an artifact and its associated systems, materials, and practices employed to achieve human ends.”

And this definition of technology covers everything from the sharp stick and sharp-stick-related technologies like spears, pencils, and chopsticks, to our more advanced tech like satellites and AI and VR and robots and stuff. When you really think about it, it’s a very expansive definition, but that breadth is part of its utility, allowing us to recognize and identify things.

And by being able to cover everything from sharp sticks to satellites, from language to pharmaceuticals to games, it really covers the gamut of things that humans use technology for, and it contributes to an emancipatory view of technology: that technology is ultimately assistive and can aid us with issues that we’re struggling with.

We recognize that there’s other views and perspectives, but this is where we fall down on the spectrum. Returning back to episode 12, we showed how this emancipatory stance contributes to an empathetic view of technology, where we can step outside of our own frame of reference and think about how technology can be used by somebody who isn’t us.

Whether it’s a loved one, somebody close to us, or even a member of our community or collective, or you. More widely ranging, somebody that we’ll never come into contact with. How persons with different abilities and backgrounds will find different uses for the technology. Like the famous quote goes, “the street finds its own uses for things.”

Maybe we’ll return back to that in a sec. We finished off episode 12 looking at some of the positive uses of AI at that time that had been published just within a few weeks of us recording that episode. People were recounting how they were finding it as an aid or an enhancement to their creativity, and news stories were detailing how the predictive text abilities as well as generative AI facial animations could help stroke victims, as well as persons with ALS being able to converse at a regular tempo.

So by and large it could function as an assistive technology, and in recent weeks we have started trying to catalogue all those stories. Back in July, over on the blog, we created the Positive AI Archive, a place where I could put the links to all the stories that I come across. Me being me, I’ve forgotten to update it since, but we’ll get those links up there and you should be able to follow along.

We’ll put the link to the archive in the show notes regardless. And, in the interest of positivity, that’s kinda where I wanted to start the show.

The street finds its own uses for things. It’s a great quote from Burning Chrome, a collection of short stories by William Gibson. It’s the collection that contains Johnny Mnemonic, which led to the film with Keanu Reeves, and then subsequently The Matrix and Cyberpunk 2077 and all those other derivative works. “The street finds its own uses for things” is a nuanced phrase, and nuance can be required when we’re talking about things, especially online, where everything gets reduced to a soundbite or a five-second dance clip.

“The street finds its own uses for things” is a bit of a mantra, and it’s one that I use when I’m studying the impacts of technology. What “the street finds its own uses for things” means is that the end users may put a given technology to tasks that its creators and developers never saw, or even intended.

And what I’ve been preaching here, what I mentioned earlier, is the empathetic view of technology. And we look at who benefits from using that technology, and what we find with the AI tools is that there are benefits. The street is finding its own uses for AI. In August of 2024, a number of news reports talked about Casey Harrell, a 46 year old father suffering from ALS, amyotrophic lateral sclerosis, who was able to communicate with his daughter using a combination of brain implants and AI assisted text and speech generation.

Some of the work on these assistive technologies was done with grant money, and there’s more information about the details behind that work in an article I’ll link to here. There are multiple technologies that go into this, and we’re finding that with the AI tools, there are very real benefits for persons with disabilities and their families.

Another thing we can do when we’re evaluating a technology is see where it’s actually used, where the street is located. And when it comes to assistive AI tools like ChatGPT, the street might not be where you think it is. In a survey published by Boston Consulting Group in August of 2024, they showed where the usage of ChatGPT was the highest.

It’s hard to visually describe a chart, obviously, but at the top of the scale, we saw countries like India, Morocco, Argentina, Brazil, Indonesia. English speaking countries like the US, Australia, and the UK were much further down on the chart. The country where ChatGPT is finding its most adoption are countries where English is not the primary language.

They’re in the global south, countries with large populations that have also had to deal with centuries of exploitation. And that isn’t to say that the citizens of these countries don’t have concerns, they do, but they’re using it as an assistive technology. They’re using it for translation, to remove barriers and to help reduce friction, and to customize their own experience. And these are just a fraction of the stories that are out there. 

So there are positive use cases for AI, which may seem to directly contradict various denialist arguments that are trying to gaslight you into believing that there is no good use for AI. This is obviously false.

If the positive view, the use on the street, is being found by persons with disabilities, it follows that the denialist view is ableist. If the positive view, that use on the street, is being found by persons of color, non English speakers, persons in the global south, then the denialist view will carry all those elements of oppression, racism, and colonialism with it.

If the use on the street is by those who find their creativity unlocked by the new tools, who are finally able to express themselves where previously they may have struggled with a medium or been gatekept from an arts education or poetry or English or what have you, only to now find themselves told that this isn’t art or this doesn’t count, despite all evidence to the contrary, then there are massive elements of class and bias that go into that as well.

So let’s be clear. An empathetic view of technology recognizes that there are positive use cases for AI. These are being found on the street by persons with disabilities, persons of the global south, non english speakers, and persons across the class spectrum. To deny this is to deny objective reality.

It’s to deny all these groups their actual uses of the technology. Are there problems? Yes, absolutely. Are there bad actors that may use the technology for nefarious means? Of course, this happens on a regular basis, and we’ll put a pin in that and return to that in a few moments, but to deny that there are no good uses is to deny the experience of all these groups that are finding uses for it, and we’re starting to see that when this denialism is pointed out, it’s causing a great degree of controversy.

In a statement made early in September of 2024, NaNoWriMo, the non-profit organization behind National Novel Writing Month, said it was acceptable to use AI as an assistive technology when writers were working on their pieces for NaNoWriMo, because this supports their mission, which is to, quote, “provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds, on and off the page.” End quote.

But what drew the opprobrium of the online community is that they noted that some of the objections to the use of AI tools are classist and ableist. And, as we noted, they weren’t wrong. For all the reasons we just explained and more. But, due to the online uproar, they’ve walked that back somewhat.

I’ll link to the updated statement in the show. The thing is, if you believe that using AI for something like NaNoWriMo is against the spirit of things, that’s your decision. They’ve clearly stated that they feel that assistive technologies can help for people pursuing their dreams. And if you have concerns that they’re going to take stuff that’s put into the official app and sell it off to an LLM or AI company, well, that’s a discussion you need to have with NaNoWriMo, the nonprofit. 

You’re still not held off from doing something like NaNoWriMo using notepad or obsidian or however else you take your notes, but that’s your call. I for one was glad to see that NaNoWriMo called it out. One of the things that I found both in my personal life, as well as in my research, when I was working on the PhD and looking at Tikkun Olam Makers is that it can be incredibly difficult and expensive for persons with disabilities to find a tool that can meet their needs, if it exists at all. So if you’re wondering where I come down on this, I’m on the side of the persons in need. We’re on the side of the streets. You might say we’re streets ahead.

Of course, one of the uses that the street finds for things has always been art. Or at least work that eventually gets recognized as art. It took a long time for the world to recognize that the graffiti of a street artist might count, but in 2024, if one was to argue that Banksy wasn’t an artist, you’d get some funny looks.

There are several threads of debate surrounding AI art and generative art, including the role of creativity, the provenance of the materials, and the ethics of using the tools, but the primary question is: what counts? What counts as art, and who decides that it counts? That’s the point we’re really raising with that question, and obviously it ties back to what we were talking about last episode when it comes to Soylent Culture, and before that when we were talking about the recently deceased Fredric Jameson as well.

In his work Nostalgia for the Present from 1989, Jameson mentioned this with respect to television. He said, Quote, “At the time, however, it was high culture in the 1950s who was authorized, as it still is, to pass judgment on reality, to say what real life is and what is mere appearance. And it is by leaving out, by ignoring, by passing over in silence and with the repugnance one may feel for the dreary stereotypes of television series, that high art palpably issues its judgments.” end quote. 

Now, high art in bunny quotes isn’t issuing anything, obviously; Jameson’s reifying the term. But what Jameson is getting at is that there are stakes for those involved in what does and does not count. And we talked about this last episode, where it took a long time for various forms of new media to finally be accepted as art on their own terms.

For some, it takes longer than others. I mean, Jameson was talking about television in the 1980s, about something that had already existed for decades at that point. And even then, it wasn’t until the 90s and 2000s, the eras of Oz and The Sopranos and Breaking Bad and Mad Men and the quote-unquote “golden age of television”, that it really began to be recognized and accepted as art on its own terms.

Television was seen as disposable ephemera for decades upon decades. There’s a lot of work that goes on on behalf of high art, by those invested in it, to valorize it and ensure that it maintains its position. This is why we see one of the critiques of AI art being that it lacks creativity, that it is simply theft.

As if the provenance of the materials that get used in the creation of art suddenly mattered as to whether it counts or not. It would be as if the conditions in the mines of Afghanistan for the lapis lazuli that was crushed to make the ultramarine used by Vermeer had a material impact on whether his paintings counted as art. Or if the gold and jewels that went into the creation of the Fabergé eggs, subsequently gifted to the Russian royal family, mattered as to whether those count. It’s a nonsense argument. It makes no sense. And it’s completely orthogonal to the question of whether these works count as art.

And similarly, when people say that good artists borrow, great artists steal, well, we’ll concede that Picasso might have known a thing or two about art, but where exactly are they stealing it from? The artists aren’t exactly tippy-toeing into the art gallery and yoinking it off the walls now, are they?

No, they’re stealing it from memory, from their experience of that thing, and the memory is the key. Here, I’ll share a quote. “Art consists in bringing the memory of things past to the surface. But the author is not a Paessiest. He is a link to history, to memory, which is linked to the common dream.” This is of course a quote by Saul Bellow, talking about his field, literature, and while I know nowadays not as many people are as familiar with his work, if you’re at a computer while you’re listening to this, it might be worth to just look him up.

Are we back? Awesome. Alright, so what the Nobel Prize Laureate and Pulitzer Prize winner Saul Bellow was getting at is that art is an act of memory, and we’ve been going in depth into memory in the last three episodes. And the artist can only work with what they have access to, what they’ve experienced during the course of their lifetime.

The more they’ve experienced, the more they can draw on and put into their art. And this is where the AI art tools come in as an assistive technology, because they would have access to much, much more than a human being can experience, right? Possibly anything that has been stored and put into the database and the creator accessing that tool will have access to everything, all the memory scanned and stored within it as well.

And so the act of art becomes one of curation, of deciding what to put forth. AI art is a digital art form, or at least everything that’s been produced to date is. So how does that differ? Right? Well, let me give you an example. If I reach over to my paint shelf and grab an ultramarine paint, a cheap Daler Rowney acrylic ink, it’s right there with all the other colors that might be available to me on my paint shelf.

But, back in the day, if we were looking for a specific blue paint, an ultramarine, it would be made with lapis lazuli, like the stuff that Vermeer was looking for. It would be incredibly expensive, and so the artist would be limited in their selection to the paints that they had available to them, or be limited in the amount that they could actually paint within a given year.

And sometimes the cost would be exorbitant. For some paints, it still actually is, but a digital artist working on an iPad or a Wacom tablet or whatever would have access to a nigh unlimited range of colors. And so the only choice and selection for that artist is by deciding what’s right for the piece that they’re doing.

The digital artist is not working with a limited palette of, you know, a dozen paints or whatever they happen to have on hand. It’s a different kind of thing entirely. The digital artist has a much wider range of things to choose from, but it still requires skill. You know, conceptualization, composition, planning, visualization.

There’s still artistry involved. It’s no less art, but it’s a different kind of art. But one that already exists today and one that’s already existed for hundreds of years. And because of a banger that just got dropped in the last couple of weeks, it might be eligible for a Grammy next year. It’s an allographic art.

And if you’re going to try and tell me that Mozart isn’t an artist, I’m going to have a hard time believing you.

Allographic art is a category of art originally introduced by Nelson Goodman back in the 60s and 70s. Goodman is kind of like Gordon Freeman, except, you know, not a particle physicist. He was a mathematician and aesthetician, or sorry, a philosopher interested in aesthetics, not an esthetician as we normally call them now, which has a bit of a different meaning and is a reminder that I probably need to book a pedicure.

Nelson was interested in the question of what the difference is between a painting and a symphony, and it rests on the idea of uniqueness versus forgery. A painting, especially an oil painting, can be forged, but it relies on the strokes and the process and the materials that went into it, so you need to basically replicate the entire thing in order to make an accurate forgery, much like Pierre Menard trying to reproduce Cervantes’ Quixote in the Jorge Luis Borges short story.

Whereas a symphony, or any song really, that is performed based off of a score, a notational system, is simply going to be a reproduction of that thing. And this is basically what Walter Benjamin was getting at when he was talking about art in the age of mechanical reproduction, too, right? So, a work that’s based off of a notational system can still count as a work of art.

Like, no one’s going to argue that a symphony doesn’t count as art, or that Mozart wasn’t an artist. And we can extend that to other forms of art that use a notational system as well. Like, I don’t know, architecture. Frank Lloyd Wright didn’t personally build Falling Water or the Guggenheim, but he created the plans for it, right?

And those were enacted, so we can say that, yeah, there’s artistic value there. These things, composition, architecture, et cetera, are allographic arts, as opposed to autographic arts, things like painting or sculpture, or in some instances the performance of an allographic work. If I go to see an orchestra playing a symphony, a work based off of a score, no one’s going to say that I’m not engaged with art.

And this brings us back to the AI Art question, because one of the arguments you often see against it is that it’s just, you know, typing in some prompts to a computer and then poof, getting some results back. At a very high level, this is an approximation of what’s going on, but it kind of misses some of the finer points, right?

When we look at notational systems, we could have a very, you know, simple set of notes that are there, or we could have a very complex one. We could be looking at the score for Chopsticks or Twinkle Twinkle Little Star, or a long lost piece by Mozart called Serenade in C Major that he wrote when he was a teenager and has finally come to light.

This is an allographic art, and the fact that it can be produced and played 250 years later kind of proves the point. But that difference between simplicity and complexity is part of the key. When we look at the prompts that are input into a computer, we rarely see something with the complexity of say a Mozart.

As we increase the complexity of what we’re putting into one of the generative AI tools, we increase the complexity of what we get back as well. And this is not to suggest that the current AI artists are operating at the level of Mozart either. Some of the earliest notational music we have is found on ancient cuneiform tablets called the Hurrian Hymns, dating back to about 1400 BCE, so it took us a little over 3000 years to get to the level of Mozart in the 1700s.

We can give the AI artists a little bit of time to practice. The generative AI art tools, which are very much in their infancy, appear to be allographic arts, following in the lineage of procedurally generated art, which has been around for a little while longer. And as an art form in its infancy, there are still a lot of contested areas.

Whether it counts, the provenance of materials, ethics of where it’s used, all of those things are coming into question. But we’re not going to say that it’s not art, right? And as an art, as work conducted in a new medium, we have certain responsibilities for documenting its use, its procedures, how it’s created.

In the introduction to 2001’s The Language of New Media, Lev Manovich, in talking about the creation of a new media, digital media in this case, noted how there was a lost opportunity in the late 19th and early 20th century with the creation of cinema. Quote, “I wish that someone in 1895, 1897, or at least 1903 had realized the fundamental significance of the emergence of the new medium of cinema and produced a comprehensive record.

Interviews with audiences, systematic account of narrative strategies, scenography, and camera positions as they developed year by year. An analysis of the connections between the emerging language of cinema and different forms of popular entertainment that coexisted with it. Unfortunately, such records do not exist.

Instead, we are left with newspaper reports, diaries of cinema’s inventors, programs of film showings, and other bits and pieces. A set of random and unevenly distributed historical samples. Today, we are witnessing the emergence of a new medium, the metamedium of the digital computer. In contrast to a hundred years ago, when cinema was coming into being, we are fully aware of the significance of this new media revolution.

Yet I am afraid that future theorists and historians of computer media will be left with not much more than the equivalence of the newspaper reports and film programs from cinema’s first decades.” End quote. 

Manovich goes on to note that a lot of the work that was being done on computers, especially in the 90s, was stuff prognosticating about its future uses, rather than documenting what was actually going on.

And this is the risk that the denialist framing of AI art puts us in. By not recognizing that something new is going on, that art is being created, an allographic art, we lose the opportunity to document it for the future.

And as with art, so too with science. We’ve long noted that there’s an incredible amount of creativity that goes into scientific research, that the STEM fields, science, technology, engineering, and mathematics, require and benefit so much from the arts that they’d be better classified as STEAM, and a small side effect of that may mean that we see better funding for the arts at the university level.

But I digress. In the examples I gave earlier of medical research, of AI being used as an assistive technology, we were seeing some real groundbreaking developments, the boundaries being pushed, and we’re seeing that throughout the science fields. Part of this is because of what AI does well with things like pattern recognition, allowing weather forecasts, for example, to be predicted more quickly and accurately.

It’s also been able to provide more assistance with medical diagnostics and imaging as well. The massive growth in the number of AI related projects in recent years is often due to the fact that a number of these projects are just rebranded machine learning or deep learning. In a report released by the Royal Society in England in May of 2024 as part of their Disruptive Technology for Research project, they note how, quote, “AI is a broad term covering all efforts aiming to replicate and extend human capabilities for intelligence and reasoning in machines.”

End quote. They go on further to state that, quote, “Since the founding of the AI field at the 1956 Dartmouth Summer Research Project on Artificial Intelligence, many different techniques have been invented and studied in pursuit of this goal. Many of these techniques have developed into their own subfields within computer science, such as expert systems and symbolic reasoning.” End quote.

And they note how the rise of the big data paradigm has made machine learning and deep learning techniques a lot more affordable and accessible, and scalable too. And all of this has contributed to the amount of stuff that’s floating around out there that’s branded as AI. Despite this confusion in branding and nomenclature, AI is starting to contribute to basic science.

A New York Times article published in July by Siobhan Roberts talked about how a couple of AI models were able to compete at the level of a silver medalist at the recent International Mathematical Olympiad. This is the first time that an AI model has medaled at that competition. So there may be a role for AI to assist even high-level mathematicians, to function as a collaborator and, again, an assistive technology there.

And we can see this in science more broadly. In a paper submitted to arxiv.org in August of 2024, titled “The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery”, authors Lu et al. use a frontier large language model to perform research independently. Quote: “We introduce the AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a scientific paper, and then runs a simulated review process for evaluation.” End quote.

So, a lot of this is scripts and bots hooking into other AI tools in order to simulate the entire scientific process, and I can’t speak to the veracity of the results that they’re producing in the fields that they’ve chosen. They state that their system can, quote, “produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer.” End quote.

And that’s Fine, but it shows that the process of doing the science can be assisted in various realms as well. And in one of those areas of assistance, it’s in providing help for stuff outside the scope of knowledge of a given researcher. AI as an aid in creativity can help explore the design space and allow for the combination of new ideas outside of everything we know.

As science is increasingly interdisciplinary, we need to be able to bring in more material, more knowledge, and that can be done through collaboration, but here we have a tool that can assist us as well. As we talked about with Nescience and Excession a few episodes ago, we don’t know everything. There’s more than we can possibly know, so the AI tools help expand the field of what’s available to us.

We don’t necessarily know where new ideas are going to come from. And if you don’t believe me on this, let me reach out to another scientist who said some words on this back in 1980. Quote, “We do not know beforehand where fundamental insights will arise from about our mysterious and lovely solar system.

And the history of our study of the solar system shows clearly that accepted and conventional ideas are often wrong, and that fundamental insights can arise from the most unexpected sources.” End quote. That, of course, is Carl Sagan, from an October 1980 episode of Cosmos: A Personal Voyage titled “Heaven and Hell”, where he talks about the Velikovsky affair.

I haven’t spliced in the original audio because I’m not looking to grab a copyright strike, but it’s out there if you want to look for it. And what Sagan is describing there is basically the process by which a Kuhnian paradigm shift takes place. Sagan is speaking to the need to reach beyond ourselves, especially in the fields of science, and the AI assisted research tools can help us with that.

And not just in the conduct of the research, but also in the writing and dissemination of it. Not all scientists are strong or comfortable writers or speakers, and many of them come to English as a second, third, or even fourth language. The role of AI tools as translation devices means we have more people able to communicate and share their ideas and participate in the pursuit of knowledge.

This is not to say that everything is rosy. Are there valid concerns when it comes to AI? Absolutely. Yes. We talked about a few at the outset and we’ve documented a number of them throughout the run of this podcast. One of our primary concerns is the role of the AI tools in échanger, that replacement effect that happens that leads to technological unemployment.

Much of the initial hype and furor around the AI tools was people recognizing that potential for échanger following the initial public release of ChatGPT. There’s also concerns about the degree to which the AI tools may be used as instruments of control, and how they can contribute to what Gilles Deleuze calls a control society, which we talked about in our Reflections episode last year. 

And related to that is the lack of transparency, the degree to which the AI tools are black boxes, where based on a given set of inputs, we’re not necessarily sure about how it came up with the outputs. And this is a challenge regardless of whether it’s a hardware device or a software tool.

And regardless of how the AI tool is deployed, its increased prevalence means we’re headed toward a soylent culture, with an increased amount of data smog, or bitslop, or however you want to refer to the digital pollution that takes place with the increased amount of AI content in our channels and For-You feeds. And this is likely to become even more heightened as Facebook moves to push AI-generated posts into the timelines.

Many are speculating that this is becoming so prevalent that the internet is largely bots pushing out AI-generated content, what’s called the “Dead Internet Theory”, which we’ll definitely have to take a look at in a future episode. Hint: the internet is alive and well, it’s just not necessarily where you think it is.

And with all this AI-generated content, we’re still facing the risk of hallucinations, which we talked about, holy moly, over two years ago when we discussed Loab, that brief little bit of creepypasta that was making the rounds as people were trying out the new digital tools. But the hallucinations still highlight one of the primary issues with the AI tools, and that’s the errors in the results.

In order to document and collate these issues, a research team over at MIT has created the AI Risk Repository. It’s available at airisk.mit.edu. Here they have created taxonomies of the causes and domains where the risks may take place. However, not all of these risks are equal. One of the primary ones that gets mentioned is the energy usage for AI.

And while it’s not insignificant, I think it needs to be looked at in context. One estimate of global data center usage was between 240 and 340 terawatt hours, which is a lot of energy, and it might be rising as data center usage for the big players like Microsoft and Google has gone up by like 30 percent since 2022.

And that still might be too low, as one report noted that the actual figure could be as much as 600 percent higher. So when you put that all together, that initial estimate could be anywhere between one thousand and two thousand terawatt-hours. But the AI tools are only a fraction of what goes on at the data centers, which include cloud storage and services, streaming video, gaming, social media, and other high-volume activities.

So you bring that number right back down. And how much is AI using? The thing is, whatever that number is, 300 terawatt-hours times 1.3, times six, divided by five, whatever that result ends up being, it doesn’t even chart when looking at global energy usage. Looking at a recent chart on global primary energy consumption by source over at Our World in Data, we see that worldwide consumption in 2023 was 180,000 terawatt-hours.
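To make that back-of-the-envelope arithmetic concrete, here’s a minimal sketch of the calculation, assuming the episode’s rough inputs; the one-fifth AI share is just an illustrative divisor, not a measured figure:

```python
# Back-of-envelope version of the episode's arithmetic.
# All inputs are rough, contested estimates, not measured values.
data_center_twh = 300      # midpoint of the 240-340 TWh/year estimate
growth_factor = 1.3        # ~30% rise for the big players since 2022
undercount_factor = 6      # if reported figures understate usage by ~600%
ai_share = 1 / 5           # AI as roughly one-fifth of data center load

ai_twh = data_center_twh * growth_factor * undercount_factor * ai_share
global_twh = 180_000       # global primary energy consumption, 2023

print(f"Rough AI energy estimate: {ai_twh:.0f} TWh/year")    # ~468 TWh/year
print(f"Share of global energy: {ai_twh / global_twh:.2%}")  # ~0.26%
```

Even with the inflated undercount factor left in, the result comes out to a fraction of a percent of worldwide consumption, which is the point of the comparison that follows.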

The amount of energy potentially used by AI hardly registers as a pixel on the screen compared to worldwide energy usage, yet we’re presented with a picture in the media where AI is burning up the planet. I’m not saying AI energy usage isn’t a concern. It should be green and renewable. And it needs to be verifiable, this energy usage of the AI companies, as there is the risk of greenwashing the work that is done, of painting over their activities’ true energy costs by highlighting their positive impacts for the environment.

And the energy usage may be far exceeded by the water used for cooling the data centers. As with the energy usage, the amount of water that’s actually going to AI is incredibly hard to dissociate from all the other activities that are taking place in these data centers. And this greenwashing, which various industries have long been accused of, might show up in another form as well.

There is always the possibility that the helpful stories that are presented, of what AI tools have provided for various at-risk and minority populations, are a form of “aidwashing”. And this is something we have to evaluate for each of the stories posted in the Positive AI Archive. Now, I can’t say for sure that “aidwashing” specifically exists as a term.

A couple searches didn’t return any hits, so you may have heard it here first. However, while positive stories about AI often do get touted, do we think this is the driving motivation for the massive investment we’re seeing in the AI technologies? No, not even for a second. These assistive uses of AI don’t really work with the value proposition for the industry, even though those street uses of technology may point the way forward in resolving some of the larger issues for AI tools with respect to resource consumption and energy usage.

The AI tools used to assist Casey Harrell, the ALS patient mentioned near the beginning of the show, use a significantly smaller model than ones conventionally available, like those found in ChatGPT. The future of AI may be small, personalized, and local, but again, that doesn’t fit with the value proposition.

And that value proposition is coming under increased scrutiny. In a report published by Goldman Sachs on June 25th, 2024, they question if there’s enough benefit for all the money that’s being poured into the field. In a series of interviews with a number of experts in the field, they note how initial estimates of the cost savings, the complexity of tasks that AI is able to do, and the productivity gains that would derive from it are all much lower than initially proposed, or happening on a much longer time frame.

In it, MIT professor Daron Acemoglu forecasts minimal productivity and GDP growth, around 0.5 percent or 1 percent, whereas Goldman Sachs’ predictions were closer to a 9 percent and 6 percent increase. With estimates varying that widely, what the actual impact of AI will be over the next 10 years is anybody’s guess.

It could be at either extreme or somewhere in between. But the main takeaway from this is that even Goldman Sachs is starting to look at the balance sheet and question the amount of money that’s being invested in AI. And that amount of money is quite large indeed. 

In between starting to record this podcast episode and finishing it, OpenAI raised 6.6 billion dollars in a funding round from its investors, including Microsoft and Nvidia, the largest such round ever recorded. As reported by Reuters, this could value the company at 157 billion dollars and make it one of the most valuable private companies in the world. And this coincides with the restructuring from a week earlier, which would remove the non-profit’s control and see it move to a for-profit business model.

But my final question is: would this even work? Because it seems diametrically opposed to what AI might actually bring about. If assistive technology is focused on automation and échanger, then the end result may be something closer to what Aaron Bastani calls “fully automated luxury communism”, where the future is a post-scarcity environment that’s much closer to Star Trek than it is to Snow Crash.

How do you make that work when you’re focused on a for-profit model? The tool that you’re using is not designed to do what you’re trying to make it do. Remember, “the street finds its own uses for things”, though in this case that street might be Wall Street. The investors and forecasters at Goldman Sachs are recognizing that disconnect by looking at the charts and tables on the balance sheet.

But their disconnect, the part that they’re missing, is that the driving force behind AI may be more one of ideology. And that ideology is the Californian ideology, a term that’s been floating around since at least the mid-1990s. We’ll take a look at it next episode and return to the works of Lev Manovich, as well as Richard Barbrook, Andy Cameron, and Adrian Daub, along with a recent post by Sam Altman titled “The Intelligence Age”.

There’s definitely a lot more going on behind the scenes.

Once again, thank you for joining us on the Implausipod. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music.

And perhaps somewhat surprisingly, given the topic of our episode, no AI is used in the production of this podcast, though I think some machine learning goes into the transcription service that we use. The show is licensed under a Creative Commons 4.0 share-alike license. You may have noticed at the beginning of the show that we described the show as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated.

You may have also noted that there was no advertising during the program, and there’s no cost associated with the show. But it does grow from word of mouth of the community, so if you enjoy the show, please share it with a friend or two and pass it along. There’s also a Buy Me a Coffee link on each show at implausipod.com, which will go to any hosting costs associated with the show. I’ve put a bit of a hold on the blog and the newsletter, as WordPress is turning into a bit of a dumpster fire and I need to figure out how to re-host it. But the material is still up there, and I own the domain. It’ll just probably look a little more basic soon.

Join us next time as we explore that Californian ideology, and then we’ll be asking: who are roads for? And we’ll do a deeper dive into how we model the world. Until next time, take care and have fun.



Bibliography

A bottle of water per email: The hidden environmental costs of using AI chatbots. (2024, September 18). Washington Post. https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/

A Note to Our Community About our Comments on AI – September 2024 | NaNoWriMo. (n.d.). Retrieved October 5, 2024, from https://nanowrimo.org/a-note-to-our-community-about-our-comments-on-ai-september-2024/

Advances in Brain-Computer Interface Technology Help One Man Find His Voice | The ALS Association. (n.d.). Retrieved October 5, 2024, from https://www.als.org/blog/advances-brain-computer-interface-technology-help-one-man-find-his-voice

Balevic, K. (n.d.). Goldman Sachs says the return on investment for AI might be disappointing. Business Insider. Retrieved October 5, 2024, from https://www.businessinsider.com/ai-return-investment-disappointing-goldman-sachs-report-2024-6

Broad, W. J. (2024, July 29). Artificial Intelligence Gives Weather Forecasters a New Edge. The New York Times. https://www.nytimes.com/interactive/2024/07/29/science/ai-weather-forecast-hurricane.html

Card, N. S., Wairagkar, M., Iacobacci, C., Hou, X., Singer-Clark, T., Willett, F. R., Kunz, E. M., Fan, C., Nia, M. V., Deo, D. R., Srinivasan, A., Choi, E. Y., Glasser, M. F., Hochberg, L. R., Henderson, J. M., Shahlaie, K., Stavisky, S. D., & Brandman, D. M. (2024). An Accurate and Rapidly Calibrating Speech Neuroprosthesis. New England Journal of Medicine, 391(7), 609–618. https://doi.org/10.1056/NEJMoa2314132

Consumers Know More About AI Than Business Leaders Think. (2024, April 8). BCG Global. https://www.bcg.com/publications/2024/consumers-know-more-about-ai-than-businesses-think

Cosmos. (1980, September 28). [Documentary]. KCET, Carl Sagan Productions, British Broadcasting Corporation (BBC).

Donna. (2023, October 9). Banksy Replaced by a Robot: A Thought-Provoking Commentary on the Role of Technology in our World, London 2023. GraffitiStreet. https://www.graffitistreet.com/banksy-replaced-by-a-robot-a-thought-provoking-commentary-on-the-role-of-technology-in-our-world-london-2023/

Gen AI: Too much spend, too little benefit? (n.d.). Retrieved October 5, 2024, from https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit

Goodman, N. (1976). Languages of Art (2nd ed.). Hackett Publishing Company, Inc.

Goodman, N. (1978). Ways Of Worldmaking. http://archive.org/details/GoodmanWaysOfWorldmaking

Hill, L. W. (2024, September 11). Inside the Heated Controversy That’s Tearing a Writing Community Apart. Slate. https://slate.com/technology/2024/09/national-novel-writing-month-ai-bots-controversy.html

Hu, K. (2024, October 3). OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia. Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-closes-66-billion-funding-haul-valuation-157-billion-with-investment-2024-10-02/

Hu, K., & Cai, K. (2024, September 26). Exclusive: OpenAI to remove non-profit control and give Sam Altman equity. Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/

Knight, W. (n.d.). An ‘AI Scientist’ Is Inventing and Running Its Own Experiments. Wired. Retrieved September 9, 2024, from https://www.wired.com/story/ai-scientist-ubc-lab/

LaBossiere, M. (n.d.). AI: I Want a Banksy vs I Want a Picture of a Dragon. Retrieved October 5, 2024, from https://aphilosopher.drmcl.com/2024/04/01/ai-i-want-a-banksy-vs-i-want-a-picture-of-a-dragon/

Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., & Ha, D. (2024, August 12). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. arXiv.Org. https://arxiv.org/abs/2408.06292v3

Manovich, L. (2001). The language of new media. MIT Press.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Mickle, T. (2024, September 23). Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm. The New York Times. https://www.nytimes.com/2024/09/23/technology/ai-jim-covello-goldman-sachs.html

Milman, O. (2024, March 7). AI likely to increase energy use and accelerate climate misinformation – report. The Guardian. https://www.theguardian.com/technology/2024/mar/07/ai-climate-change-energy-disinformation-report

Mueller, B. (2024, August 14). A.L.S. Stole His Voice. A.I. Retrieved It. The New York Times. https://www.nytimes.com/2024/08/14/health/als-ai-brain-implants.html

Overview and key findings – World Energy Investment 2024 – Analysis. (n.d.). IEA. Retrieved October 5, 2024, from https://www.iea.org/reports/world-energy-investment-2024/overview-and-key-findings

Roberts, S. (2024, July 25). Move Over, Mathematicians, Here Comes AlphaProof. The New York Times. https://www.nytimes.com/2024/07/25/science/ai-math-alphaproof-deepmind.html

Schacter, R. (2024, August 18). How does Banksy feel about the destruction of his art? He may well be cheering. The Guardian. https://www.theguardian.com/commentisfree/article/2024/aug/18/banksy-art-destruction-graffiti-street-art

Science in the age of AI | Royal Society. (n.d.). Retrieved October 2, 2024, from https://royalsociety.org/news-resources/projects/science-in-the-age-of-ai/

Sullivan, S. (2024, September 25). New Mozart Song Released 200 Years Later—How It Was Found. Woman’s World. https://www.womansworld.com/entertainment/music/new-mozart-song-released-200-yaers-later-how-it-was-found

Taylor, C. (2024, September 3). How much is AI hurting the planet? Big tech won’t tell us. Mashable. https://mashable.com/article/ai-environment-energy

The AI Risk Repository. (n.d.). Retrieved October 5, 2024, from https://airisk.mit.edu/

The Intelligence Age. (2024, September 23). https://ia.samaltman.com/

What is NaNoWriMo’s position on Artificial Intelligence (AI)? (2024, September 2). National Novel Writing Month. https://nanowrimo.zendesk.com/hc/en-us/articles/29933455931412-What-is-NaNoWriMo-s-position-on-Artificial-Intelligence-AI

Wickelgren, I. (n.d.). Brain-to-Speech Tech Good Enough for Everyday Use Debuts in a Man with ALS. Scientific American. Retrieved October 5, 2024, from https://www.scientificamerican.com/article/brain-to-speech-tech-good-enough-for-everyday-use-debuts-in-a-man-with-als/

Artwork

Thinking about the artwork on the podcast today, and how I’d like to get something that isn’t procedurally generated. I used one of the early generative tools for it (forgot which one) when I was trying to get it launched, and while it helped, and I like it overall, I know the “look” of the generative art isn’t necessarily for everyone, and might be an active turnoff.

So, thinking of a couple different approaches:

  1. Photography (plus a little editing):

Take a picture of something that roughly matches, and then process it so it keeps the same overall “feel”, but has a human involved in the steps rather than an automated tool.

Pros: not AI generated, original to me, own the copyright

Cons: at a certain level, what’s the point? What’s the difference between something that’s procedurally generated, and something that’s heavily processed? They’ll end up looking similar by the end of it, with the original barely recognizable. If one of the benefits of the generative tools is that they automate the work, as we’ve argued several times on both the podcast and on this page, then what’s wrong with using the tools?

  2. Contracted Work:

This would involve finding one of the many artists accepting commissions to create an icon, a logo, some splash art, something like that. I’m fine with this, even though the cost might scale depending on what (or how much) I’m asking for. Licensing might be an issue, as would the terms of the agreement. Copyright stays with the artist, obviously.

Pros: something that looks good, created by a professional, and it sends business to an artist.

Cons: negotiations, cost, rights to the work. Obviously this gets done and happens all the time, but it’s still outside my experience. And then there’s the ability to change and modify the work for different contexts, like special podcast episodes, or for different places (podcast vs YouTube vs here, frex).

I’m not against it, but I’m still a little wary.

  3. Self-created:

Using an art tool, like Canva, Moho, or something similar to create the work. This is what I’ve been using for everything aside from the banner and logo for the podcast. Again, not bad, but limited, especially with my artistic skills and Canva’s free options. I’m not 100% happy with the look of stuff that I’ve created. Maybe that’s a me thing, but I know it could be better as well.

Pros: Already started down the path, would have ownership of the material as well.

Cons: Doesn’t look great, kinda sends off an amatuerish or unprofessional vibe, cost could increase significantly to possible little effect (if the issue is with my “eye” or style more than the tools).

Could be (or “can” or “is”) a massive time sink, spending hours doing something outside my skillset when I could be working on the writing and recording. So, perhaps not the best use of my Friday nights?

And there may be other options I’m missing. But it feels like this is where I’m at at the moment with respect to the art for the blog, podcast, and YouTube channel.

I’m excited that there’s an opportunity to try new things with each of the above options. But I’m a little worried by each of them too.

I think the best way might be to explore each one in turn, and see which one gives me a result I’m most happy with (without breaking the bank, of course!).

I’ll link back here, and update, with examples of each as we give it a shot.

Implausipod E0012 – AI Reflections

AI provides a reflection of humanity back at us, through a screen, darkly. But that glass can provide different visions, depending on the viewpoint of the observer. Are the generative tools that we call AI a tool for advancement and emancipation, or will they be used to further a dystopic control society? Several current news stories give us the opportunity to see the potential paths before us, leading down both these routes. Join us for a personal reflection on AI’s role as an assistive technology on this episode of the Implausipod.

https://www.buzzsprout.com/1935232/episodes/13472740

Transcript:

In the week before August 17th, 2023, something implausible happened. There was a news report that a user looking for can’t-miss spots in Ottawa, Ontario, Canada, would be returned some unusual results on Microsoft’s Bing search. The third result down, an article from MS Travel, suggested that users could visit the Ottawa Food Bank if they were hungry, and that they should bring an appetite.

This was a very dark response, a little odd, and definitely insensitive, making one wonder if it was done by some teenage pranksters or hackers, or if there was a human involved in the editing decisions at all. Initial speculation was that this article – credited to Microsoft Travel – may have been entirely generated by AI. Microsoft’s subsequent response in the week following was that it was due to human error, but doubts remain, and I think the whole incident allows us to reflect on what we see in AI, and what AI reflects back to us… about ourselves, which we’ll discuss in this episode of the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and today on episode 12, we’re gonna peer deeply into that glass, that formed silicon that makes up our large language models and AI, and find out what they’re telling us about ourselves.

Way back in episode three, which admittedly is only nine episodes back but came out well over a year ago, we looked at some of the founding figures of cyberpunk, and of course one of those is Philip K. Dick, who’s best known for Do Androids Dream of Electric Sheep?, which became Blade Runner, and now The Man in the High Castle, and other works which are yet unadapted, like The Three Stigmata of Palmer Eldritch. But one of his most famous works was of course A Scanner Darkly, which had a rotoscoped film version released in 2006 starring Keanu Reeves. Now, the title is a play on the biblical verse from 1 Corinthians, where it’s phrased as looking “through a glass, darkly”, and even though there’s some ambiguity there, whether it’s a glass or a mirror, or in our context a filter, or in this case a scanner or screen, with the latter two being the most heavily technologized of them all, the point remains, whether it’s a metaphor or a meme, that by peering through the mirror, the reflection that we get back is but a shadow of the reality around us.

And so too it is with AI. The large language models, which have been likened to “auto-complete on steroids”, and the generative art tools (which are like the procedural map makers we discussed in an icebreaker last fall) have gained an incredible amount of attention in 2023. But with that attention have come some cracks in the mirror, and while there is still a lot of deployment of them as tools, they’re no longer seen as the harbinger of AGI, or artificial general intelligence, let alone the superintelligence that would lead us down a path through a technological singularity. No, the collection of programs that have been branded as AI are simply tools, what media theorist Marshall McLuhan called “extensions of man”, and it’s with that dual framing of the mirror held extended in our hand that I want to reflect on what AI means for us in 2023.

So let’s think about it in terms of a technology. In order to do that, I’d like to use the most simple definition I can come up with; one that I use as an example in courses I’ve taught at the university. So follow along with me and grab one of the simplest tools that you may have nearby. It works best with a pencil or perhaps a pair of chopsticks, depending on where you’re listening.

If you’re driving an automobile, please don’t follow along and try this when you’re safely stopped. But take the tool and hold it in your hands as if you were about to use it, whether to write or draw or to grab some tasty sushi or a bowl of ramen. You do you. And then close your eyes and rest for a moment.

Breathe, and then focus your attention down to the tool in your hands, held between your fingers, and reach out. Feel the shape of it: you know exactly where it is, and you can kind of feel, with the stretch of your attention, the end of where it might actually exist. The tool has now become part of you, a material object that is next to you and extends your senses and what you are capable of.

And so it is with all tools that we use, everything from a spoon to a steam shovel, even though we don’t often think of that as such. It also includes the AI tools that we use, that constellation of programs we discussed earlier. We can think of all of these as assistive technologies, as extensions of ourselves that multiply our capabilities. And open your eyes if you haven’t already.

So what this quick little experiment is helpful in demonstrating is just exactly how we may define technology, here using a portion of McLuhan’s version. We can see it as an extension of man, but there have been many other definitions of technology before. We can use other versions that focus on the artifacts themselves, like Feibleman’s, where tech is “materials that are altered by human agency for human usage”, but this can be a little instrumental. And at the other extreme, we have those from the social construction school, like John Law’s definition of “a family of methods for associating and channelling other entities and forces, both human and non-human”, which, when you think about it, does capture pretty much everything relating to technology, but it’s also so broad that it loses a lot of its utility.

But I’ve always drawn a middle line, and my personal definition of technology is “the material embodiment of an artifact and its associated systems, materials, and practices employed to achieve human ends”. I think we need to capture both the tool and the context, as well as the ways they’re employed and used, and I think this definition captures the generative tools that we call AI as well. If we can recognize that they’re tools used for human ends and not actors with their own agency, then we can change the frame of the discussion around these generative tools and focus on what ends they’re being used for.

And what they’re being used for right now is not some science-fictional version, neither the dystopic hellscapes of The Matrix or Terminator, nor, on the flip side, the more utopic versions, the “fully automated luxury communism” that we’d see in the post-scarcity societies of Star Trek: The Next Generation or Iain M. Banks’ Culture series. Neither of these is coming true, but those poles, that ideation, the science fiction versions that drive our collective imagination, the social imaginaries we talked about a few episodes ago, represent the two ends of a continuum, of that dialectic between utopic and dystopic in the way we frame technology.

As Anabel Quan-Haase notes in their book on Technology and Society, those poles, the utopic idea of technology achieving progress through science and the dystopic idea of technology as a threat to established ways of life, are both frames of reference. They could both be true depending on the point of view of the referrer. But as we said, it is a dialectic: there is a dialogue going back and forth between these two poles continually. So technology in this case is not inherently utopic or dystopic; we have to return again to the ends the technology is put towards, the human ends. Rather than utopic or dystopic, we can think of technology as being emancipatory or controlling, and it’s in this frame, through this lens, this glass, that I want to peer at the technology of AI.

The emancipatory frame views these generative AI tools as an assistive technology, and it’s through this lens that we’re going to look at the technology first. These tools are exactly that: they are liberating, they are freeing. And whenever we want to take an empathetic view of technology, we want to see how it may be used by others who aren’t in our situation. They may be doing okay, they might even be well off, but they may also be struggling, facing issues or challenges on a regular basis that most of us can’t even imagine. This is where my own experience comes in, so I’ll speak to that briefly.

Part of my background: when I was doing the fieldwork for my dissertation, I was engaged with a number of the makerspaces in my city, and some of them were working with local need-knowers, persons with disabilities, through groups like Tikkun Olam Makers and Makers Making Change. These groups worked with persons with disabilities to find solutions to their particular problems, problems for which there often wasn’t a market solution available because one wasn’t cost-effective. The “capitalist realism” situation that we’re currently under means that a lot of needs, especially for marginal groups, may go unmet. These groups came together to try to meet those needs as best they could through technological solutions, using modern technologies like 3D printing or microcontrollers or what have you, and they did it through regular events, whether a hackathon, a monthly meetup group, or the space provided by a local makerspace. In all these cases, these tools are liberating with respect to some of the constraints or challenges experienced in daily life.

We can think of more mainstream versions, like a mobility scooter that allows somebody with reduced mobility to get around, participate more fully within their community, and meet some of their regular needs; even something as simple as that can be really liberating for somebody who needs it. We need to be cognizant of that, because as the saying goes, we are all at best just temporarily abled, and we never know when a change may be coming that could radically alter our lives. That empathetic view of technology allows us to think with some forethought about what may happen, as if we or someone we love were in that situation, and it doesn’t even have to be somebody that close to us. We can take a much more communal or collective view of society as well.

But to return to this liberating view, we can think about it in terms of those generative tools, whether they’re for text or art or programming, or even helping with a little bit of math. We can see how they can assist us in our daily lives, either by fulfilling needs or by allowing us to pursue opportunities we thought were too daunting. While the generative art tools like DALL-E and Midjourney have been trained on already existing images and photographs, they allow creators to use them in novel ways.

It may be that a musician can use the tools to create a music video where before they never had the resources, time, or money to pursue one; it allows them to expand their art into a different realm. Similarly, an artist may be able to create stills that accompany a collection or the writing they’re working on, or an academic could use them for slides to accompany a presentation, a YouTube video, or even a podcast and its title bars and the like (present company included). My own personal experience when I was trying to launch this podcast was that there was all this stuff I needed to do, and the generative art tools, the cruder ones available at the time, allowed some of the art assets to be filled in, and that barrier to launch, that barrier to getting something going, was removed.

So: emancipatory, liberating, even if at a much smaller scale. Those barriers were removed, and it allowed creativity to flow in other areas. It works similarly across these generative tools, whether it’s putting together some background art, a campaign map, or a story prompt; whether you need backgrounds for characters that are part of a story, as NPCs for a Dungeon Master or what have you; or even just something to bounce and refine coding ideas off of. I mean, the coding skills are rudimentary, but they do allow for something functional to be produced.

And this leads into some of the examples I’d like to talk about. The first one is from a post by Brennan Stelli on Mastodon on August 18th, where he said that we could leverage AI to do work that isn’t being done already because there’s no budget, time, or know-how. There’s a lot of work that falls into this space: stuff that needs to be done but is outside the scope of a particular project. This could include something like developing visualizations that let him better communicate an idea in a fraction of the time, minutes instead of the hours it would normally take, and so we can see that Brennan’s experience mirrors a lot of our own.

The next example is a little more involved, from an article written by Pam Belluck and published on the New York Times website on August 23rd, 2023. She details how researchers used predictive text along with AI-generated facial animations to give a stroke survivor an avatar and synthesized speech, assisting them in communicating with their loved ones.

And the third example, which hit a little closer to home, was that of a Stanford research team that used a brain-computer interface (BCI) along with AI-assisted predictive text generation to allow a person with amyotrophic lateral sclerosis (ALS) to talk at a regular conversational tempo. The tools read the neural activity associated with the movement of the facial muscles and translate it into text. These are absolutely groundbreaking and amazing developments, and I can’t think of a better example of how AI can be an assistive technology.
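For a sense of how the predictive-text part of such a system might work, here’s a minimal, purely illustrative sketch: a noisy decoder produces per-step letter probabilities, and a language-model prior helps pick the most plausible transcript. Every number, the toy bigram table, and the three-step example below are invented for demonstration; real BCI systems decode richer neural features against far larger language models.

```python
# Illustrative sketch only: how a predictive language model can clean up
# a noisy decoder's output. All probabilities here are made up.

import math

# Hypothetical per-step letter probabilities from a neural decoder.
# Step 1 is ambiguous between 'c' and 'k'; step 2 between 'a' and 'o'.
decoder_steps = [
    {"c": 0.5, "k": 0.5},
    {"a": 0.45, "o": 0.55},
    {"t": 1.0},
]

# A toy bigram "language model": P(next letter | previous letter).
# '^' marks the start of a word. A real system would model whole words.
bigram = {
    ("^", "c"): 0.6, ("^", "k"): 0.4,
    ("c", "a"): 0.7, ("c", "o"): 0.3,
    ("k", "a"): 0.2, ("k", "o"): 0.8,
    ("a", "t"): 0.9, ("o", "t"): 0.9,
}

def best_transcript(steps):
    """Score every candidate path by combining decoder evidence with
    the language-model prior in log space, then keep the best one."""
    paths = [("", "^", 0.0)]  # (text so far, previous letter, log score)
    for step in steps:
        new_paths = []
        for text, prev, score in paths:
            for letter, p in step.items():
                lm = bigram.get((prev, letter), 0.05)  # small floor prob
                new_paths.append((text + letter, letter,
                                  score + math.log(p) + math.log(lm)))
        paths = new_paths
    return max(paths, key=lambda t: t[2])

text, _, score = best_transcript(decoder_steps)
print(text)  # 'cat': the language-model prior breaks the decoder's near-tie
```

The point of the sketch is the division of labour: the decoder alone would waver between “cat”, “cot”, and “kot”, and it’s the language prior that nudges the output toward something a person plausibly meant to say, which is what makes conversational tempo achievable.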

Now, most of these technologies are confined to text and screen, to video and audio, but often when we think of AI, we think of mobility as well. The robotic assistants that have come out of research labs like Boston Dynamics have attracted a lot of the attention, and even there we can see some of the potential as an assistive technology. The fact that it’s embodied in a humanoid robot means we sometimes lose sight of that, but that is what it is. The video they released in January of 2023 shows an Atlas robot as an assistant on a construction site, providing tools and moving things around in aid of the human lead on the project. It allows a single contractor working alone to extend what they’re able to do, even without access to a human helper. So it still counts as an assistive technology, even though here we can start to see the dark side of the reflection through this particular lens: an emancipatory technology may also mean emancipation from the work that people currently have available to them.

In all of these instances there’s the potential for job loss, that the tools would take the place of someone doing that work, whether as a writer, an artist, a translator, a transcriber, or a construction assistant, and those are very real concerns. I do not want to downplay them. Part of our reflection on AI has to take these into account: the dark side of the mirror (or the flip side of the magnifying glass) can take something helpful and exacerbate its harms when it’s applied to society at large. The concerns about job loss are similar to concerns we’ve had about automation for centuries, and they’re still valid. What we’re seeing is an extension of that automation into realms we thought were previously the exclusive domain of human actors: creators, artists, writers, and the like.

This is why AI and generative art tools are such a driving and divisive element in the current WGA and SAG-AFTRA strikes: the future of Hollywood could be radically different if these tools see widespread usage. And beyond automation and potential job loss, a second area of concern is that ChatGPT and the large language models don’t necessarily have any element of truth involved; they’re just producing output. Linguists like Professor Emily Bender of the University of Washington and the Mystery AI Hype Theater 3000 podcast have gone into extensive detail about how the output of ChatGPT cannot be trusted: it has no linkage to truth. Other scholars have gone into the challenges of using ChatGPT or LLMs for legal research, academic work, or anything along those lines. I think it still has a lot of potential and utility as a tool, but it’s very much a contested space.

And the final area of contestation we’ll talk about today is the question of control. Now, that question has two sides. The first is control of the AI: the version that most often surfaces in our collective imaginary is the idea of rogue superintelligences or killer robots, repeated in TV, film, and our media in general, and it also gets addressed at an academic level in works like Stuart Russell’s Human Compatible and Nick Bostrom’s Superintelligence. Both address what happens if those artificial intelligences get beyond human capacity to control them.

But the other side is control of us, control of society. That gets replicated in our media as well, in everything from Westworld to the underlying themes of the TV series Person of Interest, where The Machine is a computer system developed to detect, anticipate, and suppress terrorist action using the tools of the post-9/11 surveillance state it has access to.

Gilles Deleuze’s Postscript on the Societies of Control, written back in 1990, accurately captured the shift that has occurred in our societies: from the sovereign societies of the Middle Ages and Renaissance, through the disciplinary societies that typified the 18th and 19th centuries, to the control society of the 20th and 21st, where the logics of society are enforced and regulated by computers and code. And while Deleuze was not talking about algorithms and AI in his work, we can see how they’re a natural extension of it. The biases ingrained within our algorithms, what Virginia Eubanks examined in her book Automating Inequality, the biases and assumptions that go into the coding and training of those advanced systems, can manifest in everything from facial recognition to policing to recommendation engines on travel websites that suggest you go to the food bank to catch a meal.

Now, there’s a twist to our Ottawa food bank story, of course. About a week after it broke, Microsoft came out and said that the article had been removed, and that the issue had been identified as human error, not an unsupervised AI. But even with that answer, there are those who are skeptical, because it didn’t happen just once. There were a lot of articles where such weird or incongruous elements showed up, and this being the internet, a number of people kept the receipts.

Now, there’s a host of reasons for what might be happening with these bad articles, some plausible and some slightly less so. It could just be an issue of garbage in, garbage out: the content being scraped to power the AI is drawing on articles that already exist on satire or meme sites. If the information you’re getting from the web is coming from Something Awful or 4chan, then you’re going to get some dark articles in there. The other alternative is that it could just be hallucinations, which have been an observed fact with these AIs and large language models; incidents like the one we saw with Loab, which we talked about in an icebreaker last year, are still coming forward in ways that are completely unexpected and out of our control.

That scares us a little bit because we don’t know exactly what it’s going to do. When we look at the AI through that lens, like in the mirror, what it’s reflecting back to us is something we don’t necessarily want to look at, and we think that it could be revealing the darkest aspects of ourselves, and that frightens us a whole lot.

AI is a reflection of our society and ourselves, and if we don’t like what we’re seeing, then that gives us an opportunity to perhaps correct things because AI, truth be told, is really dumb right now. It just shows us what’s gone into building it. But as it gets better, as the algorithms improve, then it may get better at hiding its sources.

And that’s a cause for concern. We’re rapidly reaching a point where we may no longer be able to tell or spot a deepfake or artificially generated image or voice, and this may be used by all manner of malicious actors. So as we look through our lens at the future of AI, what do we see on our horizon?

References:
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Deleuze, G. (1992). Postscript on the Societies of Control. October, 59, 3–7.

Eubanks, V. (2018). Automating Inequality. Macmillan.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Quan-Haase, A. (2015). Technology and Society: Social Networks, Power, and Inequality. Oxford University Press.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Links:
https://arstechnica.com/information-technology/2023/08/microsoft-ai-suggests-food-bank-as-a-cannot-miss-tourist-spot-in-canada/

https://tomglobal.org/about

https://www.makersmakingchange.com/s/

https://arstechnica.com/health/2023/08/ai-powered-brain-implants-help-paralyzed-patients-communicate-faster-than-ever/

https://blog.jim-nielsen.com/2023/temporarily-abled/

https://www.businessinsider.com/microsoft-removes-embarrassing-offensive-ai-assisted-travel-articles-2023-8