AI Refractions

(this was originally published as Implausipod Episode 38 on October 5th, 2024)

https://www.implausipod.com/1935232/episodes/15804659-e0038-ai-refractions

Looking back at the year since the publication of our AI Reflections episode, we survey the state of the AI discourse at large, where recent controversies, including those surrounding NaNoWriMo and whether AI counts as art or can assist with science, bring the challenges of studying the new medium to the forefront.


In 2024, AI is still all the rage, but some are starting to question what it’s good for. There are even a few who will claim that there’s no good use for AI whatsoever, though this denialist argument takes it a little bit too far. We took a look at some of the positive uses of AI a little over a year ago in an episode titled AI Reflections.

But it’s time to check out the current state of the art, take another look into the mirror and see if it’s cracked. So welcome to AI Refractions, this episode of ImplausiPod.

Welcome to The ImplausiPod, an academic podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in this episode, we’ve got a lot to catch up on with respect to AI. So we’re going to look at some of the positive uses that have come up, how AI relates to creativity, and how statements from NaNoWriMo caused a bit of controversy.

And how that leads into AI’s use in science. But it’s not all sunny over in AI land. We’ve looked at some of the concerns before with things like échanger, and we’ll look at some of the current critiques as well. And then we’ll look at the value proposition for AI, and how recent shakeups with OpenAI in September of 2024 might relate to that.

So we’ve got a lot to cover here on our near one year anniversary of that AI Reflections episode, so let’s get into it. We’ve mentioned AI a few other times since that episode aired in August of 2023. It came up in episode 28, our discussion on black boxes and the role of AI handhelds, as well as episode 31 when we looked at AI as a general purpose technology.

And it also came up a little bit in our discussion about the arts, things like échanger and the Sphere, and how AI might be used to assist in higher fidelity productions. So it’s been an underlying theme across a lot of our episodes. And I think that’s just the nature of where we sit in relation to culture and technology.

When you spend your academic career studying the emergence of high technology and how it’s created and developed, when a new one comes on the scene, or at least becomes widely commercially available, you’re going to spend a lot of time talking about it. And we’ve been obviously talking about it for a while.

So if you’ve been with us for a while, first off, thank you, and this may be familiar to you; and if you just started listening recently, welcome, and feel free to check out those episodes that we mentioned earlier. I’ll put links to the specific ones in the text. And looking back at episode 12, we started by laying down a definition of technology.

We looked at how it functioned as an extension of man, to borrow from Marshall McLuhan, but the working definition of technology that I use, the one that I published in my PhD, is that “Technology is the material embodiment of an artifact and its associated systems, materials, and practices employed to achieve human ends.”

And this definition of technology covers everything from the sharp stick and sharp-stick-related technologies like spears, pencils, and chopsticks, to our more advanced tech like satellites and AI and VR and robots and stuff. When you really think about it, it’s a very expansive definition, but that’s part of its utility, in allowing us to recognize and identify things.

And by being able to cover everything from sharp sticks to satellites, from language to pharmaceuticals to games, it really covers the gamut of things that humans use technology for, and contributes to our emancipatory view of technology: that technology is ultimately assistive and can aid us with issues that we’re struggling with.

We recognize that there are other views and perspectives, but this is where we fall on the spectrum. Returning to episode 12, we showed how this emancipatory stance contributes to an empathetic view of technology, where we can step outside of our own frame of reference and think about how technology can be used by somebody who isn’t us.

Whether it’s a loved one, somebody close to us, a member of our community or collective, or, more widely, somebody that we’ll never come into contact with: how persons with different abilities and backgrounds will find different uses for the technology. Like the famous quote goes, “the street finds its own uses for things.”

Maybe we’ll return to that in a sec. We finished off episode 12 looking at some of the positive uses of AI that had been published just within a few weeks of us recording that episode. People were recounting how they were finding it an aid or an enhancement to their creativity, and news stories were detailing how the predictive text abilities as well as generative AI facial animations could help stroke victims, as well as persons with ALS being able to converse at a regular tempo.

So by and large it could function as an assistive technology, and in recent weeks we have started trying to catalogue all those stories. Back in July, over on the blog, we created the Positive AI Archive, a place where I could put the links to all the stories that I come across. Me being me, I forgot to update it since, but we’ll get those links up there and you should be able to follow along.

We’ll put the link to the archive in the show notes regardless. And, in the interest of positivity, that’s kinda where I wanted to start the show.

The street finds its own uses for things. It’s a great quote from Burning Chrome, a collection of short stories by William Gibson. It’s the collection that held Johnny Mnemonic, which led to the film with Keanu Reeves, and then subsequently The Matrix and Cyberpunk 2077 and all those other derivative works. “The street finds its own uses for things” is a nuanced phrase, and nuance can be required when we’re talking about things, especially online, when everything gets reduced to a soundbite or a five-second dance clip.

“The street finds its own uses for things” is a bit of a mantra, and it’s one that I use when I’m studying the impacts of technology. What it means is that end users may put a given technology to tasks that its creators and developers never saw, or even intended.

And what I’ve been preaching here, what I mentioned earlier, is the empathetic view of technology, where we look at who benefits from using that technology. And what we find with the AI tools is that there are benefits. The street is finding its own uses for AI. In August of 2024, a number of news reports talked about Casey Harrell, a 46-year-old father suffering from ALS, amyotrophic lateral sclerosis, who was able to communicate with his daughter using a combination of brain implants and AI-assisted text and speech generation.

Some of the work on these assistive technologies was done with grant money, and there’s more information about the details behind that work, and I’ll link to that article here. There’s multiple technologies that go into this, and we’re finding that with the AI tools, there’s very real benefits for persons with disabilities and their families.

Another thing we can do when we’re evaluating a technology is see where it’s actually used, where the street is located. And when it comes to assistive AI tools like ChatGPT, the street might not be where you think it is. In a recent survey published by Boston Consulting Group in August of 2024, they showed where the usage of ChatGPT was the highest.

It’s hard to describe a chart verbally, obviously, but at the top of the scale we saw countries like India, Morocco, Argentina, Brazil, and Indonesia. English-speaking countries like the US, Australia, and the UK were much further down the chart. The countries where ChatGPT is finding the most adoption are countries where English is not the primary language.

They’re in the global south, countries with large populations that have also had to deal with centuries of exploitation. And that isn’t to say that the citizens of these countries don’t have concerns, they do, but they’re using it as an assistive technology. They’re using it for translation, to remove barriers and to help reduce friction, and to customize their own experience. And these are just a fraction of the stories that are out there. 

So there are positive use cases for AI, which directly contradicts the various denialist arguments that are trying to gaslight you into believing that there is no good use for AI. That claim is obviously false.

If the positive view, the use on the street, is being found by persons with disabilities, it follows that the denialist view is ableist. If the positive view, that use on the street, is being found by persons of color, non English speakers, persons in the global south, then the denialist view will carry all those elements of oppression, racism, and colonialism with it.

If the use on the street is by those who find their creativity unlocked by the new tools, who are finally able to express themselves where previously they may have struggled with a medium or been gatekept from an arts education or poetry or English or what have you, only to now find themselves told that this isn’t art or this doesn’t count despite all evidence to the contrary, then there are massive elements of class and bias that go into that as well.

So let’s be clear. An empathetic view of technology recognizes that there are positive use cases for AI. These are being found on the street by persons with disabilities, persons of the global south, non-English speakers, and persons across the class spectrum. To deny this is to deny objective reality.

It’s to deny all these groups their actual uses of the technology. Are there problems? Yes, absolutely. Are there bad actors that may use the technology for nefarious means? Of course; this happens on a regular basis, and we’ll put a pin in that and return to it in a few moments. But to claim that there are no good uses is to deny the experience of all these groups that are finding uses for it, and we’re starting to see that when this denialism is pointed out, it causes a great degree of controversy.

In a statement made early in September of 2024, NaNoWriMo, the nonprofit organization behind National Novel Writing Month, said it was acceptable to use AI as an assistive technology when writers were working on their pieces for NaNoWriMo, because this supports their mission, which is to quote, “provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds, on and off the page.” End quote.

But what drew the opprobrium of the online community is that they noted that some of the objections to the use of AI tools are classist and ableist. And, as we noted, they weren’t wrong. For all the reasons we just explained and more. But, due to the online uproar, they’ve walked that back somewhat.

I’ll link to the updated statement in the show notes. The thing is, if you believe that using AI for something like NaNoWriMo is against the spirit of things, that’s your decision. They’ve clearly stated that they feel that assistive technologies can help people pursuing their dreams. And if you have concerns that they’re going to take stuff that’s put into the official app and sell it off to an LLM or AI company, well, that’s a discussion you need to have with NaNoWriMo, the nonprofit.

You’re still not barred from doing something like NaNoWriMo using Notepad or Obsidian or however else you take your notes, but that’s your call. I, for one, was glad to see that NaNoWriMo called it out. One of the things that I found, both in my personal life and in my research when I was working on the PhD and looking at Tikkun Olam Makers, is that it can be incredibly difficult and expensive for persons with disabilities to find a tool that can meet their needs, if it exists at all. So if you’re wondering where I come down on this, I’m on the side of the persons in need. We’re on the side of the streets. You might say we’re streets ahead.

Of course, one of the uses that the street finds for things has always been art. Or at least work that eventually gets recognized as art. It took a long time for the world to recognize that the graffiti of a street artist might count, but in 2024, if one was to argue that Banksy wasn’t an artist, you’d get some funny looks.

There are several threads of debate surrounding AI art and generative art, including the role of creativity, the provenance of the materials, and the ethics of using the tools, but the primary question is: what counts? What counts as art, and who decides that it counts? That’s the point that we’re really raising with that question, and obviously it ties back to what we were talking about last episode when it comes to Soylent Culture, and before that when we were talking about the recently deceased Fredric Jameson as well.

In his work Nostalgia for the Present, from 1989, Jameson mentioned this with respect to television. He said, quote, “At the time, however, it was high culture in the 1950s which was authorized, as it still is, to pass judgment on reality, to say what real life is and what is mere appearance. And it is by leaving out, by ignoring, by passing over in silence and with the repugnance one may feel for the dreary stereotypes of television series, that high art palpably issues its judgments.” End quote.

Now, high art in bunny quotes isn’t issuing anything, obviously; Jameson’s reifying the term. But what Jameson is getting at is that there are stakes for those involved in what does and does not count. And we talked about this last episode, where it took a long time for various forms of new media to finally be accepted as art on their own terms.

For some, it takes longer than others. I mean, Jameson was talking about television in the 1980s, about something that had already existed for decades at that point. And even then, it wasn’t until the 90s and 2000s, the eras of Oz and The Sopranos and Breaking Bad and Mad Men and the quote-unquote “golden age of television,” that it really began to be recognized and accepted as art on its own terms.

Television was seen as disposable ephemera for decades upon decades. There’s a lot of work that goes on on behalf of high art, by those invested in it, to valorize it and ensure that it maintains its position. This is why we see one of the critiques of AI art being that it lacks creativity, that it is simply theft.

As if the provenance of the materials that get used in the creation of art suddenly matters as to whether it counts or not. It would be as if the conditions in the mines of Afghanistan for the lapis lazuli that was crushed to make the ultramarine used by Vermeer had a material impact on whether his paintings counted as art. Or if the gold and jewels that went into the creation of the Fabergé eggs subsequently gifted to the Russian royal family mattered as to whether those count. It’s a nonsense argument, and it’s completely orthogonal to the question of whether these works count as art.

And similarly, when people say that good artists borrow and great artists steal, well, we’ll concede that Picasso might have known a thing or two about art. But where exactly are they stealing it from? The artists aren’t exactly tippy-toeing into the art gallery and yoinking it off the walls now, are they?

No, they’re stealing it from memory, from their experience of that thing, and the memory is the key. Here, I’ll share a quote: “Art consists in bringing the memory of things past to the surface. But the author is not a passéist. He is a link to history, to memory, which is linked to the common dream.” This is of course a quote by Saul Bellow, talking about his field, literature, and while I know nowadays not as many people are as familiar with his work, if you’re at a computer while you’re listening to this, it might be worth it to just look him up.

Are we back? Awesome. Alright, so what the Nobel Prize Laureate and Pulitzer Prize winner Saul Bellow was getting at is that art is an act of memory, and we’ve been going in depth into memory in the last three episodes. And the artist can only work with what they have access to, what they’ve experienced during the course of their lifetime.

The more they’ve experienced, the more they can draw on and put into their art. And this is where the AI art tools come in as an assistive technology, because they have access to much, much more than a single human being can experience, right? Potentially anything that has been stored and put into the database; the creator accessing that tool will have access to everything, all the memory scanned and stored within it as well.

And so then the act of art becomes one of curation, of deciding what to put forth. AI art is a digital art form, or at least everything that’s been produced to date is. So how does that differ? Right? Well, let me give you an example. If I reach over to my paint shelf and grab an ultramarine paint, a cheap Daler Rowney acrylic ink, it’s right there with all the other colors that might be available to me on my paint shelf.

But, back in the day, if we were looking for a specific blue paint, an ultramarine, it would be made with lapis lazuli, like the stuff that Vermeer was looking for. It would be incredibly expensive, and so the artist would be limited in their selection to the paints that they had available to them, or be limited in the amount that they could actually paint within a given year.

And sometimes the cost would be exorbitant. For some paints, it still actually is. But a digital artist working on an iPad or a Wacom tablet or whatever has access to a nigh-unlimited range of colors, and so the only choice and selection for that artist is deciding what’s right for the piece that they’re doing.

The digital artist is not working with a limited palette of, you know, a dozen paints or whatever they happen to have on hand. It’s a different kind of thing entirely. The digital artist has a much wider range of things to choose from, but it still requires skill. You know, conceptualization, composition, planning, visualization.

There’s still artistry involved. It’s no less art, but it’s a different kind of art. But one that already exists today and one that’s already existed for hundreds of years. And because of a banger that just got dropped in the last couple of weeks, it might be eligible for a Grammy next year. It’s an allographic art.

And if you’re going to try and tell me that Mozart isn’t an artist, I’m going to have a hard time believing you.

Allographic art is a type of art that was originally introduced by Nelson Goodman back in the 60s and 70s. Goodman is kind of like Gordon Freeman, except, you know, not a particle physicist. He was a mathematician and aesthetician, or sorry, philosopher interested in aesthetics, not esthetician as we normally call them now, which has a bit of a different meaning and is a reminder that I probably need to book a pedicure.

Nelson was interested in the question of what’s the difference between a painting and a symphony, and it rests on the idea of uniqueness versus forgery. A painting, especially an oil painting, can be forged, but it relies on the strokes and the process and the materials that went into it, so you need to basically replicate the entire thing in order to make an accurate forgery, much like Pierre Menard trying to reproduce Cervantes’ Quixote in the Jorge Luis Borges short story.

Whereas a symphony, or any song really, that is performed based off of a score, a notational system, is simply going to be a reproduction of that thing. And this is basically what Walter Benjamin was getting at when he was talking about art in the age of mechanical reproduction, too, right? So, a work that’s based off of a notational system can still count as a work of art.

Like, no one’s going to argue that a symphony doesn’t count as art, or that Mozart wasn’t an artist. And we can extend that to other forms of art that use a notational system as well. Like, I don’t know, architecture. Frank Lloyd Wright didn’t personally build Fallingwater or the Guggenheim, but he created the plans for them, right?

And those were enacted. We can say that, yeah, there’s artistic value there. So these things, composition, architecture, et cetera, are allographic arts, as opposed to autographic arts, things like painting or sculpture, or in some instances, the performance of an allographic work. If I go to see an orchestra playing a symphony, a work based off of a score, I’m not saying that I’m not engaged with art.

And this brings us back to the AI Art question, because one of the arguments you often see against it is that it’s just, you know, typing in some prompts to a computer and then poof, getting some results back. At a very high level, this is an approximation of what’s going on, but it kind of misses some of the finer points, right?

When we look at notational systems, we could have a very, you know, simple set of notes that are there, or we could have a very complex one. We could be looking at the score for Chopsticks or Twinkle Twinkle Little Star, or a long lost piece by Mozart called Serenade in C Major that he wrote when he was a teenager and has finally come to light.

This is an allographic art, and the fact that it can be produced and played 250 years later kind of proves the point. But that difference between simplicity and complexity is part of the key. When we look at the prompts that are input into a computer, we rarely see something with the complexity of say a Mozart.

As we increase the complexity of what we’re putting into one of the generative AI tools, we increase the complexity of what we get back as well. And this is not to suggest that the current AI artists are operating at the level of Mozart either. Some of the earliest notational music we have is found on ancient cuneiform tablets called the Hurrian Hymns, dating back to about 1400 BCE, so it took us a little over 3000 years to get to the level of Mozart in the 1700s.

We can give the AI artists a little bit of time to practice. The generative AI art tools, which are very much in their infancy, appear to be allographic arts, and they’re following in the lineage of procedurally generated art, which has been around for a little while longer. And as an art form in its infancy, there’s still a lot of contested areas.

Whether it counts, the provenance of materials, ethics of where it’s used, all of those things are coming into question. But we’re not going to say that it’s not art, right? And as an art, as work conducted in a new medium, we have certain responsibilities for documenting its use, its procedures, how it’s created.

In the introduction to 2001’s The Language of New Media, Lev Manovich, in talking about the creation of a new medium, digital media in this case, noted how there was a lost opportunity in the late 19th and early 20th century with the creation of cinema. Quote, “I wish that someone in 1895, 1897, or at least 1903 had realized the fundamental significance of the emergence of the new medium of cinema and produced a comprehensive record: interviews with audiences, a systematic account of narrative strategies, scenography, and camera positions as they developed year by year, an analysis of the connections between the emerging language of cinema and different forms of popular entertainment that coexisted with it. Unfortunately, such records do not exist.

Instead, we are left with newspaper reports, diaries of cinema’s inventors, programs of film showings, and other bits and pieces, a set of random and unevenly distributed historical samples. Today, we are witnessing the emergence of a new medium, the metamedium of the digital computer. In contrast to a hundred years ago, when cinema was coming into being, we are fully aware of the significance of this new media revolution. Yet I am afraid that future theorists and historians of computer media will be left with not much more than the equivalents of the newspaper reports and film programs from cinema’s first decades.” End quote.

Manovich goes on to note that a lot of the work that was being done on computers, especially in the 90s, was stuff prognosticating about its future uses, rather than documenting what was actually going on.

And this is the risk that the denialist framing of AI art puts us in. By not recognizing that something new is going on, that art is being created, an allographic art, we lose the opportunity to document it for the future.

And as with art, so too with science. We’ve long noted that there’s an incredible amount of creativity that goes into scientific research, that the STEM fields, science, technology, engineering, and mathematics, require and benefit so much from the arts that they’d be better classified as STEAM, and a small side effect of that may mean that we see better funding for the arts at the university level.

But I digress. In the examples I gave earlier of medical research, of AI being used as an assistive technology, we were seeing some real groundbreaking developments, of boundaries being pushed, and we’re seeing that throughout the science fields. Part of this is because of what AI does well with things like pattern recognition, allowing weather forecasts, for example, to be predicted more quickly and accurately.

It’s also been able to provide more assistance with medical diagnostics and imaging as well. The massive growth in the number of AI-related projects in recent years is often due to the fact that a number of these projects are just rebranded machine learning or deep learning. In a report released by the Royal Society in England in May of 2024 as part of their Disruptive Technology for Research project, they note how, quote, “AI is a broad term covering all efforts aiming to replicate and extend human capabilities for intelligence and reasoning in machines.”

End quote. They go on further to state that, quote, “Since the founding of the AI field at the 1956 Dartmouth Summer Research Project on Artificial Intelligence, many different techniques have been invented and studied in pursuit of this goal. Many of these techniques have developed into their own subfields within computer science, such as expert systems and symbolic reasoning.” End quote.

And they note how the rise of the big data paradigm has made machine learning and deep learning techniques a lot more affordable and accessible, and scalable too. And all of this has contributed to the amount of stuff that’s floating around out there that’s branded as AI. Despite this confusion in branding and nomenclature, AI is starting to contribute to basic science.

A New York Times article published in July by Siobhan Roberts talked about how a couple of AI models were able to compete at the level of a silver medalist at the recent International Mathematical Olympiad. This is the first time that an AI model has medaled at that competition. So there may be a role for AI to assist even high-level mathematicians, to function as collaborators and, again, assistive technologies there.

And we can see this in science more broadly. In a paper submitted to arXiv.org in August of 2024, titled “The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery,” authors Lu et al. use a frontier large language model to perform research independently. Quote, “We introduce the AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a scientific paper, and then runs the simulated review process for evaluation.” End quote.
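To make that loop concrete, here’s a minimal sketch of the stages the quote describes, assuming a generic LLM call; every function name here is my own illustration, not the actual code or API from Lu et al.

```python
# Illustrative sketch only: the stages follow the quote above, but the
# llm() function is a stand-in, not the AI Scientist's actual code.

def llm(prompt: str) -> str:
    """Stand-in for a call to a frontier large language model."""
    return f"<model output for: {prompt[:60]}>"

def ai_scientist(domain: str, n_ideas: int = 3) -> list[dict]:
    papers = []
    for _ in range(n_ideas):
        idea = llm(f"Propose a novel research idea in {domain}")
        code = llm(f"Write experiment code to test: {idea}")
        # In the real system the generated code is actually executed;
        # here the results summary is faked by the same stand-in model.
        results = llm(f"Summarize the results of running: {code}")
        figures = llm(f"Describe plots visualizing: {results}")
        paper = llm(f"Write a scientific paper on {idea}, given {results}")
        review = llm(f"Review this paper as an automated referee: {paper}")
        papers.append({"idea": idea, "figures": figures,
                       "paper": paper, "review": review})
    return papers

print(ai_scientist("machine learning", n_ideas=1))
```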

So, a lot of this is scripts and bots hooking into other AI tools in order to simulate the entire scientific process, and I can’t speak to the veracity of the results that they’re producing in the fields that they’ve chosen. They state that their system can, quote, “produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer.” End quote.

And that’s fine, but it shows that the process of doing the science can be assisted in various realms as well. One of those areas of assistance is in providing help for stuff outside the scope of knowledge of a given researcher. AI as an aid in creativity can help explore the design space and allow for the combination of new ideas outside of everything we know.

As science is increasingly interdisciplinary, we need to be able to bring in more material, more knowledge, and that can be done through collaboration, but here we have a tool that can assist us as well. As we talked about with nescience and Excession a few episodes ago, we don’t know everything. There’s more than we can possibly know, so the AI tools help expand the field of what’s available to us.

We don’t necessarily know where new ideas are going to come from. And if you don’t believe me on this, let me reach out to another scientist who said some words on this back in 1980. Quote, “We do not know beforehand where fundamental insights will arise from about our mysterious and lovely solar system.

And the history of our study of the solar system shows clearly that accepted and conventional ideas are often wrong, and that fundamental insights can arise from the most unexpected sources.” End quote. That, of course, is Carl Sagan, from an October 1980 episode of Cosmos: A Personal Voyage, titled Heaven and Hell, where he talks about the Velikovsky Affair.

I haven’t spliced in the original audio because I’m not looking to grab a copyright strike, but it’s out there if you want to look for it. And what Sagan is describing there is basically the process by which a Kuhnian paradigm shift takes place. Sagan is speaking to the need to reach beyond ourselves, especially in the fields of science, and the AI assisted research tools can help us with that.

And not just in the conduct of the research, but also in the writing and dissemination of it. Not all scientists are strong or comfortable writers or speakers, and many of them come to English as a second, third, or even fourth language. And the role of AI tools as translation devices means we have more people able to communicate and share their ideas and participate in the pursuit of knowledge.

This is not to say that everything is rosy. Are there valid concerns when it comes to AI? Absolutely, yes. We talked about a few at the outset, and we’ve documented a number of them throughout the run of this podcast. One of our primary concerns is the role of the AI tools in échanger, that replacement effect that leads to technological unemployment.

Much of the initial hype and furor around the AI tools was people recognizing that potential for échanger following the initial public release of ChatGPT. There’s also concerns about the degree to which the AI tools may be used as instruments of control, and how they can contribute to what Gilles Deleuze calls a control society, which we talked about in our Reflections episode last year. 

And related to that is the lack of transparency, the degree to which the AI tools are black boxes, where based on a given set of inputs, we’re not necessarily sure about how it came up with the outputs. And this is a challenge regardless of whether it’s a hardware device or a software tool.

And regardless of how the AI tool is deployed, its increased prevalence means we’re heading toward a soylent culture, with an increased amount of data smog, or bitslop, or however you want to refer to the digital pollution that takes place with the increased amount of AI content in our channels and For-You feeds. And this is likely to become even more heightened as Facebook moves to pushing AI-generated posts into the timelines.

Many are speculating that this is becoming so prevalent that the internet is largely bots pushing out AI-generated content, what’s called the “Dead Internet Theory,” which we’ll definitely have to take a look at in a future episode. Hint: the internet is alive and well, it’s just not necessarily where you think it is.

And with all this AI-generated content, we’re still facing the risk of the hallucinations, which we talked about, holy moly, over two years ago when we discussed Loab, that brief little bit of creepypasta that was making the rounds as people were trying out the new digital tools. But the hallucinations still highlight one of the primary issues with the AI tools, and that’s the errors in the results.

In order to document and collate these issues, a research team over at MIT has created the AI Risk Repository. It’s available at airisk.mit.edu. Here they have created taxonomies of the causes and the domains where the risks may take place. However, not all of these risks are equal. One of the primary ones that gets mentioned is the energy usage for AI.

And while it’s not insignificant, I think it needs to be looked at in context. One estimate of global data center usage was between 240 and 340 terawatt-hours a year, which is a lot of energy, and it might be rising, as data center usage for the big players like Microsoft and Google has gone up by around 30 percent since 2022.

And that still might be too low, as one report noted that the actual figure could be as much as 600 percent higher. So when you put that all together, that initial estimate could be anywhere between 1,000 and 2,000 terawatt-hours. But the AI tools are only a fraction of what goes on at the data centers, which include cloud storage and services, streaming video, gaming, social media, and other high-volume activities.

So you bring that number right back down. And how much is AI actually using? The thing is, whatever that number is, 300 terawatt-hours times 1.3, times 6, divided by 5, whatever that result ends up being, it doesn’t even chart when looking at global energy usage. Looking at a recent chart on global primary energy consumption by source over at Our World in Data, we see that the worldwide consumption in 2023 was 180,000 terawatt-hours.
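Just to make that back-of-the-envelope math explicit, here’s the calculation with the rough figures from above; note that the one-fifth divisor for AI’s slice of the data center load follows the episode’s own rough guess, not a published number.

```python
# Back-of-the-envelope math on AI energy use, with the rough figures
# from the paragraph above. All inputs are approximate.

base_low, base_high = 240, 340  # global data center usage, TWh per year
growth = 1.3                    # big players' usage up ~30% since 2022
undercount = 6                  # "600 percent higher", read as a 6x multiplier
ai_share = 1 / 5                # rough guess at AI's slice of the load

world_twh = 180_000             # global primary energy consumption, 2023 (TWh)

for base in (base_low, base_high):
    ai_twh = base * growth * undercount * ai_share
    print(f"base {base} TWh -> AI ~{ai_twh:,.0f} TWh/yr, "
          f"or {ai_twh / world_twh:.2%} of world primary energy")
# Even the aggressive upper bound works out to roughly a third of one
# percent of global primary energy consumption.
```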

The amount of energy potentially used by AI hardly registers as a pixel on the screen compared to worldwide energy usage, yet we’re presented with a picture in the media where AI is burning up the planet. I’m not saying AI energy usage isn’t a concern. It should be green and renewable. And it needs to be verifiable, this energy usage of the AI companies, as there is the risk of greenwashing the work that is done, of painting over their activities’ true energy costs by highlighting their positive impacts for the environment.

And the energy usage may be far exceeded by the water used for the cooling of the data centers. As with the energy usage, the amount of water that’s actually going to AI is incredibly hard to disentangle from all the other activities that are taking place in these data centers. And this greenwashing, which various industries have long been accused of, might show up in another form as well.

There is always the possibility that the helpful stories that are presented, of what AI tools have provided for various at-risk and minority populations, are a form of “aidwashing.” And this is something we have to evaluate for each of the stories posted in the AI Positivity Archive. Now, I can’t say for sure that “aidwashing” exists as a term.

A couple searches didn’t return any hits, so you may have heard it here first. However, while positive stories about AI often do get touted, do we think this is the driving motivation for the massive investment we’re seeing in the AI technologies? No, not even for a second. These assistive uses of AI don’t really work with the value proposition for the industry, even though those street uses of technology may point the way forward in resolving some of the larger issues for AI tools with respect to resource consumption and energy usage.

The AI tools used to assist Casey Harrell, the ALS patient mentioned near the beginning of the show, use a significantly smaller model than ones conventionally available, like those found in ChatGPT. The future of AI may be small, personalized, and local, but again, that doesn’t fit with the value proposition.

And that value proposition is coming under increased scrutiny. In a report published by Goldman Sachs on June 25th, 2024, they question if there’s enough benefit for all the money that’s being poured into the field. In a series of interviews with a number of experts in the field, they note how initial estimates about the cost savings, the complexity of tasks that AI is able to do, and the productivity gains that would derive from it are all much lower than initially proposed, or happening on a much longer time frame.

In it, MIT professor Daron Acemoglu forecasts minimal productivity and GDP growth, around 0.5 percent and 1 percent respectively, whereas Goldman Sachs’ own predictions were closer to a 9 percent productivity gain and a 6 percent increase in GDP. With such widely varying estimates, what the actual impact of AI in the next 10 years will be is anybody’s guess.

It could be at either extreme or somewhere in between. But the main takeaway from this is that even Goldman Sachs is starting to look at the balance sheet and question the amount of money that’s being invested in AI. And that amount of money is quite large indeed. 

In between starting recording this podcast episode and finishing it, OpenAI raised 6.6 billion dollars in a funding round from its investors, including Microsoft and Nvidia, the largest such round ever recorded. As reported by Reuters, this could value the company at 157 billion dollars and make it one of the most valuable private companies in the world. And this coincides with the recent restructuring from a week earlier, which would remove the non-profit control and see it move to a for-profit business model.

But my final question is, would this even work? Because it seems diametrically opposed to what AI might actually bring about. If assistive technology is focused on automation and échanger, then the end result may be something closer to what Aaron Bastani calls “fully automated luxury communism,” where the future is a post-scarcity environment that’s much closer to Star Trek than it is to Snow Crash.

How do you make that work when you’re focused on a for profit model? The tool that you’re using is not designed to do what you’re trying to make it do. Remember, “The street finds its own uses for things”, though in this case that street might be Wall Street. The investors and forecasters at Goldman Sachs are recognizing that disconnect by looking at the charts and tables in the balance sheet.

But their disconnect, the part that they’re missing, is that the driving force toward AI may be one more of ideology. And that ideology is the Californian ideology, a term that’s been floating around since at least the mid-1990s. We’ll take a look at it next episode and return to the works of Lev Manovich, as well as Richard Barbrook, Andy Cameron, and Adrian Daub, along with a recent post by Sam Altman titled ‘The Intelligence Age’.

There’s definitely a lot more going on behind the scenes.

Once again, thank you for joining us on the ImplausiPod. I’m your host, Dr. Implausible. You can reach me at drimplausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I’m responsible for all elements of the show, including research, writing, mixing, mastering, and music.

And perhaps somewhat surprisingly, given the topic of our episode, no AI is used in the production of this podcast, though I think some machine learning goes into the transcription service that we use. The show is licensed under a Creative Commons 4.0 Share-Alike license. You may have noticed at the beginning of the show that we described the show as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated.

You may have also noted that there was no advertising during the program and there’s no cost associated with the show. But it does grow from word of mouth of the community. So if you enjoy the show, please share it with a friend or two, and pass it along. There’s also a Buy Me a Coffee link on each show at implausipod.com, which will go to any hosting costs associated with the show.

I’ve put a bit of a hold on the blog and the newsletter, as WordPress is turning into a bit of a dumpster fire, and I need to figure out how to re-host it. But the material is still up there, and I own the domain. It’ll just probably look a little bit more basic soon.

Join us next time as we explore that Californian ideology; after that, we’ll be asking who Roads are for, and doing a deeper dive into how we model the world. Until next time, take care and have fun.



Bibliography

A bottle of water per email: The hidden environmental costs of using AI chatbots. (2024, September 18). Washington Post. https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/

A Note to Our Community About our Comments on AI – September 2024 | NaNoWriMo. (n.d.). Retrieved October 5, 2024, from https://nanowrimo.org/a-note-to-our-community-about-our-comments-on-ai-september-2024/

Advances in Brain-Computer Interface Technology Help One Man Find His Voice | The ALS Association. (n.d.). Retrieved October 5, 2024, from https://www.als.org/blog/advances-brain-computer-interface-technology-help-one-man-find-his-voice

Balevic, K. (n.d.). Goldman Sachs says the return on investment for AI might be disappointing. Business Insider. Retrieved October 5, 2024, from https://www.businessinsider.com/ai-return-investment-disappointing-goldman-sachs-report-2024-6

Broad, W. J. (2024, July 29). Artificial Intelligence Gives Weather Forecasters a New Edge. The New York Times. https://www.nytimes.com/interactive/2024/07/29/science/ai-weather-forecast-hurricane.html

Card, N. S., Wairagkar, M., Iacobacci, C., Hou, X., Singer-Clark, T., Willett, F. R., Kunz, E. M., Fan, C., Nia, M. V., Deo, D. R., Srinivasan, A., Choi, E. Y., Glasser, M. F., Hochberg, L. R., Henderson, J. M., Shahlaie, K., Stavisky, S. D., & Brandman, D. M. (2024). An Accurate and Rapidly Calibrating Speech Neuroprosthesis. New England Journal of Medicine, 391(7), 609–618. https://doi.org/10.1056/NEJMoa2314132

Consumers Know More About AI Than Business Leaders Think. (2024, April 8). BCG Global. https://www.bcg.com/publications/2024/consumers-know-more-about-ai-than-businesses-think

Cosmos. (1980, September 28). [Documentary]. KCET, Carl Sagan Productions, British Broadcasting Corporation (BBC).

Donna. (2023, October 9). Banksy Replaced by a Robot: A Thought-Provoking Commentary on the Role of Technology in our World, London 2023. GraffitiStreet. https://www.graffitistreet.com/banksy-replaced-by-a-robot-a-thought-provoking-commentary-on-the-role-of-technology-in-our-world-london-2023/

Gen AI: Too much spend, too little benefit? (n.d.). Retrieved October 5, 2024, from https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit

Goodman, N. (1976). Languages of Art (2 edition). Hackett Publishing Company, Inc.

Goodman, N. (1978). Ways Of Worldmaking. http://archive.org/details/GoodmanWaysOfWorldmaking

Hill, L. W. (2024, September 11). Inside the Heated Controversy That’s Tearing a Writing Community Apart. Slate. https://slate.com/technology/2024/09/national-novel-writing-month-ai-bots-controversy.html

Hu, K. (2024, October 3). OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia. Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-closes-66-billion-funding-haul-valuation-157-billion-with-investment-2024-10-02/

Hu, K., & Cai, K. (2024, September 26). Exclusive: OpenAI to remove non-profit control and give Sam Altman equity. Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/

Knight, W. (n.d.). An ‘AI Scientist’ Is Inventing and Running Its Own Experiments. Wired. Retrieved September 9, 2024, from https://www.wired.com/story/ai-scientist-ubc-lab/

LaBossiere, M. (n.d.). AI: I Want a Banksy vs I Want a Picture of a Dragon. Retrieved October 5, 2024, from https://aphilosopher.drmcl.com/2024/04/01/ai-i-want-a-banksy-vs-i-want-a-picture-of-a-dragon/

Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., & Ha, D. (2024, August 12). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. arXiv.Org. https://arxiv.org/abs/2408.06292v3

Manovich, L. (2001). The language of new media. MIT Press.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Mickle, T. (2024, September 23). Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm. The New York Times. https://www.nytimes.com/2024/09/23/technology/ai-jim-covello-goldman-sachs.html

Milman, O. (2024, March 7). AI likely to increase energy use and accelerate climate misinformation – report. The Guardian. https://www.theguardian.com/technology/2024/mar/07/ai-climate-change-energy-disinformation-report

Mueller, B. (2024, August 14). A.L.S. Stole His Voice. A.I. Retrieved It. The New York Times. https://www.nytimes.com/2024/08/14/health/als-ai-brain-implants.html

Overview and key findings – World Energy Investment 2024 – Analysis. (n.d.). IEA. Retrieved October 5, 2024, from https://www.iea.org/reports/world-energy-investment-2024/overview-and-key-findings

Roberts, S. (2024, July 25). Move Over, Mathematicians, Here Comes AlphaProof. The New York Times. https://www.nytimes.com/2024/07/25/science/ai-math-alphaproof-deepmind.html

Schacter, R. (2024, August 18). How does Banksy feel about the destruction of his art? He may well be cheering. The Guardian. https://www.theguardian.com/commentisfree/article/2024/aug/18/banksy-art-destruction-graffiti-street-art

Science in the age of AI | Royal Society. (n.d.). Retrieved October 2, 2024, from https://royalsociety.org/news-resources/projects/science-in-the-age-of-ai/

Sullivan, S. (2024, September 25). New Mozart Song Released 200 Years Later—How It Was Found. Woman’s World. https://www.womansworld.com/entertainment/music/new-mozart-song-released-200-yaers-later-how-it-was-found

Taylor, C. (2024, September 3). How much is AI hurting the planet? Big tech won’t tell us. Mashable. https://mashable.com/article/ai-environment-energy

The AI Risk Repository. (n.d.). Retrieved October 5, 2024, from https://airisk.mit.edu/

The Intelligence Age. (2024, September 23). https://ia.samaltman.com/

What is NaNoWriMo’s position on Artificial Intelligence (AI)? (2024, September 2). National Novel Writing Month. https://nanowrimo.zendesk.com/hc/en-us/articles/29933455931412-What-is-NaNoWriMo-s-position-on-Artificial-Intelligence-AI

Wickelgren, I. (n.d.). Brain-to-Speech Tech Good Enough for Everyday Use Debuts in a Man with ALS. Scientific American. Retrieved October 5, 2024, from https://www.scientificamerican.com/article/brain-to-speech-tech-good-enough-for-everyday-use-debuts-in-a-man-with-als/

GPT Squared

(this was originally published as Implausipod E0031 on March 31, 2024)


https://www.implausipod.com/1935232/episodes/14799673-e0031-gpt-squared


Are the new GPTs – Generative Pre-trained Transformers – powering the current wave of AI tools actually the emergence of a new GPT – General Purpose Technology – that we will soon find embedded into every aspect of our lives? Earlier examples of GPTs include tech like steam power, electricity, radio, and computing, all tech that is foundational to our modern way of life. Will our AI tools soon join this pantheon as another long wave of technological progress begins?


Let’s start with the question. Do you remember a time before electricity? Unless this show is vastly more popular with time travelers and certain vampires than I thought, the answer is probably not. But now, in 2024, it’s literally everywhere. It’s sublimated into the background. It’s become part of the infrastructure, and we no longer really think about it.

We flip a switch and the lights go on, and we can find a plug almost anywhere to recharge our devices and take that electricity with us on the go. But how long did it take to get to that point? The answer is longer than you think. The means of generating electricity was invented in 1832, but it took half a century for it to become commercially viable, and then from there another 70 years to effectively transform our lives with everything from lights and appliances to communication devices like radios and television.

And even now we’re still feeling the effects of that transformation as we move to electric-powered vehicles for personal use. So across all those decades, it took a long time for electricity to go from concept to application to becoming a general purpose technology, or GPT. And in 2024, we’re just starting to feel the impacts of another GPT: the generative pre-trained transformers that are powering the current wave of AI tools.

So the question we’re really trying to answer is: are these current GPTs a new GPT, or what we might call GPT squared? That’s the subject of this week’s episode of the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in this episode, we’ll be exploring exactly what a GPT is, a general purpose technology, that is, and how they have had a massive impact on society. By looking at the definition and some commonalities amongst them, we’ll be able to evaluate whether the current GPTs, the generative pre-trained transformers, are going to have the same impact, or whether they qualify as a GPT at all.

As always, I’m using a couple of references for this, and I’ll put the bibliography in the show notes so you can track back the people we’re citing here. For us, the two main sources are going to be “Engines of Growth” by Bresnahan and Trajtenberg, from 1992 and 1995, and Mordecai Kurz’s The Market Power of Technology: Understanding the Second Gilded Age, a book he published in 2023.

Kurz is a professor of economics at Stanford University, and his first book was published back in 1970, so he’s literally been doing this longer than I’ve been alive. And in addition to those, I’m sure we’ll fold in a few more references as required. Now, for the first half of this episode, whenever I mention GPT, we’re going to be explicitly talking about general purpose technology, so I’ll call out the AI tools when mentioned.

And we’ll get to the discussion of those in the second half, after we talk about the cyclical nature of technological development. But for the moment, we should get right down to business and find out exactly what a GPT is. A general purpose technology is basically that: a technology that can be applied broadly to virtually all sectors of the economy.

And by doing so, it can change the way that society functions. GPTs do this by being pervasive, in that they can be used in a wide variety of functions. And they also do this by sublimating into the background; as David E. Nye notes in his history of the electrification of America, once they’re part of the infrastructure, we can stop thinking about them and use them in almost any function.

Now, it took a little while for electricity to get to that point, but that’s part of their nature: a general purpose technology will evolve and advance and spread throughout the economy. For previous instances of GPTs, that led to productivity gains in a wide number of areas, but even if we’re not specifically looking at productivity growth, we can still see how they have beneficial impacts.

Now, in the original study on GPTs, Bresnahan and Trajtenberg’s “Engines of Growth” from 1992, they looked at three particular case studies: steam power, the electric motor, and the integrated circuit. And by studying these GPTs, they were able to come up with some basic characteristics. The first and most obvious is that they’re general purpose: the function they provide is generic, and because of that generic nature, it can be applied in a lot of different contexts.

If we think of all the ways that the continuous rotary motion that was provided by steam power and then electric engines has been adapted and serves throughout our economy, it’s massive, it’s fascinating. And once the production of the integrated circuit really started taking off in the 60s and 70s, it became a product that could be embedded in almost anything, and very nearly has.

This has obviously scaled over time, as integrated circuits have followed Moore’s Law, providing exponential growth in the amount of circuitry that can fit in the same space, and complementary technologies like batteries have also improved and shrunk and been able to service chips that have gotten more and more power-efficient over time, leading to even more widespread adoption.
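To put rough numbers on that exponential curve, here’s a toy calculation; the 1971 starting point (the Intel 4004’s roughly 2,300 transistors) is the only real data point, and the clean doubling-every-two-years rate is an idealization.

```python
# Toy illustration of Moore's Law: transistor counts doubling roughly
# every two years, starting from the Intel 4004 (~2,300 transistors, 1971).

transistors, year = 2_300, 1971
for _ in range(5):
    year += 10                 # step a decade at a time
    transistors *= 2 ** 5      # five doublings per decade
    print(f"{year}: ~{transistors:,} transistors per chip")
# By 2021 this idealized curve lands in the tens of billions, which is
# the right order of magnitude for the largest chips actually shipping.
```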

And this brief description of the integrated circuit’s spread hints at the second and third characteristics. The second is that GPTs have technological dynamism: continuous work is done to innovate and improve the basic technology, making it more efficient over time. This is why you often see the costs to use a GPT drop over time.

And that’s why it shows up in more and more parts of our society. And the third characteristic of GPTs that Bresnahan and Trattenberg talk about are the innovational complementarities that technical advances in the GPT make it more profitable for its users to innovate and vice versa. And we can see hints of that with how the improving battery technologies went hand in hand with the development of integrated circuits.

One of the things that B&T note, especially in their example of the steam engine and electric motor, is that the function a GPT provides isn’t necessarily obvious with respect to some of the jobs it ends up doing: the continuous rotary motion that is now used in everything from sewing to polishing to cutting wasn’t necessarily seen as something that could be adapted to those tasks.

So the people that were doing them were surprised when there was a technological replacement for the things that they were doing. Let’s put a pin in that idea and we’ll come back to it in about 10 or 15 minutes. Sometimes the way that the GPT is applied is inefficient initially, but as price and performance ratios improve, as the technology and the complementary technologies around it improve, then it becomes more feasible.

Sometimes those payoffs come quickly, but often it takes a long time for a GPT to get distributed throughout the economy. In the case of electric motors, B&T note that it took about three decades to go from 5 percent of the installed horsepower in the U.S. to over 80 percent by 1930. And those productivity gains came because everything was getting electrified at the same time.

The infrastructure was there. And these all go hand in hand; they’re complementary. B&T quote at length from Rosenberg, from 1982. Quote, “The social payoff to electricity would have to include not only lower energy and capital costs, but also the benefits flowing from the newfound freedom to redesign factories with a far more flexible power source.

The steam engine required clumsy belting and shafting techniques for the transmission of power within the plant. These methods imposed serious constraints upon the organization and flow of work, which had to be grouped according to their power requirements close to the energy source. With the advent of fractionalized power made possible by electricity and the electric motor, it now became possible to provide power in very small, less costly units.

This flexibility made possible a wholesale reorganization of work arrangements, and in this way made a wide and pervasive contribution to productivity growth throughout manufacturing. Machines and tools could now be put anywhere efficiency dictated, not where belts and shafts could most easily reach them.” End quote.

Now, I want to state that I’m not a member of the cult of efficiency by any means, and that Rosenzberg’s claim that quote, it’s not some contradiction to that Foucauldian argument that the architecture of society is shaped by the architecture of our factories and our other buildings. That we have some Deleuzian form of control society because the very hierarchy of the way our power is distributed within our factories lends itself to certain forms of social organization.

Far from it: I think these are saying exactly the same thing, just from different perspectives. The subtext of all these articles is that to get past those hierarchical forms, you need to find different ways to distribute the power. And by doing so, you can have very liberating effects on society as a whole.

Ultimately, this is what a GPT is and what it provides. As Kurz notes, GPTs reflect fundamental changes in the state of human knowledge that occur maybe once in a generation or once in a century. They are technologies that enable a paradigm shift, and as Kurz notes, quote,

we need to distinguish between small changes within a given technological paradigm and revolutionary technologies that change everything.

End quote. A GPT serves as a founding technology, or platform, for further technological innovation. And because of that, it’s really important to note something about the work that goes into the development of a GPT. As both B&T and Kurz note, quote, it is vital to keep in mind the distinction between innovations within the paradigm of a GPT

and innovation of a new technological paradigm, or a new GPT. Some GPTs, like electricity or IT, change everything and ultimately transform the entire economy. Others, like the discovery of DNA and genetic sequencing, completely change only a segment of the economy, as we’ve seen with CRISPR and genetic engineering.

And this idea of a paradigm shift is perhaps one of the most central features of the introduction of a new GPT, especially if you’re a large incumbent firm well established within the current dominant technological paradigm. For, you see, a paradigm shift threatens to upset the natural order of things, in which the large incumbent firms exercise their market power and use small firms operating within that paradigm effectively as research labs, acquiring them if they happen to develop a patent or an innovation that proves to be useful or threatens the incumbents’ own dominance within the marketplace. These patterns have been well observed historically within the development of electricity, with the rollout of radio and television, with the early computing industry, and can even be seen within 21st century industries, where a dominant player like Facebook will acquire an Instagram or a WhatsApp that may threaten its dominance. And if they’re unable to acquire those competitors outright, they may exert their market power through lobbying or other efforts in order to challenge them, as we’re seeing currently in the United States with the proposed TikTok ban of March 2024. This is all standard operating procedure.

It’s the way these things seem to work. But when a new technology comes around, when the paradigm shifts, that’s when things get interesting. As Kurz notes, it’s a period where, quote, the most intense technological competition arises when a new GPT is invented. This leads to the eruption of economy-wide technological competition in which winners begin the long journey to consolidate market power. End quote.

During that period, we’ll either see new players rise to the level of the incumbents, pushing out the old dominant players that can’t adapt. Or we’ll see those dominant players do everything they can to try and keep their hand in the game. Which is what we’re starting to see already within the field of AI, which is one of the reasons we suggest it might be a new GPT, a General Purpose Technology.

But as we’ve hinted at, these things go in cycles, so let’s look at what some of the earlier ones were.

The idea that the economy behaves in a cyclical manner was first introduced almost 100 years ago, in the 1920s, by Nikolai Kondratiev, and these long waves have subsequently been named in his honor. Kondratiev hypothesized that the cycles were due to the underlying technological basis of society, the technological paradigms that we’ve been discussing in the first half of this episode, with boom and bust cycles that take place over a period of roughly 50 to 60 years.

Now, the Kondratiev waves, or what are sometimes called long waves or carrier waves, are only one of the various economic waves or cycles that have been observed. Others, including those proposed by Kuznets, Juglar, and Kitchin, look at things like infrastructure or investment or even inventory for various products, and development time frames can have a major impact on all of this as well.

When you map these all out on a timeline, the various economic waves can all seem to interact, much like overlapping sine waves in a synthesizer, where the sum of the smaller waves occasionally comes together in a much larger peak, or like ocean waves that come together out of nowhere and suddenly form a rogue wave big enough to sink a ship.
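
To make that image of overlapping waves concrete, here’s a minimal sketch in Python. It’s purely illustrative: the period lengths are the conventional ones attributed to each cycle, the amplitudes are arbitrary and equal, and nothing here is fitted to real economic data.

```python
import math

# Rough conventional period lengths, in years, for the cycles mentioned:
# Kitchin (inventory), Juglar (investment), Kuznets (infrastructure),
# and Kondratiev (the long wave). Amplitudes here are arbitrary.
cycles = {"Kitchin": 4, "Juglar": 9, "Kuznets": 20, "Kondratiev": 55}

for year in range(0, 121, 10):
    # Sum the component sine waves; when their phases happen to align,
    # the combined signal spikes, like the rogue wave described above.
    combined = sum(math.sin(2 * math.pi * year / period)
                   for period in cycles.values())
    print(f"year {year:3d}: combined amplitude {combined:+.2f}")
```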

When Kondratiev originally observed that long, roughly 60-year period, he said there were three phases to the cycle: a period of expansion, stagnation, and recession, and nowadays we’ve added collapse into that as well. Writing in the 1920s, he identified a number of periods over which these cycles had already taken place, starting with the Industrial Revolution, followed by the Age of Steam and the expansion of the railways, and then the subsequent rise of electric power that took place, as noted, between the 1890s and 1930s in North America.

Since then, we’ve seen the cycle continue in two other long waves: the rise of the internal combustion engine and the associated technologies it facilitated, like the automobile and air flight, and then the rise of the microchip and the transformation that computing and communication technologies worked across the face of the modern world.

Now, the idea of an economic long wave has had an enduring appeal. People have taken the theory and cast it back earlier in time, and a lot of predictions have come about trying to guess what the sixth long wave will be; again, that’s five that we’ve had so far if we start at the Industrial Revolution.

Some of the possible contenders as a driver for the sixth Kondratiev wave include renewable energy and green technologies, as proposed by Moody and Nogrady, or biotechnology, as proposed by Leo Nefiodow back in 1996. And while those are strong contenders, they haven’t necessarily turned into the drivers of economic change that we might have expected.

They may still yet, but in some ways they lack the general purpose nature of the technologies that we’ve seen as drivers of previous long waves. In 2024, it looks like another contender has emerged: a GPT built out of GPTs, the generative pre-trained transformers that power our AI tools. So based on the three characteristics of GPTs that we mentioned earlier, we’ll take a closer look and see if those AI tools might qualify.

As we said earlier, with Bresnahan and Trajtenberg’s definition of a GPT, the three characteristics were general purposeness, technological dynamism, and innovational complementarities. Within their paper from 1992, they use the case study of semiconductor technology, which was the dominant GPT of the 1990s.

At the time they were writing, the pervasiveness of computing had already been assumed, but initially that assumption wasn’t the case. Hence early prognostications like Thomas Watson of IBM famously, if perhaps apocryphally, saying, quote, I think there is a world market for maybe five computers, end quote, a prediction that turned out to be drastically wrong.

By the 1970s, the integrated circuit was well developed and its use in the computing mainframes of large banks was already well underway. What allowed electronic circuits to become a general purpose technology was that they could work inside virtually any system. Those systems could be rationalized and broken down into their component activities, and each of those activities could be replaced with an integrated circuit or a transistor at certain stages.

And if you can break the steps down to something that can be replicated by binary logic, like ones and zeros, with gates opening or switches turning on and off, then you can apply it anywhere within a production process. It meant that there was a wide range of technological processes that, at root, were pretty simple operations.
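
As a toy illustration of that decomposition (my own example, not one from B&T), here’s how a simple mechanical safety interlock might be re-expressed as binary logic, with each condition reduced to a one-bit input:

```python
# A hypothetical machine-guard interlock, decomposed into boolean steps.
# The whole mechanical linkage reduces to a single AND gate over
# three one-bit inputs: switches on or off, gates open or closed.
def motor_enabled(power_on: bool, guard_closed: bool, pedal_pressed: bool) -> bool:
    return power_on and guard_closed and pedal_pressed

print(motor_enabled(True, True, True))   # True: all conditions met, motor runs
print(motor_enabled(True, False, True))  # False: guard open, motor stays off
```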

But as B&T note here, even though substituting binary logic for a mechanical part was often very inefficient, because you might have to increase the number of steps in order to accomplish something with binary logic, as the price dropped on the circuits and more and more processes could be included within one circuit, it became much easier to actually implement electronic circuits within a system. And as the costs came down and the processes were improved, they became more widely implemented within a lot more sectors of the economy, to the point that now they’re basically everywhere. So, do our current GPTs, the current crop of AI tools, exhibit these same characteristics?

Is there a general purposeness to them? Well, a qualified yes. I think when it comes to the current AI tools, we need to recognize a few things. The first is that they’re part of a much longer process: a lot of the tools that we’re seeing right now were, two years ago, called machine learning tools, and they’ve just been rebranded as AI tools with the popularity of ChatGPT and some of the AI art tools like Stable Diffusion and Midjourney.

So both the history of the technology and its implications go back much further, and its actual uses are much broader than we’re currently seeing. Thinking about the range of industries where I’ve seen AI tools adopted, they far exceed just the large language models popularized by ChatGPT, or the art tools that we’re increasingly seeing online.

We’re seeing machine learning algorithms deployed in everything from photography to astronomy, to health, to production, to robotics, to website design, to audio engineering, and a whole host of other industries. And this partially explains why we’re seeing so many companies involved, which feeds directly into the second characteristic of GPTs: the dynamism, the continuous innovation being brought forth by the companies that are currently developing those AI tools.

Now, is every one of them going to be a hit? No, there are a lot of places where AI absolutely should not be involved. But some of them are going to be creating tools that are well suited to the application of AI. And just as the early days of electricity and radio and television all saw a lot of different ways that people tried to apply the new technology to their particular field or product or problem, we’re seeing a lot of that with AI right now, as just about any company that has a machine learning model is either rebranding it or adapting it to the use of AI. I think a lot of people are recognizing that AI tools could be that general purpose technology that would be applicable to whatever their given field is.

There’s definitely a speculative resource-rush component that’s driving some of this growth; there are a lot of people getting into the market. But as Mordecai Kurz points out, there’s a difference between working within the new paradigm created by a GPT, which a lot of these companies are doing, and working directly on the GPT itself. Those working directly on the AI tools, like OpenAI, are the ones looking to become the new incumbents, which goes a long way in explaining why Microsoft has reached out and partnered with OpenAI in the development of their tools.

Incumbents that are lagging behind in the development of the tools may soon find themselves locked out, so a company that was dominant within the previous paradigm, like Apple, that currently doesn’t have much in the way of AI development, could be in a precarious position as things change and the cycle of technology continues.

Now, the last characteristic of a GPT was the complementarity that allows for other innovations to take place. And I think at this point, it’s still too soon to tell. We can speculate about how AI may interface with other technologies, but for now, the most interesting ones look to be things like robotics and drones.

Seeing how a tool like OpenAI’s can integrate with the robots from Boston Dynamics, or the recent announcement of the Fusion AI model that can provide robotic workers for Amazon’s warehouses, both hint at where some of this may be going. It may seem like the science fiction of 30 or 40 or 50 years ago, but as it was written back then, the future is already here, it’s just not evenly distributed yet.

Ultimately, the labeling of a technological era, or a GPT, or a Kondratiev wave is something that’s done historically. Looking back from a distant vantage point, it can be confirmed: yes, this is what took place, and this was the dominant paradigm. But from our vantage point right now, there are definitely signs, and it looks like the GPTs, the generative pre-trained transformers, may be the GPT we need to deal with as the wave rises and falls.

Once again, thanks for joining us on this episode of the Implausipod. I’ve been your host Dr. Implausible, responsible for the research, writing, editing, mixing, and mastering. You can reach me at drimplausible at implausipod.com, and check out our episode archive at implausipod.com as well. I have a few quick announcements.

Depending on scheduling, I should have another tech based episode about the architecture of our internet coming out in the next few weeks. And then around the middle of April, we’ll invite some guests to discuss the first episode of the Fallout series airing on Amazon Prime. Or streaming on Amazon Prime, I guess.

Nothing’s really broadcast anymore. Following that, to tie in with another Jonathan Nolan series and its linkages to AI, we’re going to take a look at Westworld season one. And if you’ve been following our Appendix W series on the sci-fi prehistory of Warhammer 40,000, we’re going to spin the next episode off into its own podcast.

Starting on April 15th, we’re currently looking at Joe Haldeman’s 1974 novel, The Forever War. So if you’d like to read ahead and send us any questions you might have about the text, you can send them to drimplausible at implausipod.com. We will keep the same address, but the website for Appendix W should now be available.

Check it out at appendixw.com, and we’ll start moving those episodes over to there. You can also find the transcript-only version of those episodes up on YouTube; just look for Appendix W in your search bar. We’ve made the first few available, and as I finish off the transcription, I’ll move more and more over.

And just a reminder that both the Implausipod and the Appendix W podcast are licensed under a Creative Commons ShareAlike 4.0 license. And we look forward to having you join us for the upcoming episodes soon. Take care, have fun.


Bibliography
Bresnahan, T. F., & Trajtenberg, M. (1995). General purpose technologies “Engines of growth”? Journal of Econometrics, 65(1), 83–108.

Kurz, M. (2023). The Market Power of Technology: Understanding the Second Gilded Age. Columbia University Press.

Nye, D. E. (1990). Electrifying America: Social meanings of a new technology, 1880-1940. MIT Press.

Rosenberg, N. (1982). Inside the black box: Technology and economics. Cambridge University Press.

Winner, L. (1993). Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology. Science Technology & Human Values, 18(3), 362–378.

Black Boxes and AI

(this was originally published as Implausipod E0028 on February 26, 2024)

https://www.implausipod.com/1935232/episodes/14575421-e0028-black-boxes-and-ai

How does your technology work? Do you have a deep understanding of the tech, or is it effectively a “black box”? And does this even matter? We do a deep dive into the history of the black box, how it’s understood when it comes to science and technology, and what that means for AI-assisted science.


On January 9th, 2024, Rabbit Inc. introduced the R1, a handheld device that would let you get away from using apps on your phone by connecting them all together through the power of AI. The handheld device is aimed at consumers and is about half the size of an iPhone, and as the CEO claims, it is, quote, the beginning of a new era in human-machine interaction, end quote.

By using what they call a large action model, or LAM, it’s supposed to interpret the user’s intention and behavior and allow them to use their apps quicker. It’s acceleration in a box. But what exactly does that box do? When you look at a new tool from the outside, it may seem odd to trust all your actions to something when you barely know how it works.

But let me ask you, can you tell me how anything you own works? Your car, your phone, your laptop, your furnace, your fridge, anything at all. What makes it run? I mean, we might have some grade school ideas from a Richard Scarry book or a past episode of How It’s Made, but what makes any of those things that we think we know different from an AI device that nobody’s ever seen before?

They’re all effectively black boxes. And we’re going to explore what that means in this episode of the Implausipod.

Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in all this discussion of black boxes, you might have already formed a particular mental image. The most common one is probably that of the airline flight recorder, the device that’s embedded in every modern airplane and becomes the subject of a frantic search in case of an accident.

Now, the thing is, they’re no longer black, they’re rather a bright orange, much like the Rabbit R1 that was demoed. But associating black boxes with the flight recorder isn’t that far off, because its origin was tied to that of the airline industry, specifically in World War II, when the massive amount of flights generated a need to find out what was going on with the planes that were flying continual missions across the English Channel.

Following World War II, the use of black boxes expanded as the industry shifted from military to commercial applications. I mean, the military still used them too; it was useful to find out what was going on with the flights. But the fact that they became embedded within commercial aircraft and were used to test conditions and find out what happened, so that things could be fixed and made safer and more reliable overall, meant that their existence and use became widely known.

And by using them to figure out the cause of accidents and increase reliability, they were able to increase trust, to the point that air travel was less dangerous than the drive to the airport in your car, and few, if any, passengers had many qualms left about the safety of the flight.

And while this is the origin of the black box, in other areas it can have a different meaning. In fields like science or engineering or systems theory, a black box can be something complex that’s just judged by its inputs and outputs. Now, that could be anything from something as simple as an integrated circuit or a guitar pedal, to something complex like a computer or your car or furnace or any of those devices we talked about before, but it could also be something super complex like an institution or an organization or the human brain or an AI.

I think the best way to describe it is an old New Yorker cartoon that had a couple of scientists in front of a blackboard filled with equations, and in the middle of it, it says: then a miracle occurs. It’s a good joke. Everyone thinks it’s a Far Side cartoon, but it was actually done by Sidney Harris. The point being that right now, in 2024, it looks like we have that miracle.

It’s called AI.

So how did we get to thinking that AI is a miracle product? I mean, aside from using the LLMs and generative art tools, things like DALL-E and Sora, and seeing the results, well, as we’ve spent the last couple episodes kinda setting up, a lot of this can occur through the mythic representations of it that we often have in pop culture.

And we have lots of choices to choose from. There’s lots of representations of AI in media in the first nearly two and a half decades of the 21st century. We can look at movies like Her from 2013, where the virtual assistant of Joaquin Phoenix becomes a romantic liaison. Or how Tony Stark’s supercomputer Jarvis is represented in the first Iron Man film in 2008.

Or, for a longer, more intimate look, the growth and development of Samaritan through the five seasons of the CBS show Person of Interest, from 2011 through 2016. And I’d be remiss if I didn’t mention their granddaddy, HAL, from Kubrick’s 2001: A Space Odyssey in 1968. But I think we’ll have to return to that one a little bit more in the next episode.

The point being that we have lots of representations of AI or artificial intelligences that are not ambulatory machines, but are actually just embedded within a box. And this is why I’m mentioning these examples specifically, because they’re more directly relevant to our current AI tools that we have access to.

The way that these ones are presented not only shapes the cultural form of them, but our expected patterns of use. And that shaping of technology is key: by treating AI as a black box, something that can take almost anything from us and output something magical, we project a lot of our hopes and fears upon what it might actually be capable of accomplishing.

What we’re seeing with extended use is that the capabilities might be a little bit more limited than originally anticipated. But every time something new gets shown off, like Sora or the rabbit or what have you, then that expectation grows again, and the fears and hopes and dreams return. So because of these different interpretations, we end up effectively putting another black box around the AI technology itself, which to reiterate is still pretty opaque, but it means our interpretation of it is going to be very contextual.

Our interpretation of the technology is going to be very different based on our particular position or our goals, what we’re hoping to do with the technology or what problems we’re looking for it to solve. That’s something we might call interpretive flexibility, and that leads us into another black box, the black box of the social construction of technology, or SCOT.

So SCOT is one of a cluster of theories or models within the field of science and technology studies that aims at a sociological understanding of technology. Originally presented in 1987 by Wiebe Bijker and Trevor Pinch, a lot of work was being done within the field throughout the 80s, 90s, and early 2000s, when I entered grad school.

So if you studied technology as I did, you’d have to grapple with SCOT and the STS field in general. One of the arguments that Pinch and Bijker were making was that science and technology were both often treated as black boxes within their fields of study. Now, they were drawing on earlier scholarship for this.

One of their key sources was Layton, who in 1977 wrote, quote, what is needed is an understanding of technology from inside, both as a body of knowledge and as a social system. Instead, technology is often treated as a black box whose contents and behavior may be assumed to be common knowledge. End quote. So whether the study was of a field of science, the science itself was irrelevant; it didn’t have to be known. It could just be treated as a black box and the theory applied to whatever particular thing was being studied. The same went for people studying innovation, who had all the interest in the inputs to innovation but no particular interest or insight into the technology on its own.

So obviously the studies up to 1987 had a bit of a blind spot in what they were looking at. And Pinch and Bijker argue that it’s more than just the users and producers: any relevant social group that might be involved with a particular artifact needs to be examined when we’re trying to understand what’s going on.

Now, their arguments about interpretive flexibility and relevant social groups are just another way of saying that the street finds its own uses for things, to borrow the quote from William Gibson. But their main point is that even during the early stages, all these technologies have different groups that are using them in different ways, according to their own needs.

Over time, it kind of becomes rationalized; it’s something that they call closure, where the technology becomes, you know, what we all think of it as. We could look at, say, an iPhone, to use one recent example, as being pretty much static now. There’s some small innovations, incremental innovations, that happen on a regular basis.

But, by and large, the smartphone as it stands is kind of closed. It’s just the thing that it is now, and there isn’t a lot of innovation happening there anymore. But perhaps I’ve said too much; we’ll get to the iPhone and the details of that at a later date. The thing is that once the technology becomes closed like that, it returns to being a black box.

It is what we thought it is, you know? And so if you ask somebody what a smartphone is and how it works, those are kind of irrelevant questions. A smartphone is what a smartphone is, and it doesn’t really matter how the insides work; its product is its output. It’s what it’s used for. Now, this model of a black box with respect to technology isn’t without its critiques.

Six years after its publication, in 1993, the academic Langdon Winner wrote a critique of SCOT and the works of Pinch and Bijker. It was called Upon Opening the Black Box and Finding it Empty. Now, Langdon Winner is well known for his 1980 article, Do Artifacts Have Politics?, and I think that that text in particular is, like, required reading.

So let’s bust that out in a future episode and take a deep dive on it. But in the meantime, the critique that he had with respect to social constructivism lies in four main areas. The first one is the consequences. This is from page 368 of his article, where he says the problem is that the social constructivists are so focused on what shapes the technology, on what brings it into being, that they don’t look at anything that happens afterwards: the consequences.

And we can see that with respect to AI, where there’s been a lot of work on the development, but now people are actually asking, hey, what are the impacts of this getting introduced at large scale throughout our society? So we can see how our own discourse about technology is actually looking at the impacts, and this is something that was kind of missing from the theoretical point of view back in 1987. Now, I’ll argue that there’s value in understanding how we came up with a particular technology and how it’s formed, so that you can see those signs again when they happen. And one of the challenges whenever you’re studying technology is looking at something that’s incipient or under development and being able to pick the next big one.

Well, with AI, we’re already past that point. We know it’s going to have a massive impact; the question is, what are going to be the consequences of that impact? How big of a crater is that meteorite going to leave? Now, for Winner, a second critique is that SCOT looks at all the people that are involved in the production of a technology, but not necessarily at the groups that are excluded from that production.

For AI, we can look at the tech giants and the CEOs, the people doing a lot to promote and roll out this technology, as well as those companies that are adopting it, but we’re often not seeing the impacts on those who are going to be directly affected by the large-scale introduction of AI into our economy.

We saw it a little bit with the Hollywood strikes of 2023, but again, those are the high-profile cases and not the vast majority of people that will be impacted by the deployment of a new technology. And this feeds right into the third critique, that SCOT focuses on certain social groups but misses the larger impact, or even the dynamics, of what’s going on.

That is, how technological change may have impacts much wider across our, you know, civilization. And by ignoring these larger-scale social processes, the deeper, as Langdon Winner says, cultural, intellectual, or economic origins of social choices about technology remain hidden; they remain obfuscated, they remain part of the black box and closed off.

And this ties directly into Winner’s fourth critique as well: that when SCOT is looking at a particular technology, it doesn’t necessarily make a claim about what it all means. Now, in some cases that’s fine, because it’s happening in the moment; the technology is dynamic and currently under development, like what we’re seeing with AI.

But if you’re looking at something historical that’s been going on for decades and decades, like, oh, the black boxes we mentioned at the beginning, the flight recorders that we started the episode with, that’s pretty much a set thing now. The only question arises when, say, a new accident happens and we have a search for the recorder.

But by and large, that’s a set technology. Can’t we make an evaluative claim about that, what that means for us as a society? I mean, there’s value in an analysis of maintaining some objectivity and distance, but at some point you have to be able to make a claim. Because if you don’t, you may just end up providing some cover by saying that the construction of a given technology is value neutral, which is what that interpretive flexibility is basically saying.

Near the end of the paper, in his critique of another scholar by the name of Stephen Woolgar, Langdon Winner states, Quote, power holders who have technological megaprojects in mind could well find comfort in a vision like that now offered by the social constructivists. Unlike the inquiries of previous generations of critical social thinkers, social constructivism provides no solid systematic standpoint or core of moral concerns from which to criticize or oppose any particular patterns of technical development.

End quote. And to be absolutely clear, the current developments of AI tools around the globe are absolutely technological megaprojects. We discussed this back in episode 12 when we looked at Nick Bostrom’s work on superintelligence. So as this global race to develop AI or AGI takes place, it would serve us well to have a theory of technology that allows us to provide some critique.

Now, that Steve Woolgar guy that Winner was critiquing had a writing partner back in the seventies, when they started looking at science from an anthropological perspective in their study of laboratory life: he wrote that with Bruno Latour. And Latour was also working with another set of theorists who studied technology as a black box, in what was called Actor-Network Theory.

And that had a couple key components that might help us out. Now, the other people involved were John Law and Michel Callon, and I think we might have mentioned both of them before. But one of the basic things about Actor-Network Theory is that it looks at the things involved in a given technology symmetrically.

That means it doesn’t matter whether it’s an artifact, or a creature, or a set of documents, or a person, they’re all actors, and they can be looked at through the actions that they have. Latour calls it a sociology of translation. It’s more about the relationships between the various elements within the network rather than the attributes of any one given thing.

So it’s the study of power relationships between various types of things. It’s what some theorists would call a flat ontology, but I know as I’m saying those words out loud I’m probably losing, you know, listeners by the droves here. So we’ll just keep it simple and state that a person using a tool is going to have a normative expectancy about how it works. Like, they’re going to have some basic assumptions, right? If you grab a hammer, it’s going to have a handle and a head, and depending on its size or its shape or material, that might, you know, determine its use. It might also have some affordances that suggest how it could be used. But generally that assemblage, that conjunction of the hammer and the user, I don’t know, we’ll call him Hammer Guy, is going to be different than a guy without a hammer, right? We’re going to say, hey, Hammer Guy, put some nails in that board there, put that thing together, rather than, you know, please hammer, don’t hurt ’em, or whatever.

All right, I might be recording this too late at night, but the point is that people with tools will have expectations about how those tools get used. Some of that goes into how the tools are constructed, and it can be shaped by the construction of the technology, but it can also be shaped by our relation to that technology.

And that’s what we’re seeing with AI, as we argued way back in episode 12: you know, AI is an assistive technology. It does allow us to do certain things and extends our reach in certain areas. But here’s the problem. Generally, we can see what kind of condition the hammer’s in, and we can have a good idea of how it’s going to work for us, right?

But we can’t say that with AI. We can maybe trust the hammer, or the tools that we’ve become accustomed to using through practice and trial and error, but AI is both too new and too opaque. The black box is so dark that we really don’t know what’s going on, and while we might put in the inputs, we can’t trust the output.

And that brings us to the last part of our story.

In the previous section, the authors that we were mentioning, Latour and Woolgar, like Winner, Pinch, and Bijker, are key figures not just in the study of technology, but also in the philosophy of science. Latour and Woolgar’s Laboratory Life, from 1979, is a key text; it really sent shockwaves through the whole study of science and is a foundational text within that field.

And part of what you recognize, even from a cursory glance, once you start looking at science from an anthropological point of view, is the unique relationship that scientists have with their instruments. The author Inkeri Koskinen sums up a lot of this in an article from 2023, terming the relationship that scientists have with their tools the necessary trust view.

Quote, trust is necessary because collective knowledge production is characterized by relationships of epistemic dependence. Not everything scientists do can be double-checked. Scientific collaborations are in practice possible only if its members accept each other’s contributions without such checks. Not only does a scientist have to rely on the skills of their colleagues, but they must also trust that the colleagues are honest and will not betray them, for instance, by intentionally or recklessly breaching the standards of practices accepted in the field, or by plagiarizing them or someone else. End quote.

And we could probably all think of examples where this relationship of trust is breached, but the point is that science, as it normally operates, relies on relative levels of trust between the actors that are involved, in this case scientists and their tools as well. And that’s embedded in practice throughout science: the ideas of peer review, of reproducibility, of verifiability.

It’s part of the whole process. But the challenge is, especially for large projects, that you can’t know how everything works. So you’re dependent in some way on the materials or products or tools that you’re using having been verified or checked by at least somebody else that you have that trust with. And this trust is the same that a mountain climber might have in their tools, or an airline pilot might have in their instruments.

You know, trust, but verify, because your life might depend on it. And that brings us all the way around to our black boxes that we started the discussion with. Now, scientists lives might not depend on that trust the same way that it would with airline pilots and mountain climbers, but, you know, if they’re working with dangerous materials, it absolutely does, because, you know, chemicals being what they are, we’ve all seen some Mythbusters episodes where things go foosh rather rapidly.

But for most scientists, Koskinen notes, this trust in their instruments is really kind of a quasi-trust, in that they have normative expectations about how the tools they use are going to function. And moreover, this quasi-trust is based on rational expectations; it is rationally grounded.

And this brings us back full circle. How does your AI work? Can you trust it? Is that trust rationally grounded? Now, this has been an ongoing issue in the study of science for a while, as computer simulations and related tools have become a bigger and bigger part of the way science is conducted, especially in certain disciplines.

Now, the philosopher Paul Humphreys’ argument is that, quote, computational processes have already become so fast and complex that it was beyond our human cognitive capabilities to understand their details. End quote. Basically, computationally intensive science is more reliant on the tools than ever before, and those tools are what he calls epistemically opaque.

That means it’s impossible to know all the elements of the process that go into the knowledge production. So this is becoming a challenge for the way science is conducted, and it goes back to well before the release of ChatGPT. Much of the research that Koskinen is citing comes from the 2010s: research that’s heavily reliant on machine learning or on, say, automatic image classifiers, in fields like astronomy or biology, which have been finding challenges in the use of these tools.

Now, some are arguing that even though those tools are opaque, they’re black-boxed, they can be relied upon, and their use is justified, because we can work on the processes surrounding them. They can be tested, verified, and validated, and thus a chain of reliability can be established. This is something that some authors call computational reliabilism, which is a bit of a mouthful for me to say, but it’s basically saying that the use of the tools is grounded through validation.
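
As a minimal sketch of what that grounding-through-validation can look like (a toy Python example with hypothetical names, not a formal account of computational reliabilism):

```python
# Treat the model as a black box: we can't inspect its internals,
# but we can check its outputs against cases with known answers.
def validate(black_box, known_cases, tolerance=0.05):
    failures = [(x, expected, black_box(x))
                for x, expected in known_cases
                if abs(black_box(x) - expected) > tolerance]
    return len(failures) == 0, failures

# Stand-in for an opaque tool we didn't write and can't open up.
opaque_model = lambda x: 2.0 * x + 0.01

ok, failures = validate(opaque_model, [(1, 2.0), (2, 4.0), (3, 6.0)])
print("within acceptable boundaries:", ok)  # True here: errors stay under tolerance
```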

Basically, it’s performing within acceptable boundaries for whatever that field is. And this gets at the idea of thinking of the scientist as not just the person themselves, but also their tools. So they’re an extended agent, the same as, you know, the dude with the hammer that we discussed earlier. Or Chainsaw Man.

You can think about how they’re one and the same. One of the challenges there is that when a scientist is familiar with the tool, they might not be checking it constantly, you know, so it might start pushing out some weird results unnoticed. So it’s hard to reconcile the trust we have in that combination of scientist and AI.

They become, effectively, a black box. And this issue is by no means resolved. It’s still early days, and it’s changing constantly. Weekly, it seems, sometimes. And to show what some of the impacts of AI might be, I’ll take you to a 1998 paper by Martin Weitzman. Now, this is in economics, but it’s a paper that’s titled Recombinant Growth.

And this isn’t the last paper in my database that mentions black boxes, but it is one of them. What Weitzman is arguing is that when we’re looking at innovation, R&D, or knowledge production, it’s often treated as a black box. And if we look at how new ideas are generated, one way is through the combination of various elements that already exist.

If AI tools can take a much larger set of existing knowledge, far more than any one person or even a team of people can bring together at any one point in time, and put those elements together in new ways, then the ability to come up with new ideas far exceeds what exists today. This directly challenges a lot of the current arguments going on about AI and creativity, arguments that completely miss the point of what creativity is and how it operates.

Weitzman states that new ideas arise out of existing ideas in some kind of cumulative interactive process. And we know that there’s a lot of stuff out there that we’ve never tried before, so the field of possibilities is exceedingly vast, and the future of AI-assisted science could potentially lead to some fantastic discoveries.
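
To get a feel for that combinatorial point in rough numbers (a toy calculation, not Weitzman’s actual model), consider just the pairwise combinations available from a growing stock of existing ideas:

```python
from math import comb

# The number of distinct pairs grows roughly with the square of the
# idea stock, which is the intuition behind recombinant growth.
for n in (10, 100, 1_000, 10_000):
    print(f"{n:6,d} ideas -> {comb(n, 2):,} possible pairings")
```

And that’s only pairs; allowing combinations of three or more ideas makes the space grow faster still.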

But we’re going to need to come to terms with how we relate to the black box of scientist and AI tool. And when it comes to AI, our relationship to our tools has not always been cordial. In our imagination, from everything from Terminator to The Matrix to Dune, it always seems to come down to violence.

So in our next episode, we’re going to look into that, into why it always comes down to a Butlerian Jihad.

Once again, thanks for joining us here on the Implausipod. I’m your host, Dr. Implausible, and the research and editing, mixing, and writing has been by me. If you have any questions or comments, or there are elements you’d like us to go into additional detail on, please feel free to contact the show at drimplausible at implausipod.com. And if you made it this far, you’re awesome. Thank you. A brief request: there’s no advertisement, no cost for this show, but it only grows through word of mouth. So, if you like this show, share it with a friend, or mention it elsewhere on social media. We’d appreciate that so much. Until next time, it’s been fantastic.

Take care, have fun.

Bibliography:
Bijker, W., Hughes, T., & Pinch, T. (Eds.). (1987). The Social Construction of Technological Systems. The MIT Press. 

Koskinen, I. (2023). We Have No Satisfactory Social Epistemology of AI-Based Science. Social Epistemology, 0(0), 1–18. https://doi.org/10.1080/02691728.2023.2286253 

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press. 

Latour, B., & Woolgar, S. (1979). Laboratory Life: The construction of scientific facts. Sage Publications. 

Pierce, D. (2024, January 9). The Rabbit R1 is an AI-powered gadget that can use your apps for you. The Verge. https://www.theverge.com/2024/1/9/24030667/rabbit-r1-ai-action-model-price-release-date 

rabbit—Keynote. (n.d.). Retrieved February 25, 2024, from https://www.rabbit.tech/keynote 

Sutter, P. (2023, October 4). AI is already helping astronomers make incredible discoveries. Here’s how. Space.Com. https://www.space.com/astronomy-research-ai-future 

Weitzman, M. L. (1998). Recombinant Growth. The Quarterly Journal of Economics, 113(2), 331–360. https://doi.org/10.1162/003355398555595

Winner, L. (1993). Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology. Science Technology & Human Values, 18(3), 362–378. 

Upcoming Trends

With CES wrapping up in Las Vegas this weekend, I’ve been seeing lots of reports of the new technologies that have been on display. I’ve never been, but I think it might be something to take in one of these years.

I want to find a decent article and add my commentary on it, but I haven’t quite seen one I want to use yet.

The Verge has some decent coverage here:

https://www.theverge.com/24026787/ces-best-of-samsung-ballie-lg-tv

Which talks about the new Transparent TV from LG:

and I think that may be remarkable enough to talk about on its own.

But it’s been a long, cold day, with the outside temp staying below -30 C for most of it, and I’ve just been trying to keep warm. I’ll follow up with a full write-up (and perhaps an episode if I’m inspired), and we’ll see what comes of it.

Implausipod E0017 – Not a Techno-optimist

Introduction:

If you had asked me on October 15th, 2023, to describe myself, I might have said I was a techno-optimist. But on October 16th, Marc Andreessen, the co-founder of Netscape, released the Techno-Optimist Manifesto, and I can no longer say that I’m a techno-optimist.

In this episode we’ll walk through the quick scan of the document, and the red flags that it raised while looking through it, and where some of the problems lay in the underlying assumptions of the manifesto.

https://www.implausipod.com/1935232/13859916-implausipod-e0017-not-a-techno-optimist


Transcript:

Technology. If you’ve listened to this podcast for more than a few episodes, you realize that that’s one of the underlying themes here, that I’m interested in technology, how it appears in popular culture, how it’s developed, it’s what I’ve researched, written about, taught about, and I think about it a lot.

I think about its promise and potential and what it can offer humanity. And if you had asked me on October 15th, 2023, to describe myself, I might have said I was a techno-optimist. But on October 16th, Marc Andreessen, the co-founder of Netscape, released the Techno-Optimist Manifesto, and I can no longer say that I’m a techno-optimist.

I’ll explain why in this episode of the Implausipod.

When the manifesto was originally published, I gave it a quick scan, and that scan raised a number of red flags. And throughout the rest of this episode, we’ll look at those red flags as if they were laid down by a surveyor on the landscape. But before we do, I want to go into the value of giving something a quick scan, of jotting down your initial impressions. 

I’m going to employ another surveyor’s tool, that of triangulation: homing in on the target by looking at it from different angles and directions, from different points of view. Because, as we talked about a few episodes ago, that empathetic view of technology requires triangulation, being able to step outside of one’s own perspective and view it from the perspective of somebody else. And this can be done both for things we find positive and for things that we find negative.

So as is tradition, we’re going to talk about something by chasing down a couple tangents first before we get back to those red flags. But bear with me, it’ll all kind of come together at the end.

So when it comes to the Techno-Optimist Manifesto, the thing that really struck me was the ability to identify those red flags, to spot them, to pull them out of the larger text. (And it was a 5,200-word text; there was a lot going on in there.) But I think identifying these red flags speaks to something larger: the ability of experts, or people heavily involved in a field, to identify key elements or themes and figure out where a problem might lie. It doesn’t matter which field it’s in, whether it’s a mechanic or medical doctor, academic or art historian.

And if that last one rings a bell, it’s because there’s a source for it. In his 2005 book Blink, Malcolm Gladwell talked about the process by which an art historian was able to evaluate a statue that was brought into the Getty Museum. At a glance, the evaluators were able to identify key features that led them to believe that it was a forgery, that the statue in question had never actually been in the ground and subsequently recovered. It’s the ability to spot the minutiae of a given artifact or piece of art, and through long experience and knowledge and exposure, be able to determine its authenticity, the validity of a piece of work. And again, this isn’t just an academic thing. It goes across so many fields, crafts, trades, and practices. It’s a key, essential element of them.

And to link it back to the ongoing discussions about AI, it’s one of those things that AI generated texts or artifacts often lack. It’s that authenticity. We can sense that there’s something off about the piece. As the saying goes, we can tell by the pixels. So this assemblage of tools that we have, the skills and knowledge and practice and experience, all come together to form what we might call a set of heuristics.

It’s similar to what Kenneth Burke calls equipment for living, where he’s referring to how literature and proverbs function in a way similar to the memes we talked about last episode. These are the tools that we can use to judge something, and how we come to an assumption about what we’re seeing in front of us. We do this for pretty much everything, but when it’s something particular to our skill set or our area, then we can make some judgments about it.

And when it comes to those particular topics, perhaps we have a duty to communicate that information, to share that knowledge with the world around us. So that’s what we’re going to get into here with the techno-optimist discussion and the red flags, because I’ve read a lot of the texts that Andreessen cites within the manifesto but obviously have a radically different worldview, and we can discuss why we might come up with those radically different interpretations at the end.

But before we do, I want to throw one more point into the mix, one more element or angle for our triangulation on the topic at hand, something I like to call the Forest Hypothesis. Now, this is different than the Dark Forest Hypothesis, where we, as a species, are tiny mice in a universe filled with predators lurking in the darkness (which we’ll touch on next episode). Rather, the Forest Hypothesis relates back to the Blink idea, in that it’s a way of evaluating knowledge, of evaluating expertise. The Forest Hypothesis basically asks how much you can talk about on a given subject if you’re out in the forest, away from any cell phone signal, Wikipedia, handheld device, book, or any other form of external knowledge, anything extrinsic to yourself.

And it’s a good test. There are people that can expound endlessly on stuff that they know about, and there are those who may be less comfortable discussing things online or in an academic setting, but who, when push comes to shove, actually do know things, and they don’t have to just reach for the Wikipedia on their phone. Now, the analogue to this is the bar talk phenomenon that we used to have, where no one had access to phones and we’d get into discussions about who could recall what. We could call it the Cliff Clavin Corollary, right, where we’re not necessarily sure in the moment, but we can use those rhetorical strategies to ask: “eh, does that sound right to you, or are you just, like, making that up?”

And in the interest of full disclosure, much of the rest of the episode about the red flags came from two conversations that I had with different sets of colleagues about the Techno-Optimist Manifesto and the material espoused within. So much of the rest of this episode is going to be me recreating those discussions and talking off the top of my head as best I can. I’ll refer back to specific elements, but without further ado: why I am not a techno-optimist.

So, as stated, the Techno Optimist Manifesto was published on the morning of October 16th, 2023. During the day, it started making the rounds on social media, on Mastodon, and elsewhere, and I saw numerous links to it, so I thought I’d dive in and give it a quick look. There’s been articles written about it since, in the intervening ten days or so, but I want to really just capture my thoughts that I had at the time.

I had jotted them down and had them in conversations with colleagues, as stated. So flipping through the manifesto, I kind of gave it a high level skim and a couple of things started to pop out. And these were the red flags that started to be a cause for concern. The first of those was some of the works cited. Now, one of those heuristics that we talked about earlier that you can use whenever you’re evaluating an article is kind of, you read it from the front, you read it from the back, and then you can read the meat of the article itself, which means take a look at the abstract or the introduction, and then take a look at some of the authors they’re citing, because if you’re familiar with them, that can give you an idea of where the conversation’s going to go.

But with respect to the manifesto itself, early on in the work Marc Andreessen starts referring to a number of economists that were influences for the work he was producing. The first one he mentions is Paul Collier, who wrote an influential book called The Bottom Billion, which talks about development in the global south. There’s nothing really wrong there; he’s going into some interesting information about what’s happening in the developing economies around the world.

But then Andreessen goes on to cite Friedrich Hayek and Milton Friedman as influences. Now, at a glance, these are, you know, well-known and respected economists, Hayek in particular for his work on the knowledge problem. But both of them were influential in other ways, and drove the policy of the Thatcher governments in the UK in the 70s and 80s, as well as the Reagan administrations in the U.S. in the 1980s, so they had a very neoliberal bent, and a lot of the underlying ideology from their economic works is what we still see in policy circles today. Taking a look at the state of the world and the economic system, we may want to question those underlying influences, and seeing them in this manifesto raises some red flags again. Now, some economists, like, say, Tyler Cowen, would recently have included both Hayek and Friedman as part of the greats of all time, and again, I’m not disputing this: they have had a massive influence. But those influences can have outsized effects for millions and billions of people across the world.

Some of the other elements that showed up as red flags in Marc Andreessen’s work were in the beliefs section of the manifesto. And, just a quick second: whenever you declare something a manifesto, that in and of itself is a red flag; it’s a cause to look at the document from a particular point of view, to go through it with a fine-tooth comb.

A manifesto can be seen as an operating manual, like “this is what we’re working with; these are our stated assumptions,” and sometimes getting that down on paper is fine. It gives you a target that you can refer back to. But when we see a manifesto, we also want to look at it with a greater degree of incredulity, to dig a little deeper on what’s included therein.

So in the manifesto, there’s this section of beliefs that Marc Andreessen goes through, where each sentence starts “we believe that dot dot dot”. And beliefs are fine; there’s nothing wrong with having beliefs. But it’s when we have beliefs that are contrary to evidence that it can become a problem, and in the beliefs section, you see a lot of these statements where the belief is contrary to the evidence.

One of the things he says is that they believe energy should be too cheap to meter, and that if you have widespread access to energy that’s too cheap to meter, then that can be a net societal good. By and large, I agree. Now, the method they propose to get there is part of the problem. They say that through nuclear fission, they will be able to achieve energy that’s too cheap to meter. And this is part of the problem, because nuclear fission alone will not get there. Consider the massive environmental costs of nuclear fission and of the plants that currently exist. I’m referring here to an article on phys.org from 2011 that I still remember; it was basically saying that at the time, in 2011, there were 440 existing nuclear fission reactors that supplied a portion of the world’s energy. To supply the full energy demand through nothing but nuclear, we would need roughly 15,000 additional nuclear reactors, with all the associated costs and fissile material, and they’d still be putting out, you know, the heat and the steam that is released from nuclear reactors. So there would still be a massive environmental cost from transitioning to that source, and it would require building something like ten reactors a day, every day, for about half a decade to get close to those numbers.

There’s no way for us, as a society, to build up that kind of capacity through nuclear fission alone. And yet Andreessen states that this would allow us to provide energy too cheap to meter, so that we could move away from an oil and gas economy. The actual path runs through more passive elements like solar, wind, and alternative energy sources; nuclear fission will not get there. And using nuclear fission to accelerate us into nuclear fusion is also a problem, in that nuclear fusion has always been some mythical target 20 or 30 years down the road, and much like AGI, it seems to always be off in the future. We’re never quite getting to that point. So citing it as a goal is necessarily a bit of a problem.
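
As a quick back-of-the-envelope check on those numbers (using the figures as given above; this is my sketch, not from the phys.org article):

```python
# Rough check: how long would it take to build 15,000 reactors
# at a hypothetical pace of ten per day?
reactors_needed = 15_000
build_rate_per_day = 10

days = reactors_needed / build_rate_per_day
print(f"{days:.0f} days, or about {days / 365:.1f} years")  # 1500 days, ~4.1 years
```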

We’re barely getting started and we’re already three flags in. Now, the next one is that in this same area, they also self-identify as apex predators. Earlier on, he draws a comparison to sharks in nature: move or die. And that ties into this apex predator bit later on, where he says that they are predators, that they are able to make the lightning “work for us”. It moves directly from there to a return to the “great man theory”, lionizing the technologists and industrialists who came before. Hmmm. Really? Do tell. Whenever you’re self-identifying as a predator, that’s just a massive red flag, a warning sign.

And I want to be clear that there are aspirational elements to the work; it's just that the aspirational elements are like flowers in a garden filled with these bright red flags.

I can get behind the aspirational elements, but even some of those have a massive disconnect with reality. They see the Earth as having a carrying capacity of around 50 billion humans. We can barely manage with the 8 billion we currently have, and we're massively overusing the resources available, to the tune of requiring three Earths' worth at current consumption rates. The Earth may be able to support 50 billion humans, but that would require a massive change in organization and resource usage, and it would result in horrible inequalities across huge swaths of that 50 billion, with a very select few having anything close to the living standards we have now, or that are seen across much of the OECD nations, let alone the globe as a whole.
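(Again as a rough check, using those same figures, and assuming consumption simply scales linearly – a big assumption, to be sure: if 8 billion people require three Earths' worth of resources, then 50 billion would need on the order of 50 ÷ 8 × 3 ≈ 19 Earths' worth at current consumption rates.)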

We see a number of other aspirational elements, other flowers in the garden, in quotes from Richard Feynman, Buckminster Fuller, and others, with odes to the transformative power of science to enlighten us and provide answers to the mysteries of the world around us.  But this also comes hand in hand with a de-legitimizing of expertise, using the Feynman quote to propose a return to the “actual scientific method” using “actual information”.  Whenever we start seeing echoes of the No True Scotsman fallacy in a text, making distinctions about what counts, once again, red flag.  Actual information? Who decides?  Isn’t that what science is about?

And from there, Andreessen leans heavily into accelerationism. Again, this is a massive flag for me personally: whenever someone self-identifies as an accelerationist, I start to seriously question everything they're talking about.

Accelerationism is basically the belief that what capitalism really needs is for the pedal to be pressed all the way down to the floor, so that we can actually hit "escape velocity" and move quicker along the curve towards the singularity, or whatever.

If you consider technological development as a growth curve, then the only way to get higher up it is to go faster. Now, if you look at Geoffrey Moore's work on innovation in Crossing the Chasm, which is an adaptation of Rogers' work on the diffusion of innovations and the innovation adoption curve, there's a point low down on the slope where any new technology will succeed or fail. If I do the video version of this, we'll put it on the screen, but basically, at the low end of the slope there's this little gap, which Moore calls the chasm. The chasm is where the innovators and early adopters have picked up the new tech, and then you're trying to take that product, that technological tool or artifact, out to the larger market, to get widespread adoption and see if it flies. We've seen it with things like virtual reality, DVDs, home video recording, smartphones, whatever: there's a point where the product might be under development for a while, and then the larger population says, okay, we can use this, and they adopt it. And then it sees widespread distribution.
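(For reference – these figures are from Rogers' model, not from the manifesto – the adopter categories break down roughly as innovators at 2.5%, early adopters at 13.5%, early majority at 34%, late majority at 34%, and laggards at 16%. Moore's chasm sits right after that first 16% or so, between the early adopters and the early majority.)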

Accelerationism views that chasm as a challenge, and it views tech more generally the same way: like we said, capitalism just really needs more gas, more fuel. The problem is that you obviously can't tell what's going to take off, what's going to get adopted – you can't make "fetch" happen, even if you're a billionaire – and there are a lot of problems when you start going that fast with no brakes. If the road starts to swerve ahead of you, you might not be able to change direction in time, and this is where the other side of accelerationism comes in.

You see, accelerationism isn't necessarily something that's either left or right; there are accelerationists on both sides of the political spectrum. On the right, there's the pro-capitalist, pro-tech version seen here, along with other right-wing accelerationists besides – you can go check out the Wikipedia page to see what other groups are associated with it. There are also accelerationists on the left, who view capitalism as inherently unstable and want to see it go faster, because that will expose the iniquities in the system and help it go off the rails, so something better can be rebuilt.

You see this in the works of Slavoj Žižek and other academics on the left. Žižek himself is kind of… um, mid, I guess, but you'll see it amongst critics of capitalism who also want it to go faster. There's a problem with both of these perspectives, and it's the problem I have with accelerationism generally: it is the perspective of a tiny elite minority, and it would result in massive amounts of pain for millions and billions of people while that acceleration resolves itself.

While things are going faster, more fuel is getting added to the system – the climatic change that we see is more fuel literally being added to the system. And the disruption we can see happening, if the current systems we have are disrupted, would cause starvation, job loss, and untold pain and suffering. So, from my perspective of causing the least pain, of not wanting to see humanity as a whole suffer, accelerationism is necessarily a bad thing. Let's find a different way.

Now, this is about the point where the Techno-Optimist Manifesto gets into its list of enemies as well, and while that may or may not be typical for a manifesto, I think whenever you're writing something and you have an enemies list, you know, that's a warning flag in and of itself.

Now, amongst the enemies of the techno-optimist are things like sustainability, sustainable development, social responsibility, trust and safety, tech ethics, degrowth, and others besides. And when you start to look at who your enemies are, what you're against, then you start wondering what you're really for, right? The concern here is that any kind of regulation or responsible governance is seen as an enemy, as something to be combated, to be avoided, to be dealt with. And aside from being a massive red flag, it reveals some of the underlying ethos as well.

This is what they’re against. They’re against regulation, things that were put in place for safety, for ethical use, for management, for sustainability, for our continued existence on the planet. And these are things they’re against. And I think that is, again, a massive warning sign. And from there we get to the last one.

The last red flag is a quote that comes up near the end. Now, the quote is uncited, unattributed; we don't see the conviction to actually state who it's from, because that might make it a little too obvious. The quote is as follows:

“Beauty exists only in struggle. There is no masterpiece that has not an aggressive character. Technology must be a violent assault on the forces of the unknown, to force them to bow before man.”

That quote is from Filippo Marinetti, from the Futurist Manifesto he wrote in 1909. If you're not familiar with Marinetti, here's the lowdown, and it'll highlight the problem. Like I said, the quote was uncited, but if you know who Marinetti is, and the story around him, then it's the biggest warning flag in the entire document, out of the entire list of warning flags we've already seen. Marinetti, of course, was the founder of the Futurist school in Italy.

Here’s some of the elements of futurism: technology, growth, industry, modernization. Okay, but also these other elements: speed, violence, destruction of museums, war as a necessity for purification… Hmmm. Now, Marinetti would go on to get into politics in Italy a few years later, and work with another group of Italians on another manifesto in 1919. That, of course, is the Fascist Manifesto, which he co-authored. So there’s a direct lineage from Marinetti’s work and elements of it that appear in that later manifesto and the works that that was adopted to as well.

If we take all these things, all these red flags, together – a list of neoliberal economists, denialism and beliefs contrary to facts while downplaying education, self-identifying as predators, accelerationism, lists of enemies, and citing proto-fascist literature – all of this combined is a massive red flag, and it's why I am not a techno-optimist.

So, that being said, how would I identify?

And that’s a fantastic question, because judging on the words alone, “Technical Optimist” is pretty close to where I am. I believe that technology can be used as an assistive tool, as we’ve stated prior, and that it can help people out, and is generally an extension of man, that we can use it for adding to our capabilities.

So I might be a techno-optimist, or at least I was until October 16th, 2023. Other terms I've seen floating around that I could self-identify as include things like "techno-revivalist", which is close, but not quite. That feels like it ties more into experimental archaeology, where we try to recreate the past, or use methods of the past in the modern era, to figure out what people were doing back then. It's a fascinating field, and we should talk about it sometime, but that isn't really where I am.

Solarpunk isn’t quite where I’m at either because, well, or cyberpunk either. I don’t think I’m really fit within any of the punk genres.  I’m pretty straight-laced. I’m a basic B to be honest.

Taking the opposite stance doesn't work either; I'm not a techno-pessimist. I'm generally hopeful about the opportunities that new technologies can bring. And that's part of the challenge: there isn't a good label for where I sit, aside from what is now defined as a techno-optimist. And I don't think that term can be reclaimed, because, as I went through the number of red flags there, the well is really, well… well and truly poisoned. With the breadth of reach that particular manifesto got, and the reporting it saw in multiple areas, I don't think the term will ever come back – even though, much like Michael Bolton in Office Space, why should I change if they're the ones who suck, right?

So I think techno-optimist as a term is stuck where it is, and that will not change. But I am almost anything but that. And why? Well, part of it, I think, is just exposure and upbringing. As I said, I've had a significantly different path: one that doesn't lead through Silicon Valley, one that's not even in the same solar system as a billionaire.

When you have to go about the business of daily life, when you’re almost middle class, you’re going to have a very different view of technology and its uses, and how it can be used for exploitation as well. And I think that comes through in some of our work.

So, to tie this back to the beginning, to close the loop on why we had to triangulate with examples before we could assay the manifesto: if exposure and experience are what allow one to make quick judgments about a particular work and see where its references are coming from, they can also allow one to see some of the harms that might come about from exposure to those statements as well. And that's really what we're trying to do here: bring some of those associations to light through this particular podcast episode.

So I'm still searching for a term: retro-tech enthusiast, just tech enthusiast perhaps, media historian, media archaeologist, etc. I'll keep working on it. And once we figure it out – and the figuring it out is, I think, going to be the journey of this podcast as a whole – I'll let you know.

But if you have any great suggestions in the meantime, reach out and let me know at drimplausible at implausipod.com, or on the implausi dot blog. I'll see you around. Take care.