Recombinant Innovation

Someone shared a link with me today, to a video showing off the newest Samsung Galaxy S24 Ultra:

It’s fantastic technology, showing off the ability to translate (almost) live between Korean and English, functioning as a middleman, or middleware, between the Sender and Receiver in the communication channel, reducing the overall noise in the system (in this case, the difficulty of bridging two very different languages).

But in discussion today, we noted how simple some of the various components are: the translation capability already exists at both ends, in Google Translate or any of a number of dedicated devices.

And this is the point: there are very few completely new things in the world. Most new things are combinations of existing things.

The interesting thing is how you put them together.

Hence, recombinant innovation.

The ability of these innovations to reduce friction (or appear to, at least; there can often be a dark side to it as well) can determine how well these tools get adopted. It depends on whether people can see themselves using it.

And in this, the YouTube video above is very effective: we can readily picture ourselves in that situation, needing to make a reservation at a restaurant, and deciding that this would work for us. And from there it’s a quick jump to see how we could use it in other areas of our lives: talking with loved ones, or their families, being able to speak directly (or with at most a quick pause), and share with them too.

There’s a lot of magic in our innovations. Let’s start the discussion.

Implausipod E0017 – Not a Techno-optimist

Introduction:

If you had asked me on October 15th, 2023, how I would describe myself, I might have said I was a techno-optimist. But on October 16th, Marc Andreessen, the co-founder of Netscape, released the Techno-Optimist Manifesto, and I can no longer say that I’m a techno-optimist.

In this episode we’ll walk through a quick scan of the document, the red flags it raised, and where some of the problems lie in the manifesto’s underlying assumptions.

https://www.implausipod.com/1935232/13859916-implausipod-e0017-not-a-techno-optimist


Transcript:

Technology. If you’ve listened to this podcast for more than a few episodes, you realize that that’s one of the underlying themes here: I’m interested in technology, how it appears in popular culture, how it’s developed. It’s what I’ve researched, written about, and taught about, and I think about it a lot.

I think about its promise and potential and what it can offer humanity. And if you had asked me on October 15th, 2023, how I would describe myself, I might have said I was a techno-optimist. But on October 16th, Marc Andreessen, the co-founder of Netscape, released the Techno-Optimist Manifesto, and I can no longer say that I’m a techno-optimist.

I’ll explain why in this episode of the Implausipod.

When the manifesto was originally published, I gave it a quick scan, and that scan raised a number of red flags. And throughout the rest of this episode, we’ll look at those red flags as if they were laid down by a surveyor on the landscape. But before we do, I want to go into the value of giving something a quick scan, of jotting down your initial impressions. 

I’m going to employ another surveyor’s tool, that of triangulation: homing in on the target by looking at it from different angles and directions, from different points of view. Because, as we talked about a few episodes ago, that empathetic view of technology requires triangulation: being able to step outside of one’s own perspective and view it from the perspective of somebody else. And this can be done both for things we find positive and for things we find negative.

So as is tradition, we’re going to talk about something by chasing down a couple of tangents first before we get back to those red flags. But bear with me, it’ll all kind of come together at the end.

So when it comes to the Techno-Optimist Manifesto, the thing that really struck me was the ability to identify those red flags, to spot them, to pull them out of the larger text. (And it was a 5200-word text; there was a lot going on in there.) But I think identifying these red flags speaks to something larger: the ability of experts, or people heavily involved in a field, to identify key elements or themes and figure out where a problem might lie. It doesn’t matter which field it’s in: whether it’s a mechanic or medical doctor, academic or art historian.

And if that last one rings a bell, it’s because there’s a source for it. In his 2005 book Blink, Malcolm Gladwell talked about the process by which art historians were able to evaluate a statue that was brought into the Getty Museum. At a glance, the evaluators were able to identify key features that led them to believe it was a forgery, that the statue in question had never actually been in the ground and subsequently recovered. It’s the ability to spot the minutiae of a given artifact or piece of art, and through long experience, knowledge, and exposure, be able to determine its authenticity, the validity of a piece of work. And again, this isn’t just an academic thing. It goes across so many fields, crafts, trades, practices. It’s a key, essential element of them.

And to link it back to the ongoing discussions about AI, it’s one of those things that AI-generated texts or artifacts often lack: that authenticity. We can sense that there’s something off about the piece. As the saying goes, we can tell by the pixels. So this assemblage of tools that we have, the skills and knowledge and practice and experience, all come together to form what we might call a set of heuristics.

It’s similar to what Kenneth Burke calls equipment for living, where he’s referring to the way literature and proverbs function, similar to the memes we talked about last episode. These are the tools that we can use to judge something, and how we come to an assumption about what we’re seeing in front of us. We do this for pretty much everything, but when it’s something particular to our skill or our particular area, then we can make some judgments about it.

And when it comes to those particular topics, perhaps we have a duty to communicate that information, to share that knowledge with the world around us. So that’s what we’re going to get into here with the techno-optimist discussion and the red flags, because I’ve read a lot of the texts that Andreessen cites within the manifesto, but obviously have a radically different worldview, and we can discuss why we might come up with those radically different interpretations at the end.

But before we do, I want to throw one more point into the mix, one more element or angle for our triangulation on the topic at hand, something I like to call the Forest Hypothesis. Now, this is different from the Dark Forest Hypothesis, where we, as a species, are tiny mice in a universe filled with predators lurking in the darkness (which we’ll touch on next episode). Rather, the Forest Hypothesis relates back to the Blink idea, in that it’s a way of evaluating knowledge, of evaluating expertise. The Forest Hypothesis basically asks how much you can talk about on a given subject if you’re out in the forest, away from any cell phone signal, Wikipedia, handheld device, book, or any other form of external knowledge, anything extrinsic to yourself.

And it’s a good test. There are people who can expound endlessly on the stuff they know about, and there are those who may be less comfortable discussing things online, or in an academic setting, but you know that when push comes to shove, they actually do know things, and they don’t have to reach for Wikipedia on their phone. Now, the analogue to this is the bar-talk phenomenon that we used to have, where no one had access to phones, and we’d get into discussions about who could recall what. We could call it the Cliff Clavin Corollary, right, where we’re not necessarily sure in the moment, but we can use those rhetorical strategies to ask: “eh, does that sound right to you, or are you just, like, making that up?”

And in the interest of full disclosure, much of what follows about the red flags came from two conversations that I had with different sets of colleagues about the Techno-Optimist Manifesto and the material espoused within. So much of the rest of this episode is going to be me recreating those discussions, talking off the top of my head as best I can. I’ll refer back to specific elements, but without further ado: why I am not a techno-optimist.

So, as stated, the Techno-Optimist Manifesto was published on the morning of October 16th, 2023. During the day, it started making the rounds on social media, on Mastodon, and elsewhere, and I saw numerous links to it, so I thought I’d dive in and give it a quick look. There have been articles written about it since, in the intervening ten days or so, but I want to really just capture the thoughts that I had at the time.

I had jotted them down and raised them in conversations with colleagues, as stated. So flipping through the manifesto, I kind of gave it a high-level skim, and a couple of things started to pop out. And these were the red flags that started to be a cause for concern. The first of those was some of the works cited. Now, one of those heuristics that we talked about earlier, one you can use whenever you’re evaluating an article, is that you read it from the front, you read it from the back, and then you read the meat of the article itself. Which means: take a look at the abstract or the introduction, and then take a look at some of the authors they’re citing, because if you’re familiar with them, that can give you an idea of where the conversation’s going to go.

But with respect to the manifesto itself, early on in the work Marc Andreessen starts referring to a number of economists who were influences for the work he was producing. The first one he mentions is Paul Collier, who wrote an influential book called The Bottom Billion, which talks about development in the Global South. There’s nothing really wrong there. He goes into some interesting information about what’s happening in the developing economies around the world.

But then Andreessen goes on to cite Friedrich Hayek and Milton Friedman as influences. Now at a glance, these are, you know, well-known and respected economists, Hayek in particular for his work on the knowledge problem. But both of them were influential in other ways, and drove the policy of the Thatcher governments in the UK in the 70s and 80s, as well as the Reagan administrations in the U.S. in the 1980s. So they had a very neoliberal bent to them, and a lot of the underlying ideology from their economic works is what we still see in policy circles today. Taking a look at the state of the world and the economic system, we may want to question those underlying influences, and seeing them in this manifesto raised some red flags again. Now, some economists, like, say, Tyler Cowen, would recently include both Hayek and Friedman as part of the greats of all time, and again, I’m not disputing this: they have had a massive influence. But those influences can have outsized effects for millions and billions of people across the world.

Some of the other elements that showed up as red flags in Marc Andreessen’s work were in the beliefs section of the manifesto. And just a quick second: whenever you declare something a manifesto, that in and of itself is a red flag, a cause to look at the document from a particular point of view, to go through it with that fine-tooth comb.

A manifesto can be seen as like an operating manual: “this is what we’re working with; these are our stated assumptions”. And sometimes getting that down on paper is fine. It gives you a target that you can refer back to. But when we see a manifesto, we also want to look at it with a greater degree of incredulity, to dig a little deeper on what’s included therein.

So in the manifesto, there’s this section of beliefs that Marc Andreessen goes through, where each sentence starts “we believe that dot dot dot”. And beliefs are fine, there’s nothing wrong with having beliefs, but it’s when we have beliefs that are contrary to evidence that it can become a problem. And in the beliefs section, you see a lot of these statements, where the belief is contrary to evidence.

One of the things he says they believe is that energy should be too cheap to meter, and that if you have widespread access to energy that’s too cheap to meter, then that can be a net societal good. And by and large, I agree. Now, the method they decide to get there is part of the problem. They say that through nuclear fission, they will be able to achieve energy that’s too cheap to meter. This is part of the problem, because nuclear fission alone will not get there. Aside from the massive environmental costs of nuclear fission, of the plants that currently exist (and I’m referring here to an article on phys dot org from 2011 that I still remember), at the time, in 2011, there were 440 existing nuclear fission reactors supplying, you know, a portion of the world’s energy. To supply the full energy demand through nothing but nuclear, we would need 15,000 additional nuclear reactors, with all the associated costs, the fissile material, the environmental costs, and they’d still be putting out, you know, the heat, the steam that is released from nuclear reactors. So there would still be a massive environmental cost from transitioning to that source, and that would require building, like, ten reactors a day, every day, for close to half a decade to get us to those numbers.
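For the record, the arithmetic on that build-out roughly checks. Here’s a back-of-the-envelope sketch in Python using only the figures above (the reactor counts are from that 2011 phys.org article; the ten-a-day pace is this episode’s own estimate):

```python
# Back-of-the-envelope check on the fission build-out claim above.
existing_reactors = 440        # global fleet circa 2011, per phys.org
additional_needed = 15_000     # reactors needed for full world demand
build_rate_per_day = 10        # the episode's hypothetical pace

days_needed = additional_needed / build_rate_per_day
years_needed = days_needed / 365

print(f"{days_needed:.0f} days, or about {years_needed:.1f} years, "
      f"at {build_rate_per_day} new reactors per day")
print(f"That's roughly {additional_needed / existing_reactors:.0f}x "
      f"the entire existing fleet.")
# -> 1500 days (~4.1 years), i.e. the "half a decade" in the text,
#    and about 34x the 2011 global fleet.
```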

There’s no way for us to, as a society, build up that kind of capacity through nuclear fission alone, and yet Andreessen states that this would allow us to provide energy too cheap to meter, so that we could move away from an oil and gas economy. The actual path runs through more passive elements like solar, wind, and alternative energy sources; nuclear fission will not get there. And using nuclear fission to accelerate us into nuclear fusion is also a problem, in that nuclear fusion has always been some mythical target 20 or 30 years down the road; much like AGI, it always seems to be off in the future. We’re never quite getting to that point. So citing that as a goal is necessarily a bit of a problem.

We’re barely getting started and we’re already three flags in. Now, the next one is that in this area, they also self-identify as apex predators. Earlier on, he draws a comparison to sharks in nature: move or die, and that ties into this apex predator bit later on. He says that they are predators, that they are able to make the lightning “work for us”. It moves directly from there to a return to the “great man theory”, lionizing the technologists and industrialists who came before. Hmmm. Really? Do tell. Whenever you’re self-identifying as a predator, that’s just a massive red flag, a warning sign.

And I want to be clear that there are aspirational elements to the work; it’s just that the aspirational elements are like flowers in a garden filled with these bright red flags.

I can get behind the aspirational elements, but even some of those have a massive disconnect from reality. They see the earth as having a carrying capacity of like 50 billion humans. We can barely manage with the 8 billion that we currently have, who are massively overusing the resources available, to the tune of requiring three earths’ worth at current consumption rates. And while the earth may be able to support 50 billion humans, that would require a massive change in organization and resource usage, and would result in horrible inequalities across massive portions of that 50 billion, with a very select few having anything close to the living standards that we have now, or that are seen across much of the OECD nations, let alone the globe as a whole.
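To put rough numbers on that disconnect, here’s a quick sketch that just scales the figures above (taking the “three earths at current consumption” claim at face value and holding per-capita consumption constant; both simplifications are mine, for illustration):

```python
# Scaling the footprint claim above: if 8 billion people consume
# ~3 Earths' worth of resources, what would 50 billion consume
# at the same per-capita rate?
current_population_bn = 8
earths_at_current_rate = 3
target_population_bn = 50

per_capita_earths = earths_at_current_rate / current_population_bn
earths_needed = per_capita_earths * target_population_bn

print(f"{earths_needed:.1f} Earths at today's per-capita consumption")
# -> 18.8 Earths, which is why "a massive change in organization
#    and resource usage" is doing so much work in that sentence.
```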

We see a number of other aspirational elements, other flowers in the garden, in quotes from Richard Feynman, Buckminster Fuller, and others, with odes to the transformative power of science to enlighten us and provide answers to the mysteries of the world around us.  But this also comes hand in hand with a de-legitimizing of expertise, using the Feynman quote to propose a return to the “actual scientific method” using “actual information”.  Whenever we start seeing echoes of the No True Scotsman fallacy in a text, making distinctions about what counts, once again, red flag.  Actual information? Who decides?  Isn’t that what science is about?

And from there, Andreessen leans heavily into accelerationism. And again, this is a massive red flag for me personally; whenever someone self-identifies as an accelerationist, I start to seriously question everything they’re talking about.

Accelerationism is basically the belief that what capitalism really needs is for the pedal to be pressed all the way down to the floor, so that we can hit “escape velocity”, quote-unquote, and move quicker along the curve towards the singularity or whatever.

If you consider technological development as a growth curve, then the only way to get higher up it is to go faster. Now, if you look at Geoffrey Moore’s work on innovation in Crossing the Chasm, which is an adaptation of Rogers’ work on the diffusion of innovations, on the innovation adoption curve, there’s a point where any new technology will succeed or fail, and it sits low down on the slope. If I do the video version of this, we’ll put this on the screen (there’s also a rough sketch of it below), but basically at the low end of the slope there’s this little gap, which Moore calls the chasm. And that chasm is where the innovators and early adopters have picked up the new tech, and you’re trying to take that product, that technological tool or artifact, out to the larger market, to get widespread adoption, and see if it flies. We’ve seen it with things like virtual reality or DVDs or home video recording, smartphones, whatever. There’s a point where the product might be under development for a while, and then the larger population says, okay, we can use this, and they adopt it. And then it sees widespread distribution.
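For the curious, here’s a minimal sketch of that adoption curve in Python. The segment percentages are Rogers’ standard adopter categories; placing the chasm between the early adopters and the early majority is Moore’s addition:

```python
# Rogers' adopter categories, with Moore's "chasm" marked.
segments = [
    ("Innovators",     2.5),
    ("Early adopters", 13.5),
    # --- Moore's chasm sits here, at ~16% cumulative adoption ---
    ("Early majority", 34.0),
    ("Late majority",  34.0),
    ("Laggards",       16.0),
]

cumulative = 0.0
for name, share in segments:
    cumulative += share
    bar = "#" * round(share)
    print(f"{name:<15} {share:4.1f}%  cumulative {cumulative:5.1f}%  {bar}")
# A product that can't cross from ~16% into the early majority
# stalls in the chasm, no matter how fast its backers want to go.
```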

Accelerationism views that chasm as a challenge, and views tech more generally the same way: like we said, capitalism just really needs more gas, more fuel. The problem with it is that you obviously can’t necessarily tell what’s going to take off, what’s going to get adopted. You can’t necessarily make “fetch” happen, even if you’re a billionaire, and there are a lot of problems when you start going that fast with no brakes. If the road starts to swerve ahead of you, you might not be able to change direction in time, and this is where the other side of accelerationism comes in.

You see, accelerationism isn’t necessarily something that’s either left or right. There are accelerationists on both sides of the political spectrum. There are accelerationists on the right, like the pro-capitalist, pro-tech version seen here. There are other accelerationists on the right as well, and you can go check out the Wikipedia page to see what other groups are associated with it. There are also accelerationists on the left, who view capitalism as inherently unstable and want to see it go faster, because that will expose the inequities in the system and help it go off the rails so something better can be rebuilt.

You see this in the works of, like, Slavoj Zizek and other academics on the left. Zizek himself is kind of… um, mid, I guess, but you’ll see it amongst those critics of capitalism who also want it to go faster. There’s a problem with both these perspectives, and it’s the problem I have with accelerationism generally: it is the perspective of a tiny elite minority, and it would result in massive amounts of pain for millions and billions of people while that acceleration resolves itself.

While things are going faster, more fuel is getting added to the system; you know, the climate change that we see is because of more fuel literally being added to the system. The disruption alone would cause starvation, job loss, and untold pain and suffering if the current systems we have are upended. And so, from my perspective of doing the least harm, of not wanting to see humanity as a whole suffer, accelerationism is necessarily a bad thing. Let’s find a different way.

Now, this is about the point where the Techno-Optimist Manifesto gets into its list of enemies as well, and while that may or may not be typical for a manifesto, I think whenever you’re writing something and you have an enemies list, you know, that’s a warning flag in and of itself.

Now, amongst the enemies for the techno-optimist are things like sustainability, sustainable development, social responsibility, trust and safety, tech ethics, de-growth, and others besides. And when you start to look at who your enemies are, what you’re against, then you start wondering what you’re really for, right? So the concern here is that any kind of regulation or responsible governance is seen as an enemy, as something to be combated, to be avoided, to be dealt with. And aside from being a massive red flag, it reveals some of the underlying ethos as well.

This is what they’re against: regulation, things that were put in place for safety, for ethical use, for management, for sustainability, for our continued existence on the planet. And I think that is, again, a massive warning sign. And from there we get to the last one.

The last red flag is a quote that comes up near the end. Now, the quote is uncited, unattributed. We don’t see the conviction to actually state who it’s from, because that might actually make it too obvious. The quote is as follows:

“Beauty exists only in struggle. There is no masterpiece that has not an aggressive character. Technology must be a violent assault on the forces of the unknown, to force them to bow before man.”

That quote is from Filippo Marinetti, from the Futurist Manifesto, which he wrote in 1909. If you’re not familiar with Marinetti, here’s the lowdown, and it’ll highlight the problem. Like I said, it was uncited, but if you know who Marinetti is and the story, then this is the biggest warning flag in the entire document, out of the entire list of warning flags that we’ve already seen. Marinetti, of course, was the founder of the Futurist school in Italy.

Here are some of the elements of Futurism: technology, growth, industry, modernization. Okay, but also these other elements: speed, violence, the destruction of museums, war as a necessity for purification… Hmmm. Now, Marinetti would go on to get into politics in Italy a few years later, and work with another group of Italians on another manifesto in 1919. That, of course, is the Fascist Manifesto, which he co-authored. So there’s a direct lineage from Marinetti’s work to elements that appear in that later manifesto, and to the movements that adopted it as well.

If we take all these things, all these red flags, together: a list of neoliberal economists, denialism and beliefs contrary to facts while downplaying education, self-identifying as predators, accelerationism, lists of enemies, and citing proto-fascist literature. All this combined is one massive red flag, and it’s why I am not a techno-optimist.

So, that being said, then how would I identify?

And that’s a fantastic question, because judging by the words alone, “techno-optimist” is pretty close to where I am. I believe that technology can be used as an assistive tool, as we’ve stated prior, that it can help people out, and that it is generally an extension of man, something we can use to add to our capabilities.

So I might be a techno-optimist, or at least I was until October 16th. Other terms I’ve seen floating around that I could self-identify as include things like techno-revivalist, which is close, but not quite. That feels like it ties more into experimental archaeology, where we try to recreate the past, or use methods of the past in the modern era to figure out what people were doing. It’s a fascinating field, and we should talk about it some time, but that isn’t really where I am.

Solarpunk isn’t quite where I’m at either, or cyberpunk, for that matter. I don’t think I really fit within any of the punk genres. I’m pretty straight-laced. I’m a basic B, to be honest.

Taking the opposite stance doesn’t work either; I’m not a techno-pessimist. I’m generally hopeful for the opportunities that new technologies can bring. I think that’s part of the challenge: there isn’t a good label for where I sit, aside from what is now defined as a techno-optimist. And I don’t think the term can be reclaimed, because, as I went through the number of red flags there, the well is really, well… well and truly poisoned. With the breadth of reach that that particular manifesto got, and the reporting that it saw in multiple areas, I don’t think the term would ever come back. Even though, much like Michael Bolton in Office Space: why should I change if they’re the ones that suck, right?

So I think techno-optimist as a term is where it is, and that will not change. But I am almost anything but that. And why? Well, part of it, I think, is just exposure and upbringing. As I said, I’ve had a significantly different path, one that doesn’t lead through Silicon Valley, one that’s not even in the same solar system as a billionaire.

When you have to go about the business of daily life, when you’re almost middle class, you’re going to have a very different view of technology and its uses, and how it can be used for exploitation as well. And I think that comes through in some of our work.

So, to tie this back to the beginning, to close the loop on why we had to triangulate with examples before we could assay the manifesto: if exposure and experience are what lead one to be able to make quick judgments about a particular work and see where the references are coming from, they also can allow one to see some of the harms that might come about from exposure to those statements as well.  And that’s really what we’re trying to do: to bring some of those associations to light through this particular podcast episode.

So I still search for a term: retro-tech enthusiast, just tech enthusiast perhaps, media historian, media archaeologist, etc. I’ll keep working on it. And once we figure it out – and the figuring it out is, I think, going to be the journey of this podcast as a whole – I’ll let you know.

But if you have any great suggestions in the meantime, reach out and let me know at drimplausible at implausipod. com or on the implausi dot blog. I’ll see you around. Take care.

Implausipod E015 – EEE: Embrace, Extend, Extinguish

EEE, or Embrace, Extend, Extinguish, has been making the rounds again in 2023, as a number of Silicon Valley tech companies have come under the spotlight for their business practices, showing some striking similarities to a strategy outlined by Microsoft in an internal memo back in the 1990s. Everything old in tech is new again.

Transcript

In 1999, Judge Thomas Penfield Jackson of the U.S. District Court for the District of Columbia issued his findings in the case of United States v. Microsoft Corp., the antitrust suit brought by the government against the tech giant over allegations that it was using its power to bundle the browser with the Windows operating system, and that this constituted an abuse of its monopoly position within the desktop computer market.

During the course of the trial, it was revealed that Microsoft had an internal policy of “embrace, extend, and innovate”. But witness Steven McGeady testified that privately, Microsoft executives referred to this as “embrace, extend, extinguish”, with the goal of marginalizing or eliminating direct competition. Other tech companies started taking notes for use in the 21st century. Let’s talk about Triple E in this week’s episode of the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and since we came back from hiatus with episode eight, we’ve mentioned EEE a few times in relation to things like the Fediverse, so I thought there was no better time than now to get caught up.

First off, the reason why a case from the 90s still matters in 2023 is that it never really went away, and here and now we’re starting to see some more signs of it with some big players, both new and old. Potential examples in 2023 include Facebook, Google, and again Microsoft, and it may affect things that you use on a daily basis.

Let’s cover off the main points: what is Embrace, Extend, Extinguish, and what does it mean for computing and the internet? EEE, or Triple E. That’s right, this episode is all about the game, and the game is follow-the-leader. Anywho, Triple E was an internal policy pursued by Microsoft in the 90s in relation to its competition in a number of key markets. It was first revealed during the antitrust case that I mentioned in the open, where an internal memo brought into evidence showed that they referred to the strategy as Embrace, Extend, and Innovate. This was part of a number of texts submitted into evidence, including emails and quotes from Microsoft executives and others, like Steven McGeady of Intel, where he was a VP at the time.

During his testimony at the trial, McGeady revealed that they, Microsoft, had referred to it as “extinguish” internally. Now, these documents are from the antitrust case, and are separate from another set of docs, collectively referred to as the Halloween documents, which were leaked to Eric S. Raymond and detailed Microsoft’s attitudes and plans regarding Linux and open-source software. Those showed that Microsoft was still aggressive against competition, but had to use a different approach due to the distributed and non-commercial nature of the FOSS community. Here, they pursued tactics like the deployment of FUD (fear, uncertainty, and doubt) or announcing vaporware products: stuff that would compete with a given product if it came to market, but that they had no intention of ever actually making.

They’d also engage in the practice of extending protocols, developing new ones, and de-commoditizing existing protocols in order to crater the market for the stuff that was running on them. And from these latter documents, we can better see what their corporate strategy goals were: a set of social and policy actions used to maintain their market position against other vendors who often had better technological solutions. It’s similar to what we talked about in the Endless September episode, where AOL had a technically inferior product but was able to compete on presence in the marketplace, with the ubiquitous floppy disks and CD coasters and a streamlined user experience. This was one of the reasons that the case was so important.

By using their market size to shut other vendors out of the market, they were stifling innovation and preventing competition. And this is something that raised some eyebrows back in the 90s. With the original case, Microsoft ran afoul of the Sherman Antitrust Act. It was a business-to-business offense, B2B, so when the affected parties petitioned the U.S. government about the impacts, and concerns were raised about the lack of competing alternatives, they, the government, eventually took action.

As a reminder, this was before smartphones were a thing and the market shifted. Apple had a tiny fraction of the desktop market, around 3 percent in 1999, Linux was very niche, and other operating systems mostly found use within specific corporate use cases, with a tiny user base compared to Windows as well. All told, Microsoft was on about 95 percent of all desktops and laptops sold, and this number was actually growing through the Y2K period up to the dot-com crash.

And the reason we’re bringing it up here again in 2023 is that apparently everything old in tech is new again. There’s been the rollout of some new apps, programs, and tools, and there are a number of court cases actually taking place right now, in the fall of 2023, involving major tech players that you’re not hearing about because of other criminal enterprises currently in the news.

So I’m going to take a moment to cover each of them in turn and how they relate back to Triple E, and cover some of the theoretical background while we’re doing this as well. And the first one we want to talk about, of course, is the one that started this whole thing. Threads, the Twitter-like communication app launched by Meta, née Facebook, under their Instagram brand, was made available to users on July 5th, 2023.

Now, prior to its launch, there had been rumors of its development. In an article on TheVerge.com on June 8th, Alex Heath went into the details of the app, which at the time was called “Project 92”. The main rumor was that it would be using something called the ActivityPub protocol, which, as we’ve discussed plenty of times, is the thing that’s powering Mastodon and the rest of the Fediverse. And this rumor caused a lot of consternation, especially within the Fediverse at large, mostly due to Meta’s past track record, which hasn’t been great. If you’re wondering what kind of things might be involved, just do a web search for Cambridge Analytica, or for Rohingya in Myanmar. Don’t search for it on any Meta-owned properties, because you won’t find much. For those reasons and more, a number of the people already on the Fediverse, the early adopters of the protocol, were engaged with it precisely because it was explicitly not a Facebook property.

So when a post was made on June 18th by an admin of one of the larger instances on Mastodon saying that, yes, they’d been in discussion with Meta regarding the ActivityPub protocol and the possible integration that would take place, there was a lot of uproar and consternation, and one of the things that got mentioned a lot during the ensuing discussion was the idea of Triple E. Admins of some other instances, and some users, said they were going to pre-block Meta, because they were concerned that any connection with them might allow leverage, or allow their information to be shared.

You know, they’d be turned into a commodity, much like we’ve discussed earlier. There are those online who don’t want any part of Facebook. And the other concern was that Facebook would go full Triple E on the ActivityPub protocol: embrace it, by letting Threads link to it directly; extend it in some Meta-friendly way, probably by allowing advertising or something similar; and then extinguish it at some unspecified point in the future, as they roll on to a new program or platform, in much the same way we saw as standard operating procedure for Microsoft back in the 90s. In so doing, the people who had found a home away from Meta, away from Facebook, would lose their online homes, so you can understand their concerns. But there’s a related set of concerns tied directly to the Triple E phenomenon, and that is the notion of path dependency and vendor lock-in.

There’s an old story, we might call it a meme, that does the rounds on the internet every six to nine months or so. Stop me if you’ve heard it. It goes: the size of the Space Shuttle’s boosters was determined by the width of a Roman chariot, or two donkeys, or something like that. I’ll let you look it up; there are a couple of recent examples. Also, I’m not going to stop even if you’ve heard it.

Here’s the full story. As it goes, the Space Shuttle boosters are the diameter they are because they had to be shipped via rail cross-country from Utah to Florida. Standard-gauge railroads in North America are 4 feet 8.5 inches. The standard rail gauge is that size because the Americans bought their early equipment from the English, who used a similar gauge for their equipment. And this was fixed because the English tram manufacturers designed their wagons to fit the roads of the English countryside. And those were set at that distance because of the Roman chariots that had driven on the roads millennia before and worn grooves into them, which had then been used by generations of Englishmen. So the width of the train tracks was directly influenced by the width of two Roman horses, or donkeys. There are variations in the story; you may have heard it differently.

It is, of course, nonsense. 

The size of a donkey had very little to do with the size of the Space Shuttle. There were multiple different standards of rail line in use in North America between 1831 and 1981, when the Space Shuttle first launched, though its design had begun significantly earlier. Any of these could have become the standard, though again, some gauges had significant advantages over others. More on this later. But tracing the chain of contingencies, facts, and counterfactuals necessary to draw a straight line from donkey carts to rocket boosters requires levels of hand-waving once reserved for members of the royal family. It just ain’t a thing.

Especially when you consider that the diameter of the SRB is 12.17 feet. You’d need to be doing some Steiner math to get that story to work. But what it does illustrate is the idea of path dependency: the links back to initial embedded choices. And I know this may seem like an odd rhetorical strategy, undermining a specific well-known example in the aid of explaining what it is, but in this case the particular illuminates the general case, even if it doesn’t specifically abide by it.
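For the record, here’s the Steiner math you’d be up against, using only the figures already given in the text:

```python
# Comparing the story's own numbers: standard rail gauge vs. SRB diameter.
gauge_in = 4 * 12 + 8.5           # 4 ft 8.5 in = 56.5 inches
srb_diameter_ft = 12.17           # SRB diameter, per the text
srb_diameter_in = srb_diameter_ft * 12

print(f"Rail gauge:   {gauge_in:.1f} in ({gauge_in / 12:.2f} ft)")
print(f"SRB diameter: {srb_diameter_in:.1f} in ({srb_diameter_ft:.2f} ft)")
print(f"Ratio: {srb_diameter_in / gauge_in:.2f}x the gauge")
# -> the booster is ~2.6x the gauge, so the chariot story doesn't
#    even fix the one number it claims to explain.
```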

Path dependence can be a real issue, especially when it comes to technology. It’s usually brought up in terms of standards; we can think of things like the QWERTY keyboard design, or the various forms of coffee pods, as shaping the direction of the market. And these can both be true, but to really get a handle on path dependency, let’s think about it in terms of something massive, like really big: the automotive market in North America. It’s so big and entrenched that making substantive changes to it would be extremely difficult. So how would one go about changing the auto system? By using something that overlaps, to a greater or lesser degree, with the grooves that are already cut. You add in electric vehicles that mirror the shape of, and conform to, the systems that are already present, and offer charging stations that resemble in some fashion the filling stations already familiar to your audience, so that they can be more easily adopted. Moving to electric vehicles that look like cars leverages over a century of design decisions and development, and allows for an easier adoption by new customers, or at least that’s how the thinking goes. So electric cars follow the path dependency laid down by successive generations of gas-powered automobile designs and drivers.

What’s related to path dependency, though distinct from it, is the idea of technological lock-in. And this is where those K-Cups and keycaps come back into the picture. The keycaps in this instance are the ones that spell out Q W E R T Y on the top of your keyboard, though in this day and age, you can order a version that spells out anything you like. (At some point, we’re going to have to have a chat about innovation as a driver of change in secondary or tertiary markets, but we’ll move on for now.)

So the idea of path dependency really came about from the field of evolutionary economics. Paul David wrote about the risks of technological lock-in in 1985, in his famous paper “Clio and the Economics of QWERTY”. Okay, famous among economists, but still famous. Clout’s clout, right? David was writing about the historical competition between two famous keyboard layouts: the QWERTY keyboard, the one you’re likely familiar with, and the DSK, or Dvorak Simplified Keyboard. The DSK was patented in 1932, and it was faster, better, more efficient; the U.S. Navy even tested it out and found that it only took about 10 days or so to recover the cost of retraining. The DSK, or Dvorak keyboard, was about 20 to 40 percent more efficient than the QWERTY version.

Now, the QWERTY version had already existed for a while. It was patented between 1909 and 1924, depending on what country you’re in, and originally developed by Christopher Latham Sholes of Milwaukee, Wisconsin, and some of his partners, including Carlos Glidden and Samuel Soulé. Now, they were engaged with, uh, let’s see, I guess, entrepreneur James Densmore; promoter slash venture capitalist, you might want to say. And Densmore had some contacts with a manufacturing company that had some significant machine-tool capabilities, an arms manufacturer by the name of Remington. They were also getting into sewing machines at the time, you know, diversifying the portfolio, so to speak. And while business was good during the Civil War, the economic downturn that followed in the 1870s meant that sales weren’t much: just, for the record, about 1,200 units a year. So at the time, typewriter sales were more like what we see with mainframe computer systems today, but in the 1870s there was actually a lot of development going on. Edison was working on his teletype machines, and there are patents for that in the 1870s. There was a lot of other communication equipment being developed, and it was being rolled out across the country.

So there was actually a lot of innovation taking place within that space, and in that we have the development of the QWERTY keyboard. There were other competing layouts as well; like we said, the Dvorak didn’t come around until the 20th century. There was the “Ideal” keyboard, which had the sequence D H I A T E N S O R in the home row, those ten letters being ones you could compose 70 percent of the words in the English language with. And all of this development was indicative of a lot of growth going on in the field. The singular advantage that QWERTY had was that, you know, it slowed down the typist so the machine didn’t jam as often. And that gave it a minute but real advantage over some of the other competitors, in addition to having Remington as the manufacturer. This advantage was multiplied with the advent of touch typing in the 1880s, as the hunt-and-peck method fell out of use. Keyboardists who could type by touch were in demand, because that learned skill of being able to use a QWERTY keyboard meant that they were that much more efficient, at least compared to the hunt-and-peck typists, and again, like we said, the tech wouldn’t jam up and result in a slowdown. And it was this learned skill that led to the technological lock-in, and to a suboptimal design like the QWERTY keyboard becoming the de facto standard.

As David described it, there were three characteristics that led to this: technical interrelatedness, economies of scale, and the quasi-irreversibility of learning the skill.

Now, the technical interrelatedness was the link between the hardware of the typewriter and the software of the typist, or we might rather say the wetware of the typist, to use Rudy Rucker’s term. I mean, the particular arrangement of any given keyboard was largely irrelevant, but the installed user base, so to speak, of typists who could use that arrangement quickly and efficiently by memory was much more important.

The economies of scale were linked directly to the manufacturing capabilities that Remington had. As we said, they had a great machine-tool setup, so they were able to produce the equipment. And then, as other vendors adopted the layout, it was more and more available for other typists to use. So if a typist was going to pick among the available options, they might as well learn QWERTY, because employers were paying for people who could use it.

And the training wasn’t free, right? The typist had to learn it on their own and then bring the skill to the company; it wasn’t being handed out there. And this relates to the quasi-irreversibility as well. Like, you can retrain, but it’s going to cost you money, and while you’re retraining, you’re obviously not earning anything. You may still have some crossover issues, and you don’t know if the thing you’re retraining to is going to be any better than the one you already know. In this case, if you know QWERTY, you’re probably going to stick with a QWERTY keyboard, or demand it at your new employer: “I can offer QWERTY, do you have it?” Similar to what we see with Adobe Photoshop and other entrenched tools today.

But this is ultimately one of the problems and downsides of path dependency and lock-in. To quote David: “competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system.” End quote. Nobody could really see that the technical problems the QWERTY system was designed to solve would soon cease to matter, and here we are in 2023, with electronic keyboards still using the same layout, even though the constraint is long gone: it was designed to resolve a mechanical issue that came about 150 years ago.

So yeah, if you don’t necessarily have the best technical solution, like VHS or AOL or Microsoft in this instance, try locking in the market by other means. The path dependency means it may pay off for you in the long run, if you can stick around.

And just to bring this back around full circle to our example of Roman roads, rail lines, and rocket ships: that’s an example of path dependency. There’s no direct causal relationship, which is what everybody gets wrong about it. As David states: “important influences upon the eventual outcome can be exerted by temporally remote events, including happenings dominated by chance.” There are things that shape our economic decisions that are well beyond our ability to fathom, or even control.

Now, earlier I did state that there were a number of examples like Triple E, or things like it, in the news, and it’d be prudent to get on to the next one. One of the bones of contention in the Microsoft antitrust case was the bundling of Internet Explorer with the Windows operating system. People said that was anti-competitive, and that Microsoft was using its monopoly power to push Internet Explorer as a de facto standard. And that’s one of the ways that lock-in can happen: when a functional standard becomes a de facto standard. Currently, we’re seeing this with Chromium, which is the engine behind Google’s Chrome browser and used by everything from Edge to Opera to Chrome itself. And it’s also in the default install on every Android device.

Much like how Microsoft’s Windows in the 1990s was about 95 percent of the personal computing market, Google’s Chromium makes up about 95 percent of the browser market in 2023. The alternatives are pretty much limited to Firefox, Safari, and a few derivatives. So when Google decides to make major changes to Chromium, it can reverberate throughout the industry; it affects everybody. And in late July and early August, they started doing that. They rolled out something called WEI, or Web Environment Integrity, as a proposed change to Chromium. It first appeared in July as a proposal in the GitHub repos of some of Google’s Chromium engineers, and it received a pretty universal outcry from those who were paying attention to it. What it proposes is an attestation check between the browser and the hardware of the machine. Ostensibly it would be used to combat online piracy or cheating in games, but the problem is that those are edge cases, and it could be used for other purposes. One of the most noticed is that it could be used to detect whether somebody’s running an ad blocker on their browser, or to single out specific extensions. It turns the internet into a permission-based system rather than an open-by-default one. It turns everything into a walled garden run by Google, where they can pass judgment on users based on whatever opaque criteria they might have. And while that’s one example, it’s not the only case currently involving Google.
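To make that gating logic concrete, here’s a purely conceptual sketch in Python of how attestation-gated serving works in general. To be clear, these names and fields are hypothetical, not the WEI proposal’s actual API or token format; the point is the shape of the decision, where an opaque verdict from an attester determines what a user gets served:

```python
# Conceptual sketch of attestation-gated serving -- NOT the actual WEI API.
# A server receives a signed "environment integrity" verdict alongside a
# request and decides what to serve based on it.

from dataclasses import dataclass

@dataclass
class AttestationVerdict:
    valid_signature: bool      # was the token signed by a recognized attester?
    environment_trusted: bool  # the attester's opaque judgment of the client

def verify_token(token: dict) -> AttestationVerdict:
    # Stand-in for real cryptographic verification against an attester's
    # public key; here we just read mock fields from a dict.
    return AttestationVerdict(
        valid_signature=token.get("attester") == "recognized-attester",
        environment_trusted=token.get("environment") == "unmodified",
    )

def handle_request(token: dict | None) -> str:
    if token is None:
        return "degraded or blocked"   # client offered no attestation
    verdict = verify_token(token)
    if not verdict.valid_signature:
        return "blocked"               # forged or unknown attester
    if not verdict.environment_trusted:
        return "degraded or blocked"   # e.g. a client the attester dislikes
    return "full content"

# The worry described above: the "trusted" judgment is opaque, so whoever
# runs the attester decides who gets the open web and who gets the wall.
print(handle_request({"attester": "recognized-attester", "environment": "unmodified"}))
print(handle_request({"attester": "recognized-attester", "environment": "modified"}))
print(handle_request(None))
```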

The other one going on right now is the antitrust case brought by the U.S. Department of Justice against Google for its monopoly power with regard to online search. And if you haven’t heard much about that one, it’s not surprising, because Google’s been doing pretty much everything it can to limit the exposure of any information coming out of the trial. Much of it’s happening behind closed doors. There’s been some reporting from The New York Times, Bloomberg, and Ars Technica, and I’ll put some links to that in the notes.

And that’s not the only case going on, because on September 26, 2023, the FTC in the U.S. and 17 state attorneys general sued Amazon.com, alleging that the online retail and technology company is a monopolist that uses a “set of interlocking, anti-competitive, and unfair strategies to illegally maintain its monopoly power.” The FTC and its state partners say Amazon’s actions allow it to stop rivals and sellers from lowering prices, degrade quality for shoppers, overcharge sellers, stifle innovation, and prevent rivals from fairly competing against Amazon. The complaint alleges that Amazon violates the law not because it is big, but because it engages in “a course of exclusionary conduct that prevents current competitors from growing and new competitors from emerging.” At the time of recording, that’s just a couple of days old, so, as they say, more to come.

Now, there’s nothing in particular that links an alleged monopoly in online shopping to another one alleged in online search, to a potential one in, uh, social networking, to another that impacts online browsing, to, uh, a case dealing with a monopoly in operating systems and browsers from, you know, 20 years ago. But there are some commonalities, aside from them all being massive tech companies, and in some cases the same ones. As Bill Gates commented in 2019, on the 20th anniversary of the antitrust suit, one of the things the tech companies learned is that they had to be more present in Washington and to lobby more effectively.

Back in the 90s, it was a point of pride for Bill Gates that they never really engaged with lobbyists. But they changed their strategy on that following the antitrust trial, and everybody else in the tech industry took notes and followed suit. Now, depending on your level of involvement in online tech news, a lot of what we’ve shared here may seem like common knowledge, but not everyone shares that background.

What we’re trying to do is just bring attention to the ongoing events that are still taking place, especially with everyone’s eyes thoroughly focused on things like LLMs and generative AI tools like ChatGPT. These are just current examples, high-profile ones that attract our attention. And there are others happening at various levels of technological development that we might not see, or that might not have a large impact, just because they affect a very niche audience and don’t have the broad reach of things like shopping and search and browsing and social media.

What I hope to bring to your attention is the impact that things like lock-in and path dependency can have on that development: they can reduce the available options, and we may get stuck with an outmoded technology, something like the QWERTY keyboard, when there would be better solutions available to us.

Because it keeps happening again and again and again, maybe it isn’t necessarily a case of path dependency, where we keep falling into ruts that were well worn before. Rather, perhaps the environment as a whole affords certain outcomes under the regulatory framework of monopoly capitalism that we’ve discussed in the past; we see it happening more often in such a framework. So rather than there being one particular path, the slopes of the hill encourage flows in certain directions. Exploring this would shift us more thoroughly into evolutionary economics full stop, which we’ll leave for a future episode, a path off in the distance.

Next time, in episode 16, we’ll be looking at spreadable media, which we’ve hinted at earlier. And with the WGA strike potentially resolved by the time you hear this, and hopefully the SAG-AFTRA strike soon to follow, we may be returning to some media-focused episodes soon, too. Until next time, I’ve been your host, Dr. Implausible. You can contact me at drimplausible at implausipod. com. Have fun.

Implausipod E0012 – AI Reflections

AI provides a reflection of humanity back at us, through a screen, darkly. But that glass can provide different visions, depending on the viewpoint of the observer. Are the generative tools that we call AI a means of advancement and emancipation, or will they be used to further a dystopic control society? Several current news stories give us the opportunity to see the potential paths before us, leading down both these routes. Join us for a personal reflection on AI’s role as an assistive technology in this episode of the Implausipod.

https://www.buzzsprout.com/1935232/episodes/13472740

Transcript:

In the week before August 17th, 2023, something implausible happened. There was a news report that a user looking for can’t-miss spots in Ottawa, Ontario, Canada, would be returned some unusual results on Microsoft’s Bing search. The third result down, an article from MS Travel, suggested users could visit the Ottawa Food Bank if they were hungry, and that they should bring an appetite.

This was a very dark response, a little odd, and definitely insensitive, making one wonder if it was done by some teenage pranksters or hackers, or if there was a human involved in the editing decisions at all, because initial speculation was that this article, credited to Microsoft Travel, may have been entirely generated by AI. Microsoft’s subsequent response in the week following was that it was due to human error, but doubts remain, and I think the whole incident allows us to reflect on what we see in AI, and what AI reflects back to us… about ourselves. Which we’ll discuss in this episode of the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and today on episode 12, we’re gonna peer deeply into that glass, that formed silicon that makes up our large language models and AI, and find out what they’re telling us about ourselves.

Way back in episode three, which admittedly is only nine episodes back but came out well over a year ago, we looked at some of the founding figures of cyberpunk, and of course one of those is Philip K. Dick, who’s most known for Do Androids Dream of Electric Sheep?, which became Blade Runner, and now The Man in the High Castle, and other works as yet unadapted, like The Three Stigmata of Palmer Eldritch. But one of his most famous works was, of course, A Scanner Darkly, which had a rotoscoped film version released in 2006, starring Keanu Reeves. Now, the title is a play on the biblical verse from 1 Corinthians, where it’s phrased as looking “through a glass darkly”. There’s some ambiguity there as to whether it’s a glass or a mirror, or in our context a filter, or in this case a scanner or screen, with the latter two being the most heavily technologized of them all. But the point remains, whether it’s a metaphor or a meme: by peering through the mirror, the reflection that we get back is but a shadow of the reality around us.

And so too it is with AI. The large language models, which have been likened to “auto-complete on steroids”, and the generative art tools (which are like the procedural map-makers that we discussed in an icebreaker last fall) have gained an incredible amount of attention in 2023. But with that attention have come some cracks in the mirror, and while there is still a lot of deployment of them as tools, they’re no longer seen as the harbinger of AGI, or artificial general intelligence, let alone a superintelligence that will lead us on a path through a technological singularity. No, the collection of programs that have been branded as AI are simply tools, what media theorist Marshall McLuhan called “extensions of man”, and it’s with that dual framing, the mirror held extended in our hand, that I wanna reflect on what AI means for us in 2023.

So let’s think about it in terms of a technology. In order to do that, I’d like to use the simplest definition I can come up with, one that I use as an example in courses I’ve taught at the university. So follow along with me and grab one of the simplest tools that you may have nearby. It works best with a pencil, or perhaps a pair of chopsticks, depending on where you’re listening.

If you’re driving an automobile, please don’t follow along; try this later, once you’re safely stopped. But take the tool and hold it in your hands as if you were about to use it, whether to write or draw or to grab some tasty sushi or a bowl of ramen. You do you. And then close your eyes and rest for a moment.

Breathe, and then focus your attention down to the tool in your hands, held between your fingers, and reach out. Feel the shape of it: you know exactly where it is, and with a stretch of your attention you can feel where the end of it actually is. The tool has now become part of you, a material object that is next to you and extends your senses and what you are capable of.

And so it is with all the tools that we use, everything from a spoon to a steam shovel, even though we don’t often think of them as such. It also includes the AI tools that we use, that constellation of programs we discussed earlier. We can think of all of these as assistive technologies, as extensions of ourselves that multiply our capabilities. And open your eyes, if you haven’t already.

What this quick little experiment helps demonstrate is exactly how we may define technology, here using a portion of McLuhan’s version: we can see it as an extension of man. But there have been many other definitions of technology. We can use versions that focus on the artifacts themselves, like Feibleman’s, where tech is “materials that are altered by human agency for human usage”, but this can be a little instrumental. And at the other extreme, we have those from the social construction school, like John Law’s definition of “a family of methods for associating and channelling other entities and forces, both human and non-human”. Which, when you think about it, does capture pretty much everything relating to technology, but it’s also so broad that it loses a lot of its utility.

But I’ve always drawn a middle line, and my personal definition of technology is “the material embodiment of an artifact and its associated systems, materials, and practices employed to achieve human ends”. I think we need to capture both the tool and the context, as well as the ways they’re employed and used, and I think this definition captures the generative tools that we call AI as well. If we can recognize that they’re tools used for human ends, and not actors with their own agency, then we can change the frame of the discussion around these generative tools and focus on what ends they’re being used for.

And what they’re being used for right now is not some science-fictional version, neither the dystopic hellscapes of The Matrix or Terminator, nor, on the flip side, the more utopic versions, the “Fully Automated Luxury Communism” we’d see in the post-scarcity societies of Star Trek: The Next Generation or Iain M. Banks’s Culture series. Neither of these is coming true, but those poles – that ideation, those science fiction versions that drive our collective imagination, the social imaginaries we talked about a few episodes ago – represent the two ends of a continuum, of a discussion, a dialectic between the utopic and the dystopic in the way we frame technology.

As Anabel Quan-Haase notes in their book on Technology and Society, those poles, the utopic idea of technology achieving progress through science and the dystopic idea of technology as a threat to established ways of life, are both frames of reference. They could both be true, depending on the point of view of the observer. But as we said, it is a dialectic: there is a dialogue going back and forth between these two poles continually. So technology in this case is not inherently utopic or dystopic; we have to return again to the ends the technology is put towards, the human ends. So rather than utopic or dystopic, we can think of technology as being either emancipatory or controlling, and it’s in this frame, through this lens, this glass, that I want to peer at the technology of AI.

The emancipatory frame views these generative AI tools as an assistive technology, and it’s through this frame, this lens, that we’re going to look at the technology first. These tools are exactly that: they are liberating, they are freeing. And whenever we want to take an empathetic view of technology, we wanna see how it may be used by others who aren’t in our situation. And that situation means they may be doing okay, they might even be well off, but they may also be struggling. There may be issues or challenges that they have to deal with on a regular basis that most of us can’t even imagine. And this is where my own experience comes from, so I’ll speak to that briefly.

Part of my background: when I was doing my fieldwork for my dissertation, I was engaged with a number of the makerspaces in my city, and some of them were working with local need-knowers, persons with disabilities, through groups like the Tikkun Olam Makers and Makers Making Change. These groups worked with persons with disabilities to find solutions to their particular problems, problems for which there often wasn’t a market solution available because one wasn’t cost-effective. You know, the “capitalist realism” situation that we’re currently under means that a lot of needs, especially for marginal groups, may go unmet. And these groups came together to try and meet those needs as best they could through technological solutions, using modern technologies like 3D printing or microcontrollers or what have you, and they did it through regular events, whether it was a hackathon, a monthly meetup group, or using the space provided by a local makerspace. And in all these cases, what these tools do is liberate people from some of the constraints or challenges they experience in daily life.

We can think of more mainstream versions, like a mobility scooter that allows somebody with reduced mobility to get around, more fully participate within their community, and meet the needs they have on a regular basis; even something as simple as that can be really liberating for somebody who needs it. We need to be cognizant of that, because as the saying goes, we are all at best just temporarily able, and we never know when a change may be coming that could radically alter our lives. So that empathetic view of technology allows us to think with some forethought about what may happen, as if we or someone we love were in that situation. And it doesn’t even have to be somebody that close to us; we can have a much more communal or collective view of society as well.

But to return to this liberating view, we can think about it in terms of those generative tools, whether they’re for text or for art, or for programming, or even helping with a little bit of math. We can see how they can assist us in our daily lives, by either fulfilling needs or just allowing us to pursue opportunities that we thought were too daunting. While the generative art tools like DALL-E and Midjourney have been trained on already-existing images and photographs, they allow creators to use them in novel ways.

It may be that a musician can use the tools to create a music video where before they never had the resources, time, or money to pursue that; it allows them to expand their art into a different realm. Similarly, an artist may be able to create stills that go with a collection or accompany the writing they’re working on, or an academic could use them for slides to accompany a presentation they’ve spent time on, or a YouTube video, or even a podcast and its title bars and the like (present company included). My own experience when I was trying to launch this podcast was that there was all this stuff I needed to do, and the generative art tools, the cruder ones that were available at the time, allowed some of the art assets to be filled in, and that barrier to launch, that barrier to getting something going, was removed.

So: emancipatory, liberating. Even though at a much smaller scale, those barriers were removed and it allowed for creativity to flow in other areas. And it works similarly across these generative tools, whether it’s putting together some background art, a campaign map, or a story prompt. Maybe you need some background for characters that are part of a story, as NPCs when you’re a Dungeon Master, or what have you, or even just something to bounce coding ideas off of and refine them. I mean, the coding skills of these tools are rudimentary, but they do allow for something functional to be produced.

And this leads into some of the examples I’d like to talk about. The first one is from a post by Brennan Stelli on Mastodon on August 18th, where he said that we could leverage AI to do work which is not being done already because there’s no budget, time, or know-how. There’s a lot of work that falls into this space: stuff that needs to be done but is outside the scope of a particular project. This could include something like developing visualizations that allow him to better communicate an idea in a fraction of the time, minutes instead of the hours it would normally take to do something like that. And so we can see that Brennan’s experience mirrors a lot of our own.

The next example is a little bit more involved, from an article written by Pam Belluck and published on the New York Times website on August 23rd, 2023. She details how researchers have used predictive text, along with AI-generated facial animations that drive an avatar and synthesized speech, to assist a stroke victim in communicating with their loved ones.

And the third example, one that hit a little closer to home, was that of a Stanford research team that used a BCI, or brain-computer interface, along with AI-assisted predictive text generation, to allow a person with amyotrophic lateral sclerosis, or ALS, to talk at a regular conversational tempo. The tools read the neural activity associated with the movement of the facial muscles and translate it into text. These are absolutely groundbreaking and amazing developments, and I can’t think of a better example of how AI can be an assistive technology.
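The published details of systems like that are well beyond the scope of this episode, but the general pattern, as I understand it and simplifying enormously, is a pipeline: a decoder turns neural signals into noisy guesses about attempted speech, and a language model picks the words that best explain those guesses. Here’s a hypothetical sketch of that second, rescoring step; all the names, scores, and vocabularies are invented for illustration.

```python
import math

# Hypothetical decoder output: candidate words with (made-up) log-likelihood
# scores given the recorded neural activity.
decoder_scores = {"hello": -1.2, "yellow": -1.4, "hollow": -2.0}

# A language-model prior: how plausible each word is in the current context.
lm_scores = {"hello": -0.5, "yellow": -3.0, "hollow": -4.0}

def best_word(decoder: dict, lm: dict, lm_weight: float = 1.0) -> str:
    """Combine decoder evidence with a language-model prior in log space."""
    return max(decoder, key=lambda w: decoder[w] + lm_weight * lm.get(w, -math.inf))

print(best_word(decoder_scores, lm_scores))  # "hello": the prior breaks the near-tie
```

Roughly speaking, that predictive element is what helps get the tempo up toward conversational speed: the language model can commit to a likely word even when the neural evidence alone is ambiguous.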

Now, most of these technologies are confined to text and screen, to video and audio, but often when we think of AI, we think of mobility as well. The robotic assistants that have come out of research labs like Boston Dynamics have attracted a lot of the attention, but even there we can see some of the potential for assistive technology. The fact that it’s embodied in a humanoid robot means we sometimes lose sight of that, but that is what it is. The video they released in January of 2023 shows an Atlas robot acting as an assistant on a construction site, providing tools and moving things around in aid of the human who’s the lead on the project. It allows a single contractor working on their own to extend what they’re able to do, even if they don’t have access to a human helper. So it still counts as an assistive technology, even though we can start to see the dark side of the reflection through this particular lens: the fact that an emancipatory technology may mean emancipation from the work that people currently have available to them.

In all of these instances, there’s the potential for job loss, that the tools will take the place of someone doing that work, whether it’s in writing, or as an artist, or a translator, or transcriber, or a construction assistant, and those are very real concerns. I do not want to downplay that. Part of our reflection on AI has to take these into account: the dark side of the mirror (or the flip side of the magnifying glass) can take something that can be helpful and exacerbate its harms when it’s applied to society at large. The concerns about job loss are similar to the concerns we’ve had about automation for centuries, and they’re still valid. What we’re seeing is an extension of that automation into realms that we thought were previously exclusive to human actors: creators, artists, writers, and the like.

This is why AI and generative art tools are such a driving and divisive element in the current WGA and SAG-AFTRA strikes: the future of Hollywood could be radically different if they see widespread usage. And beyond the automation and potential job loss, a second area of concern is that ChatGPT and the large language models don’t necessarily have any element of truth involved; they’re just producing output. Linguists like Professor Emily Bender of the University of Washington, on the Mystery AI Hype Theater 3000 podcast, have gone into extensive detail about how the output of ChatGPT cannot be trusted. It has no linkage to truth, and other scholars have gone into the challenges of using ChatGPT or LLMs for legal research or academic work or anything along those lines. I think it still has a lot of potential and utility as a tool, but it’s very much a contested space.

And the final area of contestation that we’ll talk about today is the question of control. Now, that question has two sides. The first is the control of the AI itself: the one that most often surfaces in our collective imaginary, the idea of rogue superintelligences or killer robots, gets repeated in TV, film, and our media in general, and it does get addressed at an academic level in works like Stuart Russell’s Human Compatible and Nick Bostrom’s Superintelligence. They both address the idea of what happens if those artificial intelligences get beyond human capacity to control them.

But the other side of that is the control of us, the control of society. Now, that gets replicated in our media as well, in everything from Westworld to the underlying themes of the TV series Person of Interest, where The Machine is a computer system developed to detect, anticipate, and suppress terrorist action using the tools of the post-9/11 surveillance state that it has access to.

Ever since Gilles Deleuze wrote his “Postscript on the Societies of Control” back in 1990, we’ve had a name for the shift that occurred in our societies: from the sovereign societies of the Middle Ages and Renaissance, through the disciplinary societies that typified the 18th and 19th centuries, to the 20th- and 21st-century control society, where the logics of the society are enforced and regulated by computers and code. And while Deleuze was not talking about algorithms and AI in his work, we can see how they’re a natural extension of what he was describing: how the biases that are ingrained within our algorithms, what Virginia Eubanks talked about in her book Automating Inequality, and the biases and assumptions that go into the coding and training of those advanced systems, can manifest in everything from facial recognition, to policing, to recommendation engines on travel websites that suggest perhaps you should go to the food bank to catch a meal.

Now, there’s a twist to our Ottawa food bank story, of course. About a week later, Microsoft came out and said that the article had been removed, and that the issue had been identified as human error, not an unsupervised AI. But even with that answer, there are those who are skeptical, because it didn’t happen just once. There were a lot of articles where such weird or incongruous elements showed up. And of course, this being the internet, there were a number of people who kept the receipts.

Now, there’s a host of reasons for what might be happening with these bad articles, some plausible and some slightly less so. It could just be an issue of garbage in, garbage out: the content being scraped to power the AI may be drawing on articles that are satire or from meme sites, and if the information you’re getting on the web is coming from Something Awful or 4chan, then you’re gonna get some dark articles in there. But the other alternative is that it could just be hallucinations, which have been an observed fact with these AIs and large language models; incidents like the one we saw with Loab, which we talked about in an icebreaker last year, are still coming forward in ways that are completely unexpected and out of our control.

That scares us a little bit because we don’t know exactly what it’s going to do. When we look at the AI through that lens, like in the mirror, what it’s reflecting back to us is something we don’t necessarily want to look at, and we think that it could be revealing the darkest aspects of ourselves, and that frightens us a whole lot.

AI is a reflection of our society and ourselves, and if we don’t like what we’re seeing, then that gives us an opportunity to perhaps correct things because AI, truth be told, is really dumb right now. It just shows us what’s gone into building it. But as it gets better, as the algorithms improve, then it may get better at hiding its sources.

And that’s a cause for concern. We’re rapidly reaching a point where we may no longer be able to spot a deepfake or an artificially generated image or voice, and these may be used by all manner of malicious actors. So as we look through our lens at the future of AI, what do we see on our horizon?

References:
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Deleuze, G. (1992). Postscript on the Societies of Control. October, 59, 3–7.

Eubanks, V. (2018). Automating Inequality. Macmillan.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. The New American Library.

Quan-Haase, A. (2015). Technology and Society: Social Networks, Power, and Inequality. Oxford University Press.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Links:
https://arstechnica.com/information-technology/2023/08/microsoft-ai-suggests-food-bank-as-a-cannot-miss-tourist-spot-in-canada/

https://tomglobal.org/about

https://www.makersmakingchange.com/s/

https://arstechnica.com/health/2023/08/ai-powered-brain-implants-help-paralyzed-patients-communicate-faster-than-ever/

https://blog.jim-nielsen.com/2023/temporarily-abled/

https://www.businessinsider.com/microsoft-removes-embarrassing-offensive-ai-assisted-travel-articles-2023-8