Black Boxes and AI

(this was originally published as Implausipod E0028 on February 26, 2024)

https://www.implausipod.com/1935232/episodes/14575421-e0028-black-boxes-and-ai

How does your technology work? Do you have a deep understanding of the tech, or is it effectively a “black box”? And does this even matter? We do a deep dive into the history of the black box, how it’s understood when it comes to science and technology, and what that means for AI-assisted science.


On January 9th, 2024, Rabbit Inc. introduced their R1, a handheld device that would let you get away from using apps on your phone by connecting them all together through the power of AI. The handheld device is aimed at consumers and is about half the size of an iPhone, and as the CEO claims, it is, quote, "the beginning of a new era in human-machine interaction," end quote.

By using what they call a large action model, or LAM, it's supposed to interpret the user's intention and behavior and allow them to use their apps quicker. It's acceleration in a box. But what exactly does that box do? When you look at a new tool from the outside, it may seem odd to trust all your actions to something when you barely know how it works.

But let me ask you, can you tell me how anything you own works? Your car, your phone, your laptop, your furnace, your fridge, anything at all. What makes it run? I mean, we might have some grade-school ideas from a Richard Scarry book or a past episode of How It's Made, but what makes any of those things that we think we know different from an AI device that nobody's ever seen before?

They're all effectively black boxes. And we're going to explore what that means in this episode of the Implausipod.

Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible. And in all this discussion of black boxes, you might have already formed a particular mental image. The most common one is probably that of the airline flight recorder, the device that’s embedded in every modern airplane and becomes the subject of a frantic search in case of an accident.

Now, the thing is, they’re no longer black, they’re rather a bright orange, much like the Rabbit R1 that was demoed. But associating black boxes with the flight recorder isn’t that far off, because its origin was tied to that of the airline industry, specifically in World War II, when the massive amount of flights generated a need to find out what was going on with the planes that were flying continual missions across the English Channel.

Following World War II, the use of black boxes expanded as the industry shifted from military to commercial applications. I mean, the military still used them too; it was useful to find out what was going on with the flights. But as they became embedded within commercial aircraft and were used to test conditions and find out what happened, so that things could be fixed and made safer and more reliable overall, their existence and use became widely known. And by using them to figure out the cause of accidents and increase reliability, they were able to increase trust, to the point that air travel was less dangerous than the drive to the airport in your car, and few, if any, passengers had many qualms left about the safety of the flight.

And while this is the origin of the black box, in other areas it can have a different meaning. In fields like science or engineering or systems theory, it can be something complex that's judged just by its inputs and outputs. Now, that could be anything from something as simple as an integrated circuit or a guitar pedal, to something complex like a computer or your car or furnace or any of those devices we talked about before, but it could also be something super complex like an institution or an organization or the human brain or an AI.
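That inputs-and-outputs framing can be sketched in a few lines of code. This is purely illustrative, and `mystery_device` is a made-up stand-in for any opaque system, not anything from the episode:

```python
# A "black box" in the systems-theory sense: we characterize a system
# purely by its observed input/output behavior, with no access to its
# internals. `mystery_device` is a hypothetical stand-in for any opaque
# system: a chip, a guitar pedal, an institution, an AI.

def mystery_device(x):
    # Pretend we can't read this source; from the outside we only
    # get to feed it inputs and record what comes out.
    return 2 * x + 3

# Probe the box with a range of inputs and log the observations.
observations = {x: mystery_device(x) for x in range(5)}
print(observations)  # {0: 3, 1: 5, 2: 7, 3: 9, 4: 11}
```

All we "know" afterward is the behavior (here, the output climbs by two for each step in the input); the mechanism inside stays hidden.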

I think the best way to describe it is an old New Yorker cartoon that had a couple of scientists in front of a blackboard filled with equations, and in the middle of it, it says, "Then a miracle occurs." It's a good joke. Everyone thinks it's a Far Side cartoon, but it was actually done by Sidney Harris. The point being that right now, in 2024, it looks like that miracle.

It’s called AI.

So how did we get to thinking that AI is a miracle product? I mean, aside from using the LLMs and generative art tools, things like DALL-E and Sora, and seeing the results, well, as we’ve spent the last couple episodes kinda setting up, a lot of this can occur through the mythic representations of it that we often have in pop culture.

And we have lots of choices to choose from. There's lots of representations of AI in media in the first nearly two and a half decades of the 21st century. We can look at movies like Her from 2013, where the virtual assistant of Joaquin Phoenix's character becomes a romantic liaison. Or how Tony Stark's supercomputer Jarvis is represented in the first Iron Man film in 2008.

Or, for a longer, more intimate look, the growth and development of the Machine through the five seasons of the CBS show Person of Interest, from 2011 through 2016. And I'd be remiss if I didn't mention their granddaddy, HAL, from 2001: A Space Odyssey, by Kubrick in 1968. But I think we'll have to return to that one a little bit more in the next episode.

The point being that we have lots of representations of AI or artificial intelligences that are not ambulatory machines, but are actually just embedded within a box. And this is why I’m mentioning these examples specifically, because they’re more directly relevant to our current AI tools that we have access to.

The way that these ones are presented not only shapes the cultural form of them, but our expected patterns of use. And that shaping of technology is key. By treating AI as a black box, something that can take almost anything from us and output something magical, we project a lot of our hopes and fears upon what it might actually be capable of accomplishing.

What we’re seeing with extended use is that the capabilities might be a little bit more limited than originally anticipated. But every time something new gets shown off, like Sora or the rabbit or what have you, then that expectation grows again, and the fears and hopes and dreams return. So because of these different interpretations, we end up effectively putting another black box around the AI technology itself, which to reiterate is still pretty opaque, but it means our interpretation of it is going to be very contextual.

Our interpretation of the technology is going to be very different based on our particular position or our goals, what we’re hoping to do with the technology or what problems we’re looking for it to solve. That’s something we might call interpretive flexibility, and that leads us into another black box, the black box of the social construction of technology, or SCOT.

So SCOT is one of a cluster of theories or models within the field of science and technology studies that aims at a sociological understanding of technology. Originally presented in 1987 by Wiebe Bijker and Trevor Pinch, it was part of a lot of work being done within the field throughout the 80s, 90s, and early 2000s, when I entered grad school.

So if you studied technology, as I did, you'd have to grapple with SCOT and the STS field in general. One of the arguments that Pinch and Bijker were making was that science and technology were both often treated as black boxes within their field of study. Now, they were drawing on earlier scholarship for this.

One of their key sources was Layton, who in 1977 wrote, quote, "What is needed is an understanding of technology from inside, both as a body of knowledge and as a social system. Instead, technology is often treated as a black box whose contents and behavior may be assumed to be common knowledge." End quote. So whether the study was of the field of science, the science itself was irrelevant; it didn't have to be known, it could just be treated as a black box and the theory applied to whatever particular thing was being studied. Or take people studying innovation, who had all the interest in the inputs to innovation but no particular interest in or insight into the technology on its own.

So obviously the studies up to 1987 had a bit of a blind spot in what they were looking at. And Pinch and Bijker are arguing that it's more than just the users and producers: any relevant social group that might be involved with a particular artifact needs to be examined when we're trying to understand what's going on.

Now, their arguments about interpretive flexibility and relevant social groups are just another way of saying "the street finds its own uses for things," the quote from William Gibson. But their main point is that even during the early stages, all these technologies have different groups that are using them in different ways, according to their own needs.

Over time, it kind of becomes rationalized, something that they call closure, and the technology becomes, you know, what we all think of it. We could look at, say, an iPhone, to use one recent example, as being pretty much static now. There are some small, incremental innovations that happen on a regular basis.

But, by and large, the smartphone as it stands is kind of closed. It's just the thing that it is now, and there isn't a lot of innovation happening there anymore. But perhaps I've said too much, and we'll get to the iPhone and the details of that at a later date. The thing is that once the technology becomes closed like that, it returns to being a black box.

It is what we thought it is, you know? And so if you ask somebody what a smartphone is and how it works, those are kind of irrelevant questions. A smartphone is what a smartphone is, and it doesn't really matter how the insides work; its product is its output. It's what it's used for. Now, this model of a black box with respect to technology isn't without its critiques.

Six years after its publication, in 1993, the academic Langdon Winner wrote a critique of SCOT and the works of Pinch and Bijker. It was called "Upon Opening the Black Box and Finding it Empty." Now, Langdon Winner is well known for his 1980 article "Do Artifacts Have Politics?", and I think that text in particular is, like, required reading.

So let's bust that out in a future episode and take a deep dive on it. But in the meantime, the critique that he had with respect to social constructivism falls in four main areas. The first one is the consequences. This is from page 368 of his article, where he says the problem is that they're so focused on what shapes the technology, what brings it into being, that they don't look at anything that happens afterwards: the consequences.

And we can see that with respect to AI, where there's a lot of work on the development, but now people are actually going, hey, what are the impacts of this getting introduced large scale throughout our society? So we can see how our own discourse about technology is actually looking at the impacts, and this is something that was kind of missing from the theoretical point of view back in 1987. Now, I'll argue that there's value in understanding how we came up with a particular technology, with how it's formed, so that you can see those signs again when they happen. And one of the challenges whenever you're studying technology is looking at something that's incipient or under development and being able to pick the next big one.

Well, with AI, we're already past that point. We know it's going to have a massive impact. The question is, what are going to be the consequences of that impact? How big of a crater is that meteorite going to leave? Now, for Winner, a second critique is that SCOT looks at all the people that are involved in the production of a technology, but not necessarily at the groups that are excluded from that production.

For AI, we can look at the tech giants and the CEOs, the people doing a lot to promote and roll out this technology, as well as those companies that are adopting it, but we're often not seeing the impacts on those who are going to be directly affected by the large-scale introduction of AI into our economy.

We saw it a little bit with the Hollywood strikes of 2023, but again, those are the high-profile cases and not the vast majority of people that will be impacted by the deployment of a new technology. And this feeds right into the third critique: that SCOT focuses on certain social groups but misses the larger impact, or even the dynamics of what's going on.

How technological change may impact much wider across our, you know, civilization. And by ignoring these larger-scale social processes, the deeper, as Langdon Winner says, the deeper cultural, intellectual, or economic regions of social choices about technology, these things remain hidden. They remain obfuscated; they remain part of the black box, closed off.

And this ties directly into Winner's fourth critique as well: that when SCOT is looking at a particular technology, it doesn't necessarily make a claim about what it all means. Now, in some cases that's fine, because it's happening in the moment; the technology is dynamic and it's currently under development, like what we're seeing with AI.

But if you're looking at something historical that's been going on for decades and decades, like, oh, the black boxes we mentioned at the beginning, the flight recorders that we started the episode with? That's pretty much a set thing now, and the only question is, you know, when, say, a new accident happens and we have a search for one.

But by and large, that's a set technology. Can't we make an evaluative claim about that, about what it means for us as a society? I mean, there's value in an analysis maintaining some objectivity and distance, but at some point you have to be able to make a claim. Because if you don't, you may just end up providing cover by saying that the construction of a given technology is value-neutral, which is basically what that interpretive flexibility is saying.

Near the end of the paper, in his critique of another scholar by the name of Steve Woolgar, Langdon Winner states, quote, "Power holders who have technological megaprojects in mind could well find comfort in a vision like that now offered by the social constructivists. Unlike the inquiries of previous generations of critical social thinkers, social constructivism provides no solid, systematic standpoint or core of moral concerns from which to criticize or oppose any particular patterns of technical development."

End quote. And to be absolutely clear, the current developments of AI tools around the globe are absolutely technological megaprojects. We discussed this back in episode 12, when we looked at Nick Bostrom's work on superintelligence. So as this global race to develop AI or AGI is taking place, it would serve us well to have a theory of technology that allows us to provide some critique.

Now, that Steve Woolgar that Winner was critiquing had a writing partner back in the seventies, and they started looking at science from an anthropological perspective in their study of laboratory life. He wrote that with Bruno Latour. And Bruno Latour went on to work with another set of theorists who studied technology as a black box, in what was called actor-network theory.

And that had a couple key components that might help us out. Now, the other people involved were like John Law and Michel Callon, and I think we might have mentioned both of them before. But one of the basic things about actor network theory is that it looks at things involved in a given technology symmetrically.

That means it doesn’t matter whether it’s an artifact, or a creature, or a set of documents, or a person, they’re all actors, and they can be looked at through the actions that they have. Latour calls it a sociology of translation. It’s more about the relationships between the various elements within the network rather than the attributes of any one given thing.

So it's the study of power relationships between various types of things. It's what some philosophers would call a flat ontology, but I know as I'm saying those words out loud I'm probably losing, you know, listeners by the droves here. So we'll just keep it simple and state that a person using a tool is going to have normative expectancies about how it works.

Like, they're gonna have some basic assumptions, right? If you grab a hammer, it's gonna have a handle and a head, and depending on its size or its shape or material, that might, you know, determine its use. It might also have some affordances that suggest how it could be used. But generally that assemblage, that conjunction of the hammer and the user (I don't know, we'll call him Hammer Guy) is going to be different than a guy without a hammer, right? We're going to say, hey, Hammer Guy, put some nails in that board there, put that thing together, rather than, you know, please hammer, don't hurt him, or whatever. All right, I might be recording this too late at night, but the point is that people with tools will have expectations about how they get used, and some of that goes into how those tools are constructed. That can be shaped by the construction of the technology, but it can also be shaped by our relation to that technology.

And that's what we're seeing with AI, as we argued way back in episode 12: that, you know, AI is an assistive technology. It does allow for us to do certain things and extends our reach in certain areas. But here's the problem. Generally, we can see what kind of condition the hammer's in, and we can have a good idea of how it's going to work for us, right?

But we can't say that with AI. We can maybe trust the hammer, or the tools that we've become accustomed to using through practice and trial and error. But AI is both too new and too opaque. The black box is so dark that we really don't know what's going on. And while we might put in the inputs, we can't trust the output.

And that brings us to the last part of our story.

In the previous section, the authors that we were mentioning, Latour and Woolgar, like Winner, Pinch, and Bijker, are key figures not just in the study of technology, but also in the philosophy of science. Latour and Woolgar's Laboratory Life, from 1979, is a key text; it really sent shockwaves through the whole study of science and is foundational within that field.

And part of that is recognizing, even from a cursory glance once you start looking at science from an anthropological point of view, the unique relationship that scientists have with their instruments. The author Inkeri Koskinen sums up a lot of this in an article from 2023, and they term the relationship that scientists have with their tools the necessary trust view.

Quote: "Trust is necessary because collective knowledge production is characterized by relationships of epistemic dependence. Not everything scientists do can be double-checked. Scientific collaborations are in practice possible only if their members accept each other's contributions without such checks. Not only does a scientist have to rely on the skills of their colleagues, but they must also trust that the colleagues are honest and will not betray them, for instance, by intentionally or recklessly breaching the standards of practice accepted in the field, or by plagiarizing them or someone else."

End quote. And we could probably all think of examples where this relationship of trust is breached. But the point is that science, as it normally operates, relies on relative levels of trust between the actors that are involved, in this case scientists and their tools as well. And that's embedded in practice throughout science: that idea of peer review, or of reproducibility or verifiability.

It's part of the whole process. But the challenge, especially for large projects, is that you can't know how everything works. So you're dependent in some way on the materials or products or tools that you're using having been verified or checked by at least somebody else that you have that trust with. And this trust is the same that a mountain climber might have in their tools, or an airline pilot might have in their instruments.

You know, trust, but verify, because your life might depend on it. And that brings us all the way around to the black boxes that we started the discussion with. Now, scientists' lives might not depend on that trust the same way that it would with airline pilots and mountain climbers, but, you know, if they're working with dangerous materials, it absolutely does, because, chemicals being what they are, we've all seen some Mythbusters episodes where things go foosh rather rapidly.

But for most scientists, Koskinen notes, this trust in their instruments is really a kind of quasi-trust, in that they have normative expectations about how the tools they use are going to function. And moreover, this quasi-trust is based on rational expectations; it's rationally grounded.

And this brings us back full circle. How does your AI work? Can you trust it? Is that trust rationally grounded? Now, this has been an ongoing issue in the study of science for a while now, as computer simulations and related tools have become a bigger and bigger part of the way science is conducted, especially in certain disciplines.

Now, the philosopher Paul Humphreys' argument is that, quote, computational processes have already become so fast and complex that it is beyond our human cognitive capabilities to understand their details, end quote. Basically, computationally intensive science is more reliant on the tools than ever before, and those tools are what he calls epistemically opaque.

That means it's impossible to know all the elements of the process that go into the knowledge production. So this is becoming a challenge for the way science is conducted, and this goes back well before the release of ChatGPT. Much of the research that Koskinen is citing comes from the 2010s: research that's heavily reliant on machine learning, or on, say, automatic image classifiers. Fields like astronomy and biology have been finding challenges in the use of these tools.

Now, some are arguing that even though those tools are opaque, they're black-boxed, they can be relied on, and their use is justified, because we can work on the processes surrounding them. They can be tested, verified, and validated, and thus a chain of reliability can be established. This is something that some authors call computational reliabilism, which is a bit of a mouthful for me to say, but it basically says that the use of the tools is grounded through validation.
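As a toy sketch of what that validation-based grounding might look like in code (everything here is hypothetical: the classifier, the data, and the threshold are all invented for illustration):

```python
# Computational reliabilism in miniature: we can't inspect the opaque
# tool's internals, but we ground trust in it by checking its outputs
# against a reference set with known-correct answers, and accepting it
# only if it performs within the field's agreed boundaries.

def opaque_classifier(value):
    # Stand-in for a black-boxed model; imagine we cannot read this.
    return "positive" if value > 0.5 else "negative"

# A small labeled validation set (hypothetical data).
validation_set = [
    (0.9, "positive"), (0.2, "negative"), (0.7, "positive"),
    (0.4, "negative"), (0.8, "positive"),
]

correct = sum(1 for x, label in validation_set
              if opaque_classifier(x) == label)
accuracy = correct / len(validation_set)

# The acceptable boundary is set by the field, not by the tool itself.
THRESHOLD = 0.95
is_reliable = accuracy >= THRESHOLD
print(f"accuracy={accuracy:.2f}, reliable={is_reliable}")
```

The trust here attaches to the surrounding process (the reference data and the threshold), not to any understanding of the mechanism inside.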

Basically, it's performing within acceptable boundaries for whatever that field is. And this gets at the idea of thinking of the scientist as not just the person themselves, but also their tools. They're an extended agent, the same as, you know, the dude with the hammer that we discussed earlier. Or Chainsaw Man.

You can think about how they're one and the same. One of the challenges there is that when a scientist is familiar with the tool, they might not be checking it constantly, you know, so it might start pushing out some weird results. So it's hard to reconcile that with the trust we have in the combined scientist using AI.

They become, effectively, a black box. And this issue is by no means resolved. It's still early days, and it's changing constantly; weekly, it seems, sometimes. And to show what some of the impacts of AI might be, I'll take you to a 1998 paper by Martin Weitzman. Now, this is in economics, but it's a paper titled "Recombinant Growth."

And this isn't the last paper in my database that mentions black boxes, but it is one of them. What Weitzman is arguing is that when we're looking at innovation, R&D, or knowledge production, it is often treated as a black box. And if we look at how new ideas are generated, one way is through the combination of various elements that already exist.

If AI tools can take a much larger set of existing knowledge, far more than any one person or even teams of people can bring together at any one point in time, and put those together in new ways, then the ability to come up with new ideas far exceeds what exists today. This directly challenges a lot of the current arguments going on about AI and creativity, arguments that completely miss the point of what creativity is and how it operates.

Weitzman states that new ideas arise out of existing ideas in some kind of cumulative interactive process. And we know that there's a lot of stuff out there that we've never tried before, so the field of possibilities is exceedingly vast. And the future of AI-assisted science could potentially lead to some fantastic discoveries.
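Weitzman's combinatorial point can be made concrete with some toy arithmetic. These numbers are illustrative only, not his actual model: if new ideas come from pairing existing ones, the space of untried combinations grows far faster than the stock of ideas itself.

```python
# Recombinant growth in miniature: the number of ways to pair up
# n existing ideas is "n choose 2" = n * (n - 1) / 2, which grows
# roughly with the square of the idea stock.
from math import comb

for n_ideas in (10, 100, 1_000, 10_000):
    pairs = comb(n_ideas, 2)  # possible two-idea combinations
    print(f"{n_ideas:>6} ideas -> {pairs:>12,} possible pairings")
```

Going from 10 ideas to 10,000 multiplies the stock by a thousand, but multiplies the possible pairings by over a million, which is the intuition behind the "exceedingly vast" field of possibilities.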

But we're going to need to come to terms with how we relate to the black box of scientist and AI tool. And when it comes to AI, our relationship to our tools has not always been cordial. In our imagination, in everything from Terminator to The Matrix to Dune, it always seems to come down to violence.

So in our next episode, we’re going to look into that, into why it always comes down to a Butlerian Jihad.

Once again, thanks for joining us here on the Implausipod. I'm your host, Dr. Implausible, and the research, editing, mixing, and writing have been by me. If you have any questions or comments, or there are elements you'd like us to go into additional detail on, please feel free to contact the show at drimplausible at implausipod dot com. And if you made it this far, you're awesome. Thank you. A brief request: there's no advertisement, no cost for this show, but it only grows through word of mouth. So, if you like this show, share it with a friend, or mention it elsewhere on social media. We'd appreciate that so much. Until next time, it's been fantastic.

Take care, have fun.

Bibliography:
Bijker, W., Hughes, T., & Pinch, T. (Eds.). (1987). The Social Construction of Technological Systems. The MIT Press. 

Koskinen, I. (2023). We Have No Satisfactory Social Epistemology of AI-Based Science. Social Epistemology, 0(0), 1–18. https://doi.org/10.1080/02691728.2023.2286253 

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press. 

Latour, B., & Woolgar, S. (1979). Laboratory Life: The construction of scientific facts. Sage Publications. 

Pierce, D. (2024, January 9). The Rabbit R1 is an AI-powered gadget that can use your apps for you. The Verge. https://www.theverge.com/2024/1/9/24030667/rabbit-r1-ai-action-model-price-release-date 

rabbit—Keynote. (n.d.). Retrieved February 25, 2024, from https://www.rabbit.tech/keynote 

Sutter, P. (2023, October 4). AI is already helping astronomers make incredible discoveries. Here’s how. Space.Com. https://www.space.com/astronomy-research-ai-future 

Weitzman, M. L. (1998). Recombinant Growth. The Quarterly Journal of Economics, 113(2), 331–360. https://doi.org/10.1162/003355398555595 

Winner, L. (1993). Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology. Science Technology & Human Values, 18(3), 362–378. 

Upcoming Trends

With CES wrapping up in Las Vegas this weekend, I’ve been seeing lots of reports of the new technologies that have been on display. I’ve never been, but I think it might be something to take in one of these years.

I want to find a decent article, and cover my commentary of it, but I haven’t quite seen one I want to use yet.

The Verge has some decent coverage here:

https://www.theverge.com/24026787/ces-best-of-samsung-ballie-lg-tv

Which talks about the new transparent TV from LG, and I think that may be remarkable enough to talk about on its own.

But it’s been a long cold day, with the outside temp staying below -30 C for most of the day, and I’ve just been trying to keep warm. I’ll follow-up with a full write-up (and perhaps an episode if I’m inspired), and we’ll see what comes of it.

Implausipod E0017 – Not a Techno-optimist

Introduction:

If you had asked me on October 15th, 2023, how I would describe myself, I might have said I was a techno-optimist. But on October 16th, Marc Andreessen, the co-founder of Netscape, released the Techno-Optimist Manifesto, and I can no longer say that I'm a techno-optimist.

In this episode we’ll walk through the quick scan of the document, and the red flags that it raised while looking through it, and where some of the problems lay in the underlying assumptions of the manifesto.

https://www.implausipod.com/1935232/13859916-implausipod-e0017-not-a-techno-optimist


Transcript:

Technology. If you’ve listened to this podcast for more than a few episodes, you realize that that’s one of the underlying themes here, that I’m interested in technology, how it appears in popular culture, how it’s developed, it’s what I’ve researched, written about, taught about, and I think about it a lot.

I think about its promise and potential and what it can offer humanity. And if you had asked me on October 15th, 2023, how I would describe myself, I might have said I was a techno-optimist. But on October 16th, Marc Andreessen, the co-founder of Netscape, released the Techno-Optimist Manifesto, and I can no longer say that I'm a techno-optimist.

I’ll explain why in this episode of the Implausipod.

When the manifesto was originally published, I gave it a quick scan, and that scan raised a number of red flags. And throughout the rest of this episode, we’ll look at those red flags as if they were laid down by a surveyor on the landscape. But before we do, I want to go into the value of giving something a quick scan, of jotting down your initial impressions. 

I’m going to employ another surveyor’s tool, one of triangulation, of being able to hone in on the target by looking at it from different angles and directions, from different points of view. Because, as we talked about a few episodes ago, that empathetic view of technology requires that triangulation; of being able to step outside of one’s own perspective and view it from the perspective of somebody else.  And this can be done for both things we find positive, and things that we find negative as well.

So as is tradition, we’re going to talk about something by chasing down a couple tangents first before we get back to those red flags. But bear with me, it’ll all kind of come together at the end.

So when it comes to the techno-optimist manifesto, the thing that really struck me was the ability to identify those red flags, to spot them, to pull them out of the larger text. (And it was a 5,200-word text; there was a lot going on in there.) But I think identifying these red flags speaks to something larger: the ability of experts, or people heavily involved in a field, to identify key elements or themes and figure out where a problem might lie. It doesn't matter which field it's in, whether it's a mechanic or medical doctor, academic or art historian.

And if that last one rings a bell, it's because there's a source for it. In his 2005 book Blink, Malcolm Gladwell talked about the process by which an art historian was able to evaluate a statue that was brought into the Getty Museum. At a glance, the evaluators were able to identify key features that led them to believe that it was a forgery, that the statue in question had never actually been in the ground and subsequently recovered. It's the ability to spot the minutiae of a given artifact or piece of art, and through long experience, knowledge, and exposure, be able to determine its authenticity, the validity of a piece of work. And again, this isn't just an academic thing. It goes across so many fields, crafts, trades, and practices. It's a key, essential element of them.

And to link it back to the ongoing discussions about AI, it’s one of those things that AI generated texts or artifacts often lack. It’s that authenticity. We can sense that there’s something off about the piece. As the saying goes, we can tell by the pixels. So this assemblage of tools that we have, the skills and knowledge and practice and experience, all come together to form what we might call a set of heuristics.

It’s similar to what Kenneth Burke calls equipment for living, where he refers to how literature and proverbs function in a way similar to the memes we talked about last episode. These are the tools that we can use to judge something, and how we come to an assumption about what we’re seeing in front of us. We do this for pretty much everything. But when it’s something that’s particular to our skill or our particular area, then we can make some judgments about it.

And when it comes to those particular topics, perhaps we have a duty to communicate that information, to share that knowledge with the world around us. So that’s what we’re going to get into here with the techno optimist discussion and the red flags, because I’ve read a lot of the texts that Andreessen cites within the manifesto, but obviously have a radically different worldview, and we can discuss why we might come up with those radically different interpretations at the end.

But before we do, I want to throw one more point into the mix, one more element or angle for our triangulation on the topic at hand, something I like to call the Forest Hypothesis. Now, this is different from the Dark Forest Hypothesis, where we, as a species, are tiny mice in a universe filled with predators lurking in the darkness (which we’ll touch on next episode). Rather, the Forest Hypothesis relates back to the Blink idea, in that it’s a way of evaluating knowledge, of evaluating expertise. The Forest Hypothesis basically asks how much you can talk about on a given subject if you’re out in the forest, away from any cell phone signal, Wikipedia, handheld device, book, or any other form of external knowledge, anything extrinsic to yourself.

And it’s a good test. There are people that can expound endlessly on stuff that they know about, and there are those who may be less comfortable discussing things online, or in an academic setting, but who, when push comes to shove, actually do know things, and don’t have to just reach for the Wikipedia on their phone. Now, the analogue to this is the bar talk phenomenon that we used to have, back when no one had access to phones and we’d get into discussions about who could recall what. We could call it the Cliff Clavin Corollary, right, where we’re not necessarily sure in the moment, but we can use those rhetorical strategies to ask: “eh, does that sound right to you, or are you just, like, making that up?”

And in the interest of full disclosure, much of the rest of the episode about the red flags came from two conversations that I had with different sets of colleagues about the techno optimist manifesto and the material espoused within.  So much of the rest of this episode is going to be me recreating that discussion and talk off the top of my head as best I can. I’ll refer back to specific elements, but without further ado: why I am not a techno optimist.

So, as stated, the Techno Optimist Manifesto was published on the morning of October 16th, 2023. During the day it started making the rounds on social media, on Mastodon and elsewhere, and I saw numerous links to it, so I thought I’d dive in and give it a quick look. There have been articles written about it since, in the intervening ten days or so, but I want to capture the thoughts I had at the time.

I had jotted them down and shared them in conversations with colleagues, as stated. So, flipping through the manifesto, I gave it a high level skim, and a couple of things started to pop out. These were the red flags that became a cause for concern. The first of those was some of the works cited. Now, one of those heuristics that we talked about earlier, one you can use whenever you’re evaluating an article, is to read it from the front, read it from the back, and then read the meat of the article itself. That means taking a look at the abstract or the introduction, and then taking a look at some of the authors they’re citing, because if you’re familiar with them, that can give you an idea of where the conversation’s going to go.

But with respect to the manifesto itself, early on in the work Marc Andreessen starts referring to a number of economists who were influences on the work he was producing. The first one he mentions is Paul Collier, who wrote an influential book called The Bottom Billion, which talks about development in the Global South. There’s nothing really wrong there; he goes into some interesting information about what’s happening in developing economies around the world.

But then Andreessen goes on to cite Friedrich Hayek and Milton Friedman as influences. Now, at a glance, these are well known and respected economists, Hayek in particular for his work on the knowledge problem. But both of them were influential in other ways, and drove the policy of the Thatcher governments in the UK in the 70s and 80s, as well as the Reagan administrations in the U.S. in the 1980s. So they had a very neoliberal bent, and a lot of the underlying ideology from their economic works is what we still see in policy circles today. Taking a look at the state of the world and the economic system, we may want to question those underlying influences, and seeing them in this manifesto raises some red flags. Now, some economists, like Tyler Cowen, would recently include both Hayek and Friedman as part of the greats of all time, and again, I’m not disputing this: they have a massive influence. But those influences can have outsized effects for millions and billions of people across the world.

Some of the other elements that showed up as red flags in Marc Andreessen’s work were in a particular section of the manifesto. And just a quick aside: whenever you declare something as a manifesto, that in and of itself is a red flag, a cause to look at the document from a particular point of view, to go through it with that fine tooth comb.

A manifesto can be seen as an operating manual, like “this is what we’re working with; these are our stated assumptions,” and sometimes getting that down on paper is fine. It gives you a target that you can refer back to. But when we see a manifesto, we also want to look at it with a greater degree of incredulity, to dig a little deeper on what’s included therein.

So in the manifesto, there’s this section of beliefs that Marc Andreessen goes through, where each sentence starts “we believe that dot dot dot”. And beliefs are fine; there’s nothing wrong with having beliefs. But it’s when we have beliefs that are contrary to evidence that it can become a problem. And in the belief section, you see a lot of these statements where the belief runs contrary to the evidence.

One of the things he says they believe is that energy should be too cheap to meter, and that if you have widespread access to energy that’s too cheap to meter, then that can be a net societal good. And by and large, I agree. Now, the method they propose to get there is part of the problem. They say that through nuclear fission, they will be able to achieve energy that’s too cheap to meter. But nuclear fission alone will not get there. Aside from the massive environmental costs of the fission plants that currently exist (and I’m referring here to an article on phys dot org from 2011 that I still remember), at the time there were around 440 existing nuclear fission reactors, supplying a portion of the world’s energy. To supply the full energy demand through nothing but nuclear, we would need something like 15,000 additional reactors, with all the associated costs: the fissile material, the environmental costs, and they’d still be putting out the heat and steam that nuclear reactors release. So there would still be a massive environmental cost from transitioning to that source, and it would require building something like ten reactors a day, every day, for roughly half a decade to get close to those numbers.
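As a rough back-of-the-envelope sketch of that arithmetic (using the loose round figures quoted above, which are the episode’s rough numbers, not precise energy data):

```python
# Back-of-the-envelope check on the reactor build-out numbers above.
# These are rough figures from the episode, not precise energy data.
additional_needed = 15_000   # extra fission reactors for full world demand
build_years = 4              # roughly "half a decade"

reactors_per_day = additional_needed / (build_years * 365)
print(f"{reactors_per_day:.1f} reactors per day")  # prints "10.3 reactors per day"
```

Which lines up with the “ten reactors a day, every day” figure: a build rate far beyond anything the industry has ever managed.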

There’s no way for us, as a society, to build up that kind of capacity through nuclear fission alone, yet Andreessen states that it would allow us to provide energy too cheap to meter and move away from an oil and gas economy. The actual path runs through more passive elements like solar, wind, and alternative energy sources; nuclear fission will not get there. And using nuclear fission to accelerate us into nuclear fusion is also a problem, in that nuclear fusion has always been some mythical target 20 or 30 years down the road, and much like AGI, it seems to always be off in the future. We’re never quite getting to that point. So citing that as a goal is necessarily a bit of a problem.

We’re barely getting started and we’re already three flags in. Now, the next one is that in this area, they also self-identify as apex predators. Earlier on, he draws a comparison to sharks in nature: move or die, and that ties into this apex predator bit later on. He says that they are predators, that they are able to make the lightning “work for us”. It moves directly from there to a return to the “great man theory”, lionizing the technologists and industrialists who came before. Hmmm. Really? Do tell. Whenever you’re self-identifying as a predator, that’s just a massive red flag, a warning sign.

And I want to be clear, that there are aspirational elements to the work, it’s just that the aspirational elements are like flowers in a garden filled with these bright red flags.  

I can get behind the aspirational elements, but even some of those have a massive disconnect with reality. They see the earth as having a carrying capacity of some 50 billion humans. We can barely manage with the 8 billion that we currently have, who are massively overusing the resources available, to the tune of requiring three earths’ worth at current consumption rates. And while the earth may be able to support 50 billion humans, that would require a massive change in organization and resource usage, and would result in horrible inequalities across massive portions of that 50 billion, with a very select few having anything close to the living standards we have now, or that are seen across much of the OECD nations, let alone the globe as a whole.

We see a number of other aspirational elements, other flowers in the garden, in quotes from Richard Feynman, Buckminster Fuller, and others, with odes to the transformative power of science to enlighten us and provide answers to the mysteries of the world around us.  But this also comes hand in hand with a de-legitimizing of expertise, using the Feynman quote to propose a return to the “actual scientific method” using “actual information”.  Whenever we start seeing echoes of the No True Scotsman fallacy in a text, making distinctions about what counts, once again, red flag.  Actual information? Who decides?  Isn’t that what science is about?

And from there, Andreessen leans heavily into accelerationism. And again, this is a massive flag for me personally: whenever someone self-identifies as an accelerationist, I start to seriously question everything they’re talking about.

Accelerationism is basically the belief that what capitalism really needs is for the gas pedal to be pressed all the way down to the floor, so that we can actually hit “escape velocity”, quote-unquote, and move quicker along the curve towards the singularity or whatever.

If you consider technological development as a growth curve, then the only way to get higher up it is to go faster. Now, if you look at Geoffrey Moore’s work on innovation in Crossing the Chasm, which is an adaptation of Rogers’ work on the diffusion of innovations, on the innovation adoption curve, there’s a point where any new technology will succeed or fail, at a point low down on the slope. If I do the video version of this, we’ll put it on the screen, but basically at the low end of the slope there’s this little gap, which Moore calls the chasm. The chasm is where the innovators and early adopters have picked up the new tech, and then you’re trying to take that product, that technological tool or artifact, out to the larger market, to get widespread adoption, and see if it flies. We’ve seen it with things like virtual reality, DVDs, home video recording, smartphones, whatever. There’s a point where the product might be under development for a while, and then the larger population says, okay, we can use this, and they adopt it. And then it sees widespread distribution.

Accelerationism views that as a challenge, for any particular tech and for tech more generally. And like we said, it holds that capitalism just needs more gas, more fuel. The problem with it is that obviously you can’t necessarily tell what’s going to take off, what’s going to get adopted. You can’t necessarily make “fetch” happen, even if you’re a billionaire, and there are a lot of problems when you start going that fast with no brakes. If the road starts to swerve ahead of you, you might not be able to change direction in time, and this is where the other side of accelerationism comes in.

You see, accelerationism isn’t necessarily something that’s either left or right; there are accelerationists on both sides of the political spectrum. There are accelerationists on the right, like the pro-capitalist, pro-tech version seen here. There are other accelerationists on the right as well, and you can go check out the Wikipedia page to see what other groups are associated with it. And there are accelerationists on the left, who view capitalism as inherently unstable and want to see it go faster, because that will expose the iniquities in the system and help it go off the rails, so something better can be rebuilt.

You see this in the works of Slavoj Zizek and other academics on the left. Zizek himself is kind of… um, mid, I guess, but you’ll see it amongst those critics of capitalism who also want it to go faster. There’s a problem with both these perspectives, and it’s the problem I have with accelerationism in general: it is the perspective of a tiny elite minority, and it would result in massive amounts of pain for millions and billions of people while that acceleration resolves itself.

While things are going faster, more fuel is getting added to the system; the climatic change we’re seeing is the result of more fuel literally being added to the system. And the disruption itself would cause starvation, job loss, and untold pain and suffering if the current systems we depend on are upended. So, from my perspective of causing the least pain, of not wanting to see humanity as a whole suffer, accelerationism is necessarily a bad thing. Let’s find a different way.

Now, this is about the point where the Techno Optimist Manifesto gets into the list of enemies as well, and while that may or may not be typical for a manifesto, I think whenever you’re writing something and you have an enemies list, that’s a warning flag in and of itself.

Now, amongst the enemies for the technological optimist are things like sustainability, sustainable development, social responsibility, trust and safety, tech ethics, de-growth, and others besides. And when you start to look at who your enemies are, what you’re against, then you start wondering what you’re really for, right? So the concern here is that any kind of regulation or responsible governance is seen as an enemy, as something to be combated, to be avoided, to be dealt with. And aside from being a massive red flag, it reveals some of the underlying ethos as well.

This is what they’re against: regulation, things that were put in place for safety, for ethical use, for management, for sustainability, for our continued existence on the planet. And I think that is, again, a massive warning sign. And from there we get to the last one.

The last red flag is a quote that comes up near the end. Now, the quote is uncited, unattributed. We don’t see the conviction to actually state who it’s from, because that might actually make it too obvious. The quote is as follows:

“Beauty exists only in struggle. There is no masterpiece that has not an aggressive character. Technology must be a violent assault on the forces of the unknown, to force them to bow before man.”

That quote is from Filippo Marinetti, from 1909, from the Futurist Manifesto which he wrote. If you’re not familiar with Marinetti, here’s the lowdown, and it’ll highlight the problem. Like I said, it was uncited, but if you know who Marinetti is and the story, then that is the biggest warning flag in the entire document, out of the entire list of warning flags that we’ve already seen. Marinetti, of course, is the founder of the Futurist school in Italy and wrote the Futurist Manifesto in 1909.

Here’s some of the elements of Futurism: technology, growth, industry, modernization. Okay, but also these other elements: speed, violence, destruction of museums, war as a necessity for purification… Hmmm. Now, Marinetti would go on to get into politics in Italy a few years later, and work with another group of Italians on another manifesto in 1919. That, of course, is the Fascist Manifesto, which he co-authored. So there’s a direct lineage from Marinetti’s work, elements of which appear in that later manifesto and in the movements that adopted it as well.

So take all these red flags together: a list of neoliberal economists, denialism and beliefs contrary to facts while downplaying education, self-identifying as predators, accelerationism, lists of enemies, and citing proto-fascist literature. All of this combined is a massive red flag, and it’s why I am not a techno optimist.

So, that being said, then how would I identify?

And that’s a fantastic question, because judging on the words alone, “Technical Optimist” is pretty close to where I am. I believe that technology can be used as an assistive tool, as we’ve stated prior, and that it can help people out, and is generally an extension of man, that we can use it for adding to our capabilities.

So I might be a techno-optimist, or at least I was until October 16th. Other terms I’ve seen floating around that I could self-identify as include things like techno revivalist, which is close, but not quite. That feels like it ties more into like experimental archeology, where we try and recreate the past or use methods of the past in the modern era to kind of figure out what they were doing. It’s a fascinating field. We should talk about it at some time, but that isn’t really where I am.

Solarpunk isn’t quite where I’m at either, or cyberpunk for that matter. I don’t think I really fit within any of the punk genres. I’m pretty straight-laced. I’m a basic B, to be honest.

Taking the opposite stance doesn’t work either; I’m not a techno-pessimist. I’m generally hopeful for the opportunities that new technologies can bring. I think that’s part of the challenge: there isn’t a good label for where I sit, aside from what is now defined as a techno optimist. And I don’t think the term can be reclaimed, because, as I went through the number of red flags there, the well is really, well… well and truly poisoned. With the breadth of reach that that particular manifesto got, and the reporting it saw in multiple areas, I don’t think the term would ever come back, even though, much like Michael Bolton in Office Space, why should I change if they’re the ones that suck, right?

So I think techno-optimist as a term is where it is, and that will not change. But I am almost anything but that. And why? Well, part of it I think is just exposure and upbringing.  As I said, I’ve had a significantly different path. One that doesn’t lead through Silicon Valley, one that’s not even in the same solar system as a billionaire.

When you have to go about the business of daily life, when you’re almost middle class, you’re going to have a very different view of technology and its uses, and how it can be used for exploitation as well. And I think that comes through in some of our work.

So, to tie this back to the beginning, to close the loop on why we had to triangulate with examples before we could assay the manifesto: if exposure and experience are what lead one to be able to make quick judgments about a particular work and see where the references are coming from, they also can allow one to see some of the harms that might come about from exposure to those statements as well.  And that’s really what we’re trying to do: to bring some of those associations to light through this particular podcast episode.

So as I still search for a term: Retro Tech Enthusiast, just Tech Enthusiast perhaps, media historian, media archaeologist, etc. I’ll keep working on it. And once we figure it out – and the figuring it out is what I think is going to be the journey of this podcast as a whole – once we figure it out, I’ll let you know.

But if you have any great suggestions in the meantime, reach out and let me know at drimplausible at implausipod. com or on the implausi dot blog. I’ll see you around. Take care.

Implausipod E015 – EEE, Embrace, Extend, Extinguish

EEE, or Embrace, Extend, Extinguish, has been making the rounds again in 2023 as a number of Silicon Valley tech companies have come under the spotlight for their business practices, and some striking similarities are emerging with a strategy outlined by Microsoft in an internal memo back in the 1990s. Everything old in tech is new again.

Transcript

In 1999, Judge Thomas Penfield Jackson of the U.S. District Court for the District of Columbia issued his findings in the case of United States v. Microsoft Corp., the antitrust suit brought by the government against the tech giant over allegations that it was using its power to bundle the browser with the Windows operating system, which constituted an abuse of its monopoly position within the desktop computer market.

During the course of the trial, it was revealed that Microsoft had an internal policy of embrace, extend, and innovate. But witness Steven McGeady revealed that privately, Microsoft executives referred to this as embrace, extend, extinguish, with the goal of marginalizing or eliminating direct competition. Other tech companies started taking notes for use in the 21st century. Let’s talk about Triple E in this week’s episode of the ImplausiPod.

Welcome to the ImplausiPod, a podcast about the intersection of art, technology, and popular culture. I’m your host, Dr. Implausible, and since we came back from hiatus with episode eight, we’ve mentioned EEE a few times in relation to things like the Fediverse, so I thought there was no better time than now to get caught up.

First off, the reason why a case from the 90s still matters in 2023 is that it never really went away, and here and now we’re starting to see some more signs of it with some big players, both new and old. Potential examples in 2023 include Facebook, Google, and again Microsoft, and it may affect things that you use on a daily basis.

Let’s cover off the main points: what is Embrace, Extend, Extinguish, and what does it mean for computing and the internet? EEE, or Triple E. That’s right, this episode is all about the game, and the game is follow the leader. Anywho, Triple E was an internal policy pursued by Microsoft in the 90s with relation to its competition in a number of key markets. It was first revealed during the antitrust case that I mentioned in the open, where an internal memo brought into evidence showed that they referred to the strategy as Embrace, Extend, and Innovate. This was part of a number of texts submitted into evidence, including emails and quotes from Microsoft executives and others, like Steven McGeady, at the time a VP at Intel.

During testimony at the trial, McGeady revealed that Microsoft had referred to it internally as Extinguish. Now, these documents are from the antitrust case, and are separate from another set of docs, collectively referred to as the Halloween documents, which were leaked to Eric S. Raymond and detailed Microsoft’s attitudes and plans regarding Linux and open-source software. Those show that Microsoft was still aggressive against competition, but had to use a different approach due to the distributed and non-commercial nature of the FOSS community. Here, they pursued tactics like the deployment of FUD (fear, uncertainty, and doubt), or announcing vaporware products: stuff that would compete with a given product if it came to market, but that they had no intention of ever actually making.

They’d also engage in the practice of extending protocols and developing new ones, and de-commoditizing existing protocols in order to crater the market for stuff that was running on them. And from these latter documents, we can better see what their corporate strategy goals were. It was a set of social and policy actions which they used to maintain their market position against other vendors, who often had better technological solutions. It’s similar to what we talked about in the Endless September episode, where AOL had a technically inferior product but was able to compete on presence in the marketplace, with the ubiquitous floppy disks and CD coasters and a streamlined user experience. This was one of the reasons the case was so important.

By using their market size to shut out other vendors from the market, they were stifling innovation and preventing competition. And this is something that raised eyebrows, even back in the 90s. With the original case, Microsoft ran afoul of the Sherman Antitrust Act. It was a business-to-business crime, B2B, so when the afflicted parties petitioned the U.S. government about the impacts, and concerns were raised about the lack of competing alternatives, they, the government, eventually took action.

As a reminder, this was before smartphones were a thing and the market shifted. Apple had a tiny fraction of the desktop market, around 3 percent in 1999; Linux was very niche; and other operating systems mostly found use within specific corporate use cases, with a tiny user base compared to Windows as well. All told, Microsoft was on about 95 percent of all desktops and laptops sold, and this number was actually growing through the Y2K period up to the dot com crash.

And the reason we’re bringing it up here again in 2023 is that apparently everything old in tech is new again. There’s been the rollout of some new apps, programs, and tools, and there’s a number of court cases actually taking place right now in the fall of 2023 involving major tech players that you’re not hearing about because of other criminal enterprises currently in the news.

So I’m going to take a moment to cover each of them in turn and how they relate back to Triple E, and cover some of the theoretical background while we’re doing this as well. And the first one we want to talk about, of course, is the one that started this whole thing. Threads, the Twitter-like communication app launched by Meta, née Facebook, under their Instagram brand, was made available to users on July 5th, 2023.

Now, prior to its launch, there had been rumors of its development. In an article on TheVerge.com on June 8th, Alex Heath went into the details of the app, which at the time was called “Project 92”. The main rumor was that it would be using something called the ActivityPub protocol, which, as we’ve discussed plenty of times, is the thing that’s powering Mastodon and the rest of the Fediverse. This rumor caused a lot of consternation, especially within the Fediverse at large, mostly due to Meta’s past track record, which hasn’t been great. If you’re wondering what kind of things might be involved, just do a web search for Cambridge Analytica, or for the Rohingya in Myanmar. (Don’t search for it on any Meta-owned properties, because you won’t find much.) For those reasons and more, a number of the people already on the Fediverse, the early adopters of the protocol, were engaged with it precisely because it was explicitly not a Facebook property.

So when a post was made on June 18th by an admin from one of the larger instances on Mastodon saying that, yes, they’d been in discussion with Meta regarding the ActivityPub protocol and the possible integration that would take place, there was a lot of uproar and consternation, and one of the things that got mentioned a lot during the ensuing discussion was the idea of Triple E. Admins of some other instances, and some users, said they were going to pre-block Meta because they were concerned that any connection with them might allow leverage, or allow their information to be shared.

You know, they’d be turned into a commodity, much like we’ve discussed earlier. There are those online who don’t want any part of Facebook. And the other concern was that Facebook would go full Triple E on the ActivityPub protocol: embrace it, by letting Threads link to it directly; extend it in some Meta-friendly way, probably by allowing advertising or something similar; and then extinguish it at some unspecified point in the future as they roll on to a new program or platform, in much the same way that we saw as standard operating procedure for Microsoft back in the 90s. In so doing, the people that had found a home away from Meta, away from Facebook, would lose their online homes, so you can understand their concerns. But there’s a related set of concerns tied directly to the Triple E phenomenon, and that is the notion of path dependency and vendor lock-in.

There’s an old story, we might call it a meme, that does the rounds on the internet every six to nine months or so. Stop me if you’ve heard it. It goes: the size of the space shuttle’s boosters was determined by the width of a Roman chariot, or two donkeys, or something like that. I’ll let you look it up; there’s a couple of recent examples. Also, I’m not going to stop even if you’ve heard it.

Here’s the full story. As it goes, the diameter of the space shuttle boosters is the size it is because they had to be shipped via rail cross-country from Utah to Florida. Standard gauge railroads in North America are 4 foot 8.5 inches. The standard rail gauge is what it is because the Americans bought their early equipment from the English, who used a similar gauge for theirs. And that was fixed because the English tram manufacturers designed their wagons to fit the roads of the English countryside. And those were set at that distance because of the Roman chariots that had driven on the roads millennia before and had worn grooves in them, which had then been used for generations of Englishmen. So the width of the train tracks was directly influenced by the width of two Roman horses, or donkeys. There are variations in the story; you may have heard it differently.

It is, of course, nonsense. 

The size of a donkey had very little to do with the size of the Space Shuttle. There were multiple different standards of rail lines in use in North America between 1831 and 1981 when the Space Shuttle first launched, but its design had begun significantly earlier. Any of these could have been the standard, though again, there were some significant advantages that some gauges had over others. More on this later. But tracing the links of contingencies, facts, and counterfactuals necessary to draw a straight line from donkey carts to rocket boosters requires levels of hand waving once reserved for members of the royal family.  It just ain’t a thing. 

Especially when you consider that the diameter of the SRB is 12.17 feet. You’d need to be doing some Steiner math to get that story to work. But what it does illustrate is the idea of path dependency, the lock-in that can be caused by initial, embedded choices. And I know this may seem like an odd rhetorical strategy, undermining a specific well-known example in aid of explaining what it is, but in this case the particular illuminates the general case, even if it doesn’t specifically abide by it.
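Just to make that mismatch concrete, here’s a two-line sketch using the figures from the story above. If the donkey-to-rocket tale held, you’d expect some clean multiple to fall out of the division, and none does:

```python
# Compare the standard rail gauge with the SRB diameter.
# Figures are the ones quoted in the story; nothing else is assumed.
rail_gauge_ft = 4 + 8.5 / 12      # 4 ft 8.5 in, expressed in feet
srb_diameter_ft = 12.17           # Solid Rocket Booster diameter

ratio = srb_diameter_ft / rail_gauge_ft
print(f"gauge = {rail_gauge_ft:.3f} ft, ratio = {ratio:.2f}")
# The ratio lands at roughly 2.58 -- no tidy chariot-width multiple.
```

The point being that tracing a straight numeric line from gauge to booster requires exactly the hand-waving the story glosses over.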

Path dependence can be a real issue, especially when it comes to technology. It’s usually brought up in terms of standards. We can think of things like the QWERTY keyboard design, or the various forms of coffee pods, as shaping the direction of the market. And these can both be true, but to really get a handle on path dependency, let’s think about it in terms of something massive, like really big, like the automotive market in North America. It’s so big and entrenched that making substantive changes to it would be extremely difficult. So how would one go about changing the auto system? By using something that overlaps, to a greater or lesser degree, with the grooves that are already cut. You add in electric vehicles that mirror the shape of and conform to the systems that are already present, and offer charging stations that resemble in some fashion the filling stations already familiar to your audience, so that they can be more easily adopted. Moving to electric vehicles that look like cars leverages over a century of design decisions and development and allows for easier adoption by new customers, or at least that’s how the thinking goes. So electric cars follow the path dependency laid down by successive generations of gas-powered automobile designs and drivers.

Related to path dependency, though not synonymous with it, is the idea of technological lock-in. And this is where those K-Cups and keycaps come back into the picture. The keycaps in this instance are the ones that spell out Q W E R T Y across the top of your keyboard, though in this day and age you can order a version that spells out anything you like. (At some point, we’re going to have to have a chat about innovation as a driver of change in secondary or tertiary markets, but we’ll move on for now.)

Now, the idea of path dependency really came about from the field of evolutionary economics. Paul David wrote about it in 1985, about the risks of technological lock-in, in his famous paper “Clio and the Economics of QWERTY”. Okay, famous among economists, but still famous. Clout’s clout, right? David was writing about the historical competition between two famous keyboard layouts: the QWERTY keyboard, the one that you’re likely familiar with, and the DSK, or Dvorak Simplified Keyboard. The DSK was patented in 1932, and it was faster, better, more efficient; the U.S. Navy even tested it out and found that it only took about 10 days or so to recover the cost of retraining. The DSK, or Dvorak keyboard, was about 20 to 40 percent more efficient than the QWERTY version.

Now, the QWERTY version had already existed for a while. It was patented between 1909 and 1924, depending on what country you’re in. It was originally developed by Christopher Latham Sholes of Milwaukee, Wisconsin, and some of his partners, including Carlos Glidden and Samuel Soulé. They were engaged with the entrepreneur James Densmore, a promoter slash venture capitalist, you might say. And Densmore had some contacts at a manufacturing company with significant machine tool capabilities, an arms manufacturer by the name of Remington. They were also getting into sewing machines at the time, diversifying the portfolio, so to speak. And while business was good during the Civil War, the economic downturn that followed in the 1870s meant that sales weren’t much: they were selling, just for the record, about 1,200 units a year. So at the time, typewriter sales were more like what we see with mainframe computer systems today. But in the 1870s there was actually a lot of development going on. Edison was working on his teletype machines, and there are patents for those from the 1870s. There was a lot of other communication equipment being developed, and it was being rolled out across the country.

So there was actually a lot of innovation taking place within that space, and in that we have the development of the QWERTY keyboard. There were other competing layouts as well. Like we said, the Dvorak didn’t come around until the 20th century, but there was the “Ideal” keyboard, which had the sequence D H I A T E N S O R in the home row, those ten letters being ones you could compose 70 percent of the words in the English language with. And all of this development was indicative of a lot of growth going on in the field. The singular advantage that QWERTY had was that it slowed down the typist so the machine didn’t jam as often. And that led to a small but real advantage over some of the other competitors, in addition to having Remington as the manufacturer. This advantage was multiplied with the advent of touch typing in the 1880s, as the hunt-and-peck method fell out of use. Keyboardists who could type by touch were in demand, because that learned skill of being able to use a QWERTY keyboard meant they were that much more efficient, at least compared to the hunt-and-peck typists, and, again, the tech wouldn’t jam up and result in a slowdown. And it was this learned skill that led to technological lock-in, and a suboptimal design like the QWERTY keyboard becoming the de facto standard.

As David described, there were three characteristics that led to this: technical interrelatedness, economies of scale, and the quasi-irreversibility of learning the skill.

Now, the technical interrelatedness was the link between the hardware of the typewriter and the software of the typist, or we might rather say the wetware of the typist, to use Rudy Rucker’s term. I mean, the particular arrangement of any given keyboard was largely irrelevant. But the installed user base, so to speak, of typists who were able to use that arrangement quickly and efficiently from memory was much more important.

The economies of scale were linked directly to the manufacturing capabilities that Remington had. As we said, they had a great machine tool setup, so they were able to produce the equipment. And then as other vendors adopted the layout, it became more and more available for other typists to use. So if a typist was going to pick among the available options, they might as well learn QWERTY, because people were paying for those who could use it.

And the training wasn’t free, right? The typist had to learn it on their own and then bring the skill to the company; it wasn’t being handed out there. And this relates to quasi-irreversibility as well. You can retrain, but it’s going to cost you money, and while you’re retraining you’re obviously not earning anything. You may still have some crossover issues, and you don’t know if the layout you’re retraining for is going to be any better than the one you already know. In this case, if you know QWERTY, you’re probably going to stick with a QWERTY keyboard, or demand it at your new employer: I can offer QWERTY, do you have it? Similar to what we see with Adobe Photoshop and other entrenched tools today.

But this is ultimately one of the problems and downsides of path dependency and lock-in. To quote David: “competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system.” End quote. Because nobody could really foresee that the technical problems the QWERTY system was designed to solve would soon cease to matter, and here we are in 2023 with electronic keyboards still using this same layout, even though the mechanical issue it was designed to resolve disappeared some 150 years ago.

So yeah, if you don’t necessarily have the best technical solution, like VHS or AOL or Microsoft in this instance, try locking in the market by other means. Path dependency means it may pay off for you in the long run if you can stick around.

And just to bring this back around full circle to our example of Roman roads, rail lines, and rocket ships: that’s an example of path dependency. There’s no direct causal relationship, which is what everybody gets wrong about it. As David states: “important influences upon the eventual outcome can be exerted by temporally remote events, including happenings dominated by chance.” There are things that shape our economic decisions that are well beyond our ability to fathom, or even control.

Now, earlier I did state that there were a number of examples like triple-E, or things like it, in the news, and it’d be prudent to get on to the next one. One of the bones of contention in the Microsoft antitrust case was their bundling of Internet Explorer with the Windows operating system. People said that was anticompetitive, and that they were using their monopoly power to push it as a de facto standard. And that’s one of the ways that lock-in can happen: when a functional standard becomes a de facto standard. Currently, we’re seeing this with Chromium, which is the engine behind Google’s Chrome browser and used by everything from Edge to Opera to Chrome itself. It’s also in the default install on every Android device.

Much like how Microsoft’s Windows in the 1990s was about 95 percent of the personal computing market, Google’s Chromium makes up about 95 percent of the browser market in 2023. The alternatives are pretty much limited to Firefox, Safari, and a few derivatives. So when Google decides to make major changes to Chromium, it can reverberate throughout the industry; it affects everybody. And in late July and early August, they started doing that. They rolled out something called WEI, or Web Environment Integrity, as a proposed change to Chromium. It first appeared in July as a proposal in the GitHub repos of some of Google’s Chromium engineers, and it received a pretty universal outcry from those who were paying attention. What it proposes is an attestation check between the browser and the hardware of the machine. Ostensibly it’s meant to combat online piracy or cheating in games, but the problem is that those are edge cases, and it could be used for other purposes. One of the most noted is that it could be used to detect whether somebody’s running an ad blocker on their browser, or to single out specific extensions. It turns the internet into a permission-based system rather than an open one. It turns everything into a walled garden run by Google, where they can pass judgment on users based on whatever opaque criteria they might have. And while that’s one example, it’s not the only case currently involving Google.

The other one that’s going on right now is the antitrust case brought by the U.S. Department of Justice against it for its monopoly power with regards to online search. And if you haven’t heard much about that one, it’s not surprising, because Google’s been doing pretty much everything it can to limit the exposure of any information coming out of the trial. Much of it’s happening behind closed doors. There’s been some reporting in the New York Times, Bloomberg, and Ars Technica, and I’ll put some links to that in the notes.

And that’s not the only case going on, because on September 26, 2023, the FTC in the U.S. and 17 state attorneys general sued Amazon.com, alleging that the online retail and technology company is a monopolist that uses a “set of interlocking anticompetitive and unfair strategies to illegally maintain its monopoly power. The FTC and its state partners say Amazon’s actions allow it to stop rivals and sellers from lowering prices, degrade quality for shoppers, overcharge sellers, stifle innovation, and prevent rivals from fairly competing against Amazon. It alleges that Amazon violates the law not because it is big, but because it engages in a course of exclusionary conduct that prevents current competitors from growing and new competitors from emerging.” End quote. At the time of recording, that’s just a couple of days old, so, as they say, more to come.

Now, there’s nothing in particular that links an alleged monopoly in online shopping to another one alleged for online search, to a potential one for social networking, to another that impacts online browsing, to yet another case that dealt with monopoly over operating systems and browsers from, you know, 20 years ago. But there are some commonalities, aside from them all involving massive tech companies, and in some cases the same ones. As Bill Gates commented in 2019, on the 20th anniversary of the antitrust suit, one of the things the tech companies learned was that they had to be more present in Washington and to lobby more effectively.

Back in the 90s, it was a point of pride for Bill Gates that they never really engaged with lobbyists, but they changed their strategy in that respect following the antitrust trial. And everybody else in the tech industry took notes and followed suit. Now, depending on your level of involvement in online tech news, a lot of what we’ve shared here may seem like common knowledge, but not everyone may share that knowledge.

What we’re trying to do is just bring attention to the ongoing events that are still taking place, especially with everyone’s eyes thoroughly focused on things like LLMs and generative AI tools like ChatGPT. These are just current examples, high-profile ones that attract our attention. And there are others happening at various levels of technological development that we might not see, or that might not have a large impact just because they’re affecting a very niche audience and don’t have the broad reach of things like shopping and search and browsing and social media.

What I hope to bring to your attention is the impact that things like lock-in and path dependency can have on that development: they can reduce the available options, and we may get stuck with an outmoded technology, something like the QWERTY keyboard, when there would be better solutions available to us.

Because it keeps happening again and again and again, maybe it isn’t necessarily a case of path dependency, where we keep falling into ruts that have been well worn before. Rather, perhaps the environment as a whole affords certain outcomes: in a regulatory framework of monopoly capitalism, which we’ve discussed in the past, we can see this happening more often. So rather than there being one particular path, the slopes of the hill encourage flows in certain directions. Exploring this would shift us more thoroughly into evolutionary economics proper, which we’ll leave for a future episode, a path off in the distance.

Next time, in episode 16, we’ll be looking at spreadable media, which we’ve hinted at earlier. And with the WGA strike potentially resolved by the time you hear this, and hopefully the SAG-AFTRA strike soon to follow, we may be returning to some media-focused episodes soon, too. Until next time, I’ve been your host, Dr. Implausible. You can contact me at drimplausible at implausipod.com. Have fun.

References: