Sunday, March 24, 2024

Coevolving with new technology

A little bit of history

I remember the sinking feeling I had when the Communications Decency Act (CDA) passed, back in the 1990s. I was stunned. A veil was ripped off for me. I had grown up with the U.S. government doing lots of things I didn't understand but that sounded like things that were going to protect me. Our president was building up our defenses while also negotiating with the leader on the other side for disarmament and peace. (Though, our side didn't disarm?) I heard tales about how recreational drugs were going to destroy us, and the government was running these big campaigns to protect us all, because one taste is all it takes before your life is destroyed. I was enough of a STEM student to be very confused about why "drugs" would end your life after one single taste, when everywhere around me people were drinking and smoking without such dire results; how do you draw the line for such a thing? I went along with it, though, and sort of figured that smarter people than me must know the reasons for all this. Plus, as Paul Graham says, I was mindful of "What You Can't Say." These were topics that would get you instantly socially ostracized if you weren't on the right side of them.

The CDA was different, in that it covered a topic I do know about. I knew enough about computer software to see that one individual could set up and run a bulletin board system hosting a billion users. I could picture how this would work, and whenever people talked about social forums, my mind's eye would fill in the gaps of what they were saying with the concrete details: servers, networks, databases, and so on. In the world of the CDA, these rooms are supposed to be monitored, so the picture in my mind could no longer be implemented by one person. The CDA vision was that a chat room would have maybe 1 chaperone for every 1,000 users, so to support a chat room with a billion users, you would need one developer and one million chaperones. The CDA vision would make a basic chat room roughly one million times more expensive to staff than it was before.
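
To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The billion-user figure and the 1-chaperone-per-1,000-users ratio come from the scenario above; the assumption that a chaperone costs about as much as a developer is mine, for illustration only.

    # Back-of-the-envelope arithmetic for the CDA scenario above.
    users = 1_000_000_000          # a billion-user chat room
    users_per_chaperone = 1_000    # the CDA-era monitoring ratio

    developers = 1
    chaperones = users // users_per_chaperone   # 1,000,000 chaperones

    staff_before = developers                   # pre-CDA: one person runs it all
    staff_after = developers + chaperones       # post-CDA staffing

    print(f"staff before: {staff_before}")
    print(f"staff after:  {staff_after:,}")
    # Assuming a chaperone costs roughly as much as a developer (my
    # assumption, for illustration), the cost multiplier is about:
    print(f"multiplier:   {staff_after / staff_before:,.0f}x")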

The cost to humanity can be very high. I can't say what we are missing from social networking, because we live in a post-CDA world, and we don't know what we lost by raising the bar on who can even run an experiment. I can give an example from another kind of high, however. Research and limited trials have begun, today, for mind-altering substances including a parade of the villains from my 1980s classrooms: ketamine, psilocybin, and LSD. This is a complex subject, but imagine for a moment that the new trials are at least sometimes heading in the right direction. If that is true, then the American people lost 60 years during which we could not access something that, it now appears, can turn people's lives completely around. 60 years is a long time. My brother was born and then died during that 60-year blackout.

When I think about how to regulate AI, I see a situation similar to chat rooms in the '90s or psychedelics in the '60s. We don't yet know what is possible. We don't know what people will try, or what the results will be. Some of those experiments will lead to good results, and some will lead to bad ones. How do we pursue these experiments while keeping ourselves safe?

My general sense, based on the history of technology and of governance to date, is that it's premature for any blanket AI regulation right now to do more good than harm. Instead, it is better to wait for the slow-burn problems: the ones that take a while to appear but that don't fade away on their own. Let me paint a picture of what this can look like, from my point of view as a technologist of four decades.

The concept of coevolution

I believe we can think of AI-based technology as coevolving with humanity. Technology moves forward in a way akin to evolution, and we can shape our responses to that evolution based on our experience with other instances of coevolution that are by now very familiar.

The mycelial network. I enjoyed the new form of space travel in Star Trek: Discovery, but I have learned that there is a real-world mycelial network that is even more interesting. It turns out that fungi are everywhere. Real-world tree root systems are permeated by mycelia that stabilize the soil and provide vital nutrients. Virtually all living plants host endophytic fungi, often ones they can't live without. The larger ecosystem would not function as we know it without fungi recycling nutrients from dead plants and animals.

The living world would simply not exist without this weird third kingdom of life that permeates it in many different ways. Imagine, now, a national government trying to regulate the evolution of fungi, and the challenges it would face. We need fungi for human life as we know it to exist, but also fungi can harm us. There's no one simple way that fungi interact with us and with our environment, so there's also no one simple rule that fungi need to follow in order to be safe for us. In a territory like this, the perspective that seems best to me is one of growing together. We are here, and fungi are here, and we need to move forward by inches, with diversity in each kingdom and some way to respond if a change looks harmful.

The Butlerian Jihad. Movies like Dune are always grasping for a reason to make heroes with fantastically advanced technology still resort to low-grade sword fighting. In the case of the Dune universe, the in-story answer is called the Butlerian Jihad, and it's named after a real-world novelist, Samuel Butler. Butler explored the idea that the machines around us are developing through a selection mechanism similar to Charles Darwin's natural selection.

Butler didn't limit this idea to AI. He meant it very broadly, as applying to all of the machines we create and market to each other in order to automate our lives. People are always making machines that don't work very well; for a humorous example, check out Thoren Bradley's review of a 4-way splitting wedge. The machines that catch on are produced more widely and become the basis of the next round of machines. Any software engineer can tell you that new software is created by modifying old software. As a result, the development of machines exhibits the two ingredients needed for natural selection to occur: new entities are replicated from old ones, with a modest rate of mutation, and entities are selected for continued existence by some kind of selection criterion.
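
Butler's two ingredients are concrete enough that any programmer can render them as a toy loop. The sketch below is purely illustrative: a "machine" is just a number, "mutation" is random tweaking, and the "market" rewards whatever is closest to an arbitrary target. It is not a claim about how real products evolve, only a demonstration that replication-with-mutation plus selection is all the machinery the idea needs.

    import random

    # Butler's two ingredients, as a toy simulation:
    #  1. replication with a modest rate of mutation
    #  2. selection by some criterion for continued existence
    # All parameters here are made up for illustration.

    TARGET = 42.0                      # what the "market" happens to reward

    def fitness(machine: float) -> float:
        return -abs(machine - TARGET)  # closer to the target is better

    def mutate(machine: float) -> float:
        return machine + random.gauss(0, 1.0)  # small random variation

    population = [random.uniform(0, 100) for _ in range(20)]
    for generation in range(50):
        # Selection: only the better half survives to be copied.
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        # Replication with mutation: new machines are modified copies of old ones.
        population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

    print(f"best machine after 50 generations: {max(population, key=fitness):.2f}")

Run it a few times and the population reliably converges on the target, which is the point: nothing in the loop needs to understand what it is building.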

Butler's idea seems right to me. Machines are evolving along with us, and it doesn't really change anything that newer machines are doing tasks that only a human could once do. If anything, the most helpful machines for humanity are those that remove some of our toil.

Of these two ideas, fungi are the scarier one to me. I know how to turn off a machine, but fungi work at a molecular level that cannot be directly controlled. And yet, fungi have not just turned out fine but have become an essential part of the world as we know it. Life could exist without fungi, but humans wouldn't be here. In the end, both fungi and machines have so far made human life much, much better than it ever was before.

How to regulate coevolving systems

If we follow the idea of coevolution, then we can think through the conditions under which a U.S. intervention is likely to help more than it hurts.

Above all, one of the biggest reasons we can feel safe about fungi, and less safe about other things, is that fungi exist in a large, diverse ecosystem where any damage can travel only a limited distance. It scares me more than any specific technological risk that several categories of technology are near-monocultures right now. Chrome, Firefox, and Edge are the only real desktop web browsers; Facebook is ascendant for a certain kind of social media; Gmail is about the only email reader; and Amazon is by far the dominant online market for physically shipped goods. These companies can cause tremendous harm with any mistake they make, because those mistakes will propagate to all 8 billion of us within seconds of being launched. Monocultures are death traps, and I worry that we have so many of them right now.

For specific technological developments, it seems like there are three categories to bear in mind.

  • In some cases, direct contact between humans and the new machine quickly kills the machine off. In a case like this, there's nothing useful for regulation to do, because the machine will go away all on its own. An example would be the Clippy assistant for Microsoft Word: it died without anyone having to ban it.
  • Many developments are simply and unambiguously positive. For these, as well, any attempt at regulation is just going to reduce some of the benefits we gain. An example would be the original Google Web Search. Unless you've used some intranet search engine such as Atlassian's, you may have a hard time imagining how bad web search engines used to be. It's a very good thing for all of us that Page and Brin were able to do their web search experiments without needing a lot of approvals. Imagine what kind of search engine we would have if it took 5-10 years to get an experiment approved, and if the search engine were only allowed to present results that incorporate today's suite of politically correct speech and representation. It would never have gotten off the ground, and very likely most people would assume that an effective web search engine just can't be made.
  • That leaves the category that is neither of the above: something that is harmful, but not harmful enough that people immediately balk in terror and shut the whole thing down voluntarily. An example in my mind is the slot machine. The slot machine doesn't do anything dramatic like shoot out laser beams or organize armies to go marching around. Yet it does enough harm, to enough people, that a little bit of regulation could deliver a real practical benefit.

Regulating slot machines is tough even after the fact, but take a moment to imagine if someone had tried to regulate them before they took off. Imagine looking at a slot machine before it really caught on, and imagine deciding, just based on its design, what harm it might cause. I posit that essentially no one could effectively figure out what the rules should be. The negative effects of slot machines cannot be understood from the technology, which after all is pretty simple. To understand the effects, and therefore to know what to do about them, one has to reason about things like: human psychology; the kinds of establishments that slot machines end up in; the kind of clientele that end up at those establishments; and the larger market developments that lead to slot machines being mass-produced at a scale large enough to be worth doing anything about. Based on all of that, a lot of the regulation wouldn't even be about the technology itself; some of the slot machine regulation on the books today is about fairness and about the maximum rate at which a player can lose money. It seems better to me, though, to do things like require payment up front and limit payments to once per hour; likewise, it seems likely helpful to put limits on the visual effects and on the advertising claims that surround these machines. It's tricky and subtle, it can only be done after the fact, and even after the fact, a lot of the attempts aren't going to go well. I would say that even today, after decades of tinkering, regulation around slot machines is not yet really figured out.
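
Notice how mechanical the rules become once you know which ones you want. The sketch below encodes the two rules I floated above (payment up front, payments at most once per hour) as a hypothetical Python class; the class name, the one-hour window, and the exceptions are all my inventions for illustration. The hard part was never the implementation; it's knowing, after years of watching, that these are the right knobs.

    import time

    class RegulatedSlotMachine:
        """Toy sketch of the two rules floated above: payment up front,
        and payments accepted at most once per hour. Hypothetical, one
        possible encoding of the policy, not a real spec."""

        PAYMENT_INTERVAL_SECONDS = 3600  # the once-per-hour limit

        def __init__(self):
            self.last_payment_time = None
            self.credits = 0

        def pay(self, credits):
            # Rule 2: accept money at most once per hour, which caps the
            # player's maximum rate of loss by construction.
            now = time.monotonic()
            if (self.last_payment_time is not None
                    and now - self.last_payment_time < self.PAYMENT_INTERVAL_SECONDS):
                raise RuntimeError("payments are limited to once per hour")
            self.last_payment_time = now
            self.credits += credits

        def play(self):
            # Rule 1: payment happens up front; no credits means no play.
            if self.credits <= 0:
                raise RuntimeError("payment must be made up front")
            self.credits -= 1
            # ... spin the reels and show the outcome ...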

The good news is that slot machines haven't destroyed the world just yet. They have been a slow burn, long in development, because anything hyper-bad simply wouldn't have caught on at all. As well, our existing social immune systems are picking up a lot of the slack: each of us who sees someone else in the herd get caught up in a gambling addiction is quick to raise an alarm to the rest of us. The regulation can be better, but the problem moves slowly enough that society can afford to spend multiple decades working out what that regulation should look like.

It strikes me that generative AI can be thought of in the same way. Lots of things will be really good, and we don't want to wait one year for them, much less 5, 10, or, based on U.S. history, 60 years. Lots of things will be super-bad, and almost everyone will stop using them immediately. Lots of things will seem somewhat bad, and most people will stop using them, but a few will carry on; that small fraction is actually good for humanity as a whole, because it increases diversity in the gene pool. Then, in some tiny remaining pie slice, after all the previous cases are covered, there will be a harmful technology that wasn't weeded out by itself and that wasn't stopped by our more general-purpose defense mechanisms. That tiny pie slice is where regulation can help, and it won't be machines shooting laser beams. It will be something we haven't thought of, that hardly anyone is talking about, because we haven't even tried it yet and don't have any information to go on.
