What’s the best-case scenario for the application of artificial intelligence? What’s the worst-case scenario for AI going wrong? There are, of course, speculative answers to these questions, and it’s interesting to list them. But there is also a deeper conversation to be had about the nature of risk and the assumptions (and blind spots) involved in scenario building. We’re bringing you this post with support from data append and consumer contact vendor Accurate Append.
Begin with the best- and worst-case scenarios:
Among the promising applications of artificial intelligence: The slowing of disease spread. The elimination, or at least radical reduction, of car crashes. The ability to address a host of environmental crises, including climate change. And the ability to cure cancer and heart disease. On the cardiovascular front specifically, AI allows for “deep learning,” so that programs can identify novel genotypes and phenotypes across a wide range of cardiovascular diseases.
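To give a flavor of what that kind of work involves, here is a minimal, purely illustrative sketch of a deep-learning classifier that maps patient features to phenotype labels. It is not drawn from any of the cardiovascular studies alluded to above; the framework (PyTorch), the feature count, and the label set are all hypothetical stand-ins.

```python
# A toy deep-learning phenotype classifier (illustrative only; the feature
# and label dimensions below are hypothetical placeholders).
import torch
import torch.nn as nn

N_FEATURES = 64     # e.g., encoded variants plus clinical measurements (made up)
N_PHENOTYPES = 5    # e.g., distinct disease subtypes (made up)

model = nn.Sequential(
    nn.Linear(N_FEATURES, 128),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(128, N_PHENOTYPES),   # raw class scores; softmax lives in the loss
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one gradient-descent step on a batch of (patients x features) data."""
    optimizer.zero_grad()
    logits = model(features)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random stand-in data for 32 "patients":
x = torch.randn(32, N_FEATURES)
y = torch.randint(0, N_PHENOTYPES, (32,))
print(train_step(x, y))
```

In real studies the features would come from imaging, ECG, or genomic pipelines, and most of the effort goes into the data rather than the model itself.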
Okay, so those are some promising applications. Why be worried? Well, there are two types of “AI bad” scenarios: the apocalyptic “it could be over in minutes” scenarios, and the slow, agonizing societal-turmoil scenarios. I’ll explain the apocalyptic scenarios first. The more autonomous the systems, the greater the risk of their being deployed, either purposely or by accident, against innocent life. The psychological distancing of a machine, even a smart one, decreases empathy and increases the acceptability of attacks. There is also the possibility that lethal AI warfighting systems could be captured, compromised, or subject to malfunction.

Alexey Turchin, researcher with the Science for Life Extension Foundation, and David Denkenberger, researcher with the Global Catastrophic Risk Institute, developed a system for cataloguing these “global catastrophic risks” and published it in the journal AI & Society in 2018. In the section on viruses, they write: “A narrow AI virus may be intentionally created as a weapon capable of producing extreme damage to enemy infrastructure. However, later it could be used against the full globe, perhaps by accident. A ‘multi-pandemic,’ in which many AI viruses appear almost simultaneously, is also a possibility, and one that has been discussed in an article about biological multi-pandemics.” The more advanced the entire network of AI tech (in other words, “the further into the future such an attack occurs”), the worse it will be, up to and including the risk of human extinction. To put some icing on that cake, the authors point out that multiple viruses, a kind of “AI pandemic,” could occur, “affecting billions of sophisticated robots with a large degree of autonomy” and pretty much sealing our fate.
Turchin and Denkenberger even delve into scenarios in which such a virus could get past firewalls. Instead of the clumsy, obvious phishing emails we get now, imagine getting an email from someone you nominally know or have exchanged emails with before; someone you trust. But it isn’t really them. It’s a really, really good simulation, the kind created by machines that learn, and that learn several million times faster than we do. An AI virus could simulate so many aspects of human communication that people would either have to stop trusting one another completely, or eventually someone would let the bugs in.
Before we go on to the higher-probability, lower-magnitude negative impacts of AI, though, I think we should say a few things about risk. First, actual risk is much harder to predict than it seems. We can catalogue worst-case scenarios, but this says nothing about their probability, and probability may be infinitely regressive, frankly, because, as the thought experiment of “Laplace’s Demon” suggests, we would effectively have to step outside the universe, knowing the position and momentum of every particle in it, to assess such probabilities with accuracy.
But what if Laplace’s Demon applies not only to what technology can and cannot predict, but to the development of technology itself? This may mean that the elimination of some risks inadvertently gives rise to others. But just as flipping heads three times in a row has no bearing on whether the next coin flip will come up heads or tails, the elimination of certain risks doesn’t make it any more or less likely, in the scheme of things, that new risks will be created. They just happen.
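To make the coin-flip point concrete, here is a quick simulation (a toy illustration of statistical independence, not anything drawn from the risk literature): over a million simulated flips, heads comes up about half the time whether or not the three previous flips were all heads.

```python
# Simulate fair coin flips and check whether a run of three heads
# changes the chance of heads on the next flip (it doesn't).
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True means heads

runs = 0            # number of times we saw three heads in a row
heads_after = 0     # how often the *next* flip was also heads
for i in range(3, len(flips)):
    if flips[i - 3] and flips[i - 2] and flips[i - 1]:
        runs += 1
        heads_after += flips[i]

print("P(heads) overall:            ", sum(flips) / len(flips))
print("P(heads | three heads prior):", heads_after / runs)
# Both values land near 0.5: prior outcomes don't shift the next one.
```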
The problem with the more apocalyptic worst-case scenarios is not that there is no possible world where they could happen, but that in a world where they could happen, any number of other apocalyptic scenarios could also happen. This is because the worst-case scenarios assume a complete lack of regulations, fail-safe measures, or other checks and balances. And while we have reason to fear that the industry will not adequately police itself or allow policing from other entities, it’s quite a leap from there to imagining no checks whatsoever.
One piece on AI policy from George Mason University discusses the proposal of Gary E. Marchant and Wendell Wallach to form “governance coordinating committees (GCCs) to work together with all the interested stakeholders to monitor technological development and to develop solutions to perceived problems.” This is perhaps a nuanced version of industry self-regulation, but it actually proposes both working within existing institutions and having entities monitor one another, a sort of commons-based approach in which producers keep each other honest. “If done properly,” the paper concludes, “GCCs, or something like them, could provide appropriate counsel and recommendations without the often-onerous costs of traditional regulatory structures.” Combined with public education about the benefits and risks of AI, perhaps cultural practices will grow to preempt concern about worst-case scenarios. But regulators can always step in where needed.
Besides, once the knowledge and capability exist for a particular level of technology, it’s virtually impossible to ban it, or even to enforce a ban on a particular direction or application of its research. This is why Spyros Makridakis, Rector of Neapolis University, writes in a 2017 paper on AI development that “progress cannot be halted which means that the only rational alternative is to identify the risks involved and devise effective actions to avoid their negative consequences.”
As we said earlier, though, there’s a more realistic apocalypse we need to face with AI: the loss of massive numbers of jobs (assuming we ever again live in a post-pandemic world approaching full employment and actually have jobs to lose). AI-driven shifts will cause massive structural waves of transitional unemployment, markets will not correct this in a timely manner, and the number of people left suffering could be overwhelming.
But this ultimately seems like a political question rather than an economic one. Even without the economy transitioning to socialism in the accurate sense of the word, meaning democratic control of the means of production, a shift to a universal basic income would preserve some of the basic economic structures and assumptions of capitalism, allow greater flexibility in defining employment in the first place, and facilitate either transitions into new work or a settling into less work. There’s nothing wrong with both dreaming about risks and preparing for inevitable challenges. If AI is a genie we can’t put back, we may as well negotiate with it.