CEOs and the wealthy intelligentsia of technology are calling for regulation in much the same way that Mark Zuckerberg says he'll welcome regulation: as a banner of legitimacy, a veneer designed to decrease risk and burnish their public ethos. For them, the language of regulation is the language of predictability and a level playing field among the big players.

It's all about risk management, as Natasha Lomas writes at TechCrunch in reference to Google's Sundar Pichai, who published an op-ed in the Financial Times calling for the regulation of artificial intelligence. Lomas recognizes, beneath the taglines of regulation in the public interest, a "suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale." In other words, this is a public call for regulation that assumes a world where the tech companies do what they want within that regulatory framework, or even before that framework fully develops.

Lomas also points out that Pichai "downplays" AI's potential negative consequences as simply the cost of doing business, "the inevitable and necessary price of technological progress." There is no discussion of how we might, as a society or as tech workers, build ethics and values into technological development through democratic deliberation, including foundational deliberation about whether we should be scaling AI society-wide at all. The call for regulation is a blanket thrown over the prior questions of who should make R&D, production, and distribution decisions in the first place, and what framework they, or we, should use to make them. Let us be responsible for creating monsters, and then you can regulate the monsters.

Perhaps this is the logic that makes Elon Musk fear both apocalyptic AI scenarios and labor unions.

Tech executives get many things about regulation half-right; kernels of wisdom hide within their "tech is inevitable, might as well regulate it" platitudes. Last September Carol Ann Browne, Microsoft's director of communications and external relations, co-wrote a piece in The Atlantic with Microsoft president Brad Smith titled "Please Regulate Us." The crux of their argument was that since leaders of industry are unelected, regulatory frameworks are preferable because they come from democratically elected officials. "Democratic countries should not cede the future to leaders the public did not elect."

Hard to argue with that. But I am curious about pushing it a little further. There are many kinds of democracy, and there are many ways to make managerial and ownership positions more accountable. What if, at least at big technology companies (maybe those with $100 million or more in assets?), the tech workers themselves were given a vote on matters of ethics and social responsibility? Last November, Johana Bhuiyan wrote in the LA Times about employee walkouts at Google and similar employee-initiated protests at other companies, all in the name of giving employees a greater role in setting guidelines and making decisions. The flashpoints range from contracts with Immigration and Customs Enforcement (ICE), an agency under heightened public scrutiny and criticism, to Apple's decision "to block an app used by pro-democracy protesters in Hong Kong to avoid police."

Imagine a similar framework emerging in AI development generally, where workers and management participate in deliberative, open conversations and workers are given a direct vote on controversial decisions. I enjoy working with small businesses of 20 or fewer employees, such as AHG's client Accurate Append, a data processing vendor; at that scale, open deliberation comes more naturally. Now imagine, instead of a single mammoth company and its close competitors developing self-enriching policy frameworks, hundreds or thousands of semi-autonomous creators working openly with society.

AI might be an especially appropriate object for democratization and deliberation given the concerns raised by its development and use. I'm thinking of Sukhayl Niyazov's piece in Towards Data Science just a few months ago on what AI might do to democracy itself. Using mountains of personal data to cook up AI tends to produce "information bubbles," which Niyazov calls "virtual worlds consisting of the familiar . . . one-way mirrors reflecting our own views." This artificially engineered echo chamber effect is the opposite of deliberation. So why not invite those who are developing the technology to be deliberative themselves? Yes, the process might be uncomfortable, but many algorithmic products are currently built to reward the very avoidance of "pay[ing] attention to things that are either difficult to understand or unpleasant."
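
To make the mechanism concrete, here is a minimal sketch of the feedback loop Niyazov describes. Nothing in it models any real platform; every name and number is an illustrative assumption. A recommender that optimizes for agreement keeps serving the items nearest a user's current view, and exposure to that feed pulls the view further toward it:

```python
# Hypothetical sketch of an "information bubble" feedback loop.
# Viewpoints are positions on a -1.0 to 1.0 opinion spectrum.

import random

def recommend(user_view: float, items: list[float], k: int = 5) -> list[float]:
    """Serve the k items closest to what the user already believes."""
    return sorted(items, key=lambda item: abs(item - user_view))[:k]

random.seed(1)
items = [random.uniform(-1, 1) for _ in range(500)]  # the full opinion range
user_view = 0.1                                      # a mildly held opinion
seen: set[float] = set()

for _ in range(20):
    feed = recommend(user_view, items)
    seen.update(feed)
    # Engagement drift: agreeable content nudges the view toward the feed.
    user_view = 0.9 * user_view + 0.1 * sum(feed) / len(feed)

# The user's "virtual world" is a sliver of the available spectrum.
print(f"final view: {user_view:+.3f}")
print(f"views ever shown: {min(seen):+.3f} to {max(seen):+.3f}")
print(f"views available:  {min(items):+.3f} to {max(items):+.3f}")
```

In a run like this the user's view hardens while the feed never leaves a narrow sliver of the spectrum. The point is only that the bubble is a property of the optimization target, not of any one company's malice.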

Concern and public outcry over information bubbles and other practices led Facebook late last year to establish a review board for its advertising practices. But surely a company like Facebook could go deeper than that and fold workers directly into internal policymaking.

Back in 2016, the World Economic Forum published a list of the top nine ethical issues in artificial intelligence. The listed issues were:

  • increased unemployment
  • unequal distribution of machine-created wealth
  • humanity: how AI will change human behavior
  • AI making mistakes (“Artificial Stupidity”)
  • racism—how AI will duplicate, magnify, and reflect human prejudice
  • security issues
  • unintended consequences (“Siri, eliminate cancer. Wait! Siri, don’t eliminate cancer by killing all humans!”)
  • the singularity: how do we stay in control of a complex intelligent system?
  • ethical treatment of AI itself—treating robots humanely

Internal discussion and at least some degree of worker-level decision-making bear on most of these questions, directly or indirectly. While increased unemployment may be inevitable as AI matures, workers can push their companies to champion basic income or other ways of justly distributing the labor-saving fruits of AI. Workers can push for better protocols to spot hidden bias, as sketched below. And employees can certainly deliberate on how the machines they create ought to be treated.
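
What might such a protocol look like? Here is a minimal sketch with entirely hypothetical data, group labels, and model: an automated check that no group's approval rate falls below four-fifths of the best-off group's rate, a threshold borrowed from US employment law as one common rule of thumb for disparate impact.

```python
# Hypothetical bias-spotting check a workforce could demand before a
# model ships. All data and labels below are illustrative.

from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions is a log of (group_label, model_approved) pairs."""
    totals: dict[str, int] = defaultdict(int)
    approved: dict[str, int] = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(rates: dict[str, float]) -> bool:
    """Flag disparate impact if any group's rate is below 80% of the best."""
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# A hypothetical audit of a loan-approval model's decision log:
log = ([("group_a", True)] * 80 + [("group_a", False)] * 20
       + [("group_b", True)] * 55 + [("group_b", False)] * 45)
rates = approval_rates(log)
print(rates)                      # {'group_a': 0.8, 'group_b': 0.55}
print(passes_four_fifths(rates))  # False: group_b falls below the bar
```

A failing check doesn't prove discrimination, but it is exactly the kind of tripwire a workforce with a vote could insist on before a model ships.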

It makes sense to at least start thinking this way, outside the hierarchical box and down seemingly radical avenues of participatory deliberation, because AI itself has the potential to vastly expand the voices of stakeholders in a world that has, until now, tended to prioritize the voices of shareholders. An Internet of Things and socially responsible big data analytics together have the potential to truly maximize human autonomy.