The AI Debate and Both Sides' Worst-Case Scenarios (and How to Evaluate Them)

What's the best-case scenario for the application of artificial intelligence? What's the worst-case scenario for AI going wrong? There are, of course, speculative answers to these questions, and it's interesting to list them. But there is also a deeper conversation to be had about the nature of risk and the assumptions (and blind spots) involved in scenario building. We're bringing you this post with support from data append and consumer contact vendor Accurate Append.

Begin with the best- and worst-case scenarios:

Among the promising developments of artificial intelligence: The slowing of disease spread. The elimination, or at least radical reduction, of car crashes. The ability to address a host of environmental crises, including climate change. And the ability to cure cancer and heart disease. On the cardiovascular front specifically, AI allows for "deep learning" so that programs can identify novel genotypes and phenotypes across a wide range of cardiovascular diseases.

Okay, so those are some promising applications. Why be worried? Well, there are two types of "AI bad" scenarios: the apocalyptic "it could be over in minutes" scenarios, and the slow, agonizing societal-turmoil scenarios. I'll explain the apocalyptic scenarios first. There is the possibility that the more autonomous the systems become, the greater the risk of their being deployed, either purposely or by accident, against innocent life. The psychological distancing of a machine, even a smart one, decreases empathy and increases the acceptability of attacks. There is also the possibility that lethal AI warfighting systems could be captured, compromised, or subject to malfunction.

Alexey Turchin, researcher with the Science for Life Extension Foundation, and David Denkenberger, researcher with the Global Catastrophic Risk Institute, developed a system for cataloguing these "global catastrophic risks" and published it in the journal AI & Society in 2018. In the section on viruses, they write: "A narrow AI virus may be intentionally created as a weapon capable of producing extreme damage to enemy infrastructure. However, later it could be used against the full globe, perhaps by accident. A 'multi-pandemic,' in which many AI viruses appear almost simultaneously, is also a possibility, and one that has been discussed in an article about biological multi-pandemics." The more advanced the entire network of AI tech becomes (in other words, "the further into the future such an attack occurs"), the worse the damage will be, up to and including the risk of human extinction. To put some icing on that cake, the authors point out that multiple viruses, a kind of "AI pandemic," could occur, "affecting billions of sophisticated robots with a large degree of autonomy" and pretty much sealing our fate.

Turchin and Denkenberger even delve into the scenarios wherein such a virus could get past firewalls. Instead of the clumsy and obvious phishing emails we get now, imagine getting an email from someone you nominally know or have exchanged emails with before; someone you trust. But it isn't really them. It's a really, really good simulation, the kind created by machines that learn, at a speed several million times faster than our own. An AI virus could simulate so many aspects of human communication that people would either have to completely stop trusting one another, or eventually someone would let the bugs in.

Before we go on to the higher-probability, lower-magnitude negative impacts of AI, though, I think we should say a few things about risk. First, actual risk is much harder to predict than it seems. We can catalogue worst-case scenarios, but this says nothing about their probability, and assessing probability may be infinitely regressive, frankly, because, as the thought experiment of "Laplace's Demon" suggests, we'd have to step outside of the universe to assess probabilities accurately.

But what if Laplace's Demon applies not only to what technology can and cannot predict, but to the development of technology itself? This may mean that the elimination of some risks inadvertently gives rise to others. But just as flipping heads three times in a row doesn't bear on whether the next coin flip will yield heads or tails, so the elimination of certain risks doesn't make it any more or less likely, in the scheme of things, that new risks will be created. They just happen.
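Since the argument leans on that statistical point, here's a quick simulation of it (an illustration of independence, not part of the original argument): condition on three heads in a row and check the next flip.

```python
# A minimal sketch of the independence point above: after three heads
# in a row, the next flip is still 50/50.
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]

next_after_streak = [
    flips[i]
    for i in range(3, len(flips))
    if flips[i - 3] and flips[i - 2] and flips[i - 1]
]
print(sum(next_after_streak) / len(next_after_streak))  # prints ~0.5
```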

The problem with the more apocalyptic worst-case scenarios is not that there is no possible world where they could happen, but that in a world where they could happen, any number of other apocalyptic scenarios could also happen. This is because the worst-case scenarios assume a complete lack of regulations, fail-safe measures, or other checks and balances. And while we have reason to fear that the industry will not adequately police itself or allow policing from other entities, it's a bit of a slippery slope from there to imagining no checks whatsoever.

One piece on AI policy from George Mason University discusses the proposal of Gary E. Marchant and Wendell Wallach to form "governance coordinating committees (GCCs) to work together with all the interested stakeholders to monitor technological development and to develop solutions to perceived problems." This is perhaps a nuanced version of industry self-regulation, but it really proposes to work both within existing institutions and for entities to monitor one another, a sort of commons-based approach where producers keep each other honest. "If done properly," the paper concludes, "GCCs, or something like them, could provide appropriate counsel and recommendations without the often-onerous costs of traditional regulatory structures." Combined with public education about the benefits and risks of AI, perhaps cultural practices will grow to preempt concern about worst-case scenarios. But regulators can always step in where needed.

Besides, once the possibility and knowledge sets exist for a particular level of technology, it's virtually impossible to ban it, or even to enforce a ban on a particular direction or application of its research. This is why Spyros Makridakis, Rector of Neapolis University, writes in a 2017 paper on AI development that "progress cannot be halted which means that the only rational alternative is to identify the risks involved and devise effective actions to avoid their negative consequences."

As we said earlier, though, there's a more realistic apocalypse we need to face with AI: the loss of massive numbers of jobs (assuming we live in a world approaching full employment ever again post-pandemic and actually have jobs to lose). If AI shifts cause massive structural patterns of transitional unemployment and markets do not correct this in a timely manner, the number of suffering people could be overwhelming.

But this ultimately seems like a political question rather than an economic one: Even without the economy transitioning into the accurate definition of socialism, which is democratic control of the means of production, a shift to a universal basic income would preserve some of the basic economic structures and assumptions of capitalism, allow a greater flexibility about defining employment in the first place, and facilitate either transitions into new work or settlement into less work. There's nothing wrong with both dreaming about risks and preparing for inevitable challenges. If AI is a genie we can't put back, we may as well negotiate with it.


'Everything We Do is About Solid Execution and Measurable Results'

Phil Mandelbaum recently interviewed me about leftist organizing and technology activism for herald.news. I got to talk a bit about what makes my digital agency tick.

We specialize in technology projects for left campaigns and causes. Our original slogan was People, Insight, Technology because we like to put together smart teams that solve organizing challenges with infrastructure that scales effort.

I also talked with Phil about my background in data tech, consulting for city, state, and federal campaigns, and working with 175 volunteers to collect more than 14,000 signatures from 46 of California's 58 counties to get Gayle McLaughlin on the ballot in 2018. I talked about my roles with organizing tech tools Outreach Circle and ActionSprout, with Facebook advertising, and with data append vendor Accurate Append.

In sum:

Everything we do at The Adriel Hampton Group is about solid execution and measurable results. Whether I’m building a volunteer team or managing a design project, I’m really looking at maximum impact for effort. I have no doubt that running agency projects has helped prepare me to go hard on actions.

Hope you'll give it a read!


Three Non-Obvious Ways the Covid-19 Pandemic Changes Campaigning

Remember, oh, a year ago, what we thought the 2020 election cycle would be like? There'd be unprecedented ground energy for the presidential candidates' campaigns. There'd be intense downballot races and efforts to flip the U.S. Senate and, following the Virginia results, efforts to flip state legislatures. In local races, we'd be knocking on lots of doors, and in national races, we'd be hosting large events.

Now that every state is under at least advisory orders, and physical human contact itself is a hazard and will remain one for at least the next few months, there's no "ground game" in the conventional sense. We aren't knocking on doors. Sensible candidates won't host events for a while, and if either party tries to hold an on-site convention, it will be seen as an aberration at best and a deadly foolish move at worst—even, I would guess, in late summer (although Tom Perez has said the Democrats want to do it!). We've already seen legitimate questions asked about some states' decisions to hold on-site primary voting, and about what candidates in those primaries should say to voters about it.

A New York Times headline calls the current state of politics "remote mode" and points out that it has especially affected the battle for the U.S. House and, to an extent, Senate races. The Times contrasts "remote" with "retail," as I've seen other stories do. "Retail" campaigning involves face-to-face interaction, while "remote" reaches people in their homes via technology. But the NYT's use of the terms feels clumsy. "Retail" sounds like commerce, and "remote" sounds like we all live further away from each other. I don't think the pandemic puts good candidates further away from their voters.

Instead, I think three interesting things could happen, and in bits and pieces are happening, as a result of having an election during an unprecedented global public health crisis.

1. Good candidates are finding interconnectivity in their communities. We've seen candidates in our districts hold public health forums instead of stump speeches, and be part of networks of public information sharers instead of slingers of mud. Local candidates, especially, must become crisis managers, counselors, advisers, and organizers. It's no longer enough to have traditional expertise or traditional credibility. There are a lot of stories about candidates not explicitly asking for votes and replacing promotional material with public service announcements. Candidates don't want to appear (or be) "selfish." And so, at least alongside and sometimes instead of asking voters or constituents for support, they are "asking them about groceries, picking up prescriptions and responding with mutual aid resources," in the words of one campaign manager.

It's too early and too chaotic to guess the electoral effects of this change in campaigns. Interestingly, even if candidates were not inclined to shift to communitarian altruism as a central campaign message, they are motivated to do so if other candidates go in that direction. The cost of not being able to "hear the room" while your opponent turns into a paladin is probably much higher than the votes you might lose by appearing cooperative instead of competitive.

2. Doubling down on tech—and a new kind of tech. It's predictable that candidates are learning how to use conferencing platforms; and of course, texting voters on a robust messaging schedule was one reason Bernie Sanders did well in the earlier primaries. But we're thinking about what Wisconsin political consultant Joe Zepecki says in a recent New Yorker piece: voters don't live at home; they live on their mobile devices. Zepecki reasons from this observation, which has been turned somewhat inside out by Covid-19 but still holds true, that digital organizing should continue at all possible entry points into a voter's phone, including "e-mail, texts, Twitter, Facebook, Words with Friends, etc."

The work we do to ensure that campaigns have the most accurate data from vendors like our client email and cell phone data provider Accurate Append becomes that much more important.

In fact, before campaign season became pandemic season and everyone cancelled their events and went home, new kinds of technology were taking shape via the "deep canvassing" movement. Those tools now have the potential to connect change #1 above, the campaigner-as-community-advocate, with change #2, this deep turn into technology.

Deep canvassing is the phrase used to describe "developing a nonjudgmental, empathetic connection with a voter through 10 to 15 minutes of authentic conversation." Deep canvassing is even being touted as a way to "talk people out of bigotry."

Deep canvassing is a merging of technology and care. The technology component can even be something like checking a voter's registration status online during the conversation, if they want to know it. One canvasser talks of helping an older woman reach a state of "elation when I looked up her registration and showed her she was still registered." But other programs and platforms allow interactive information-sharing, reminders to do follow-up conversations, and more. Imagine the potential of this style of canvassing as people feel trapped and isolated at home. It's soberingly appropriate.

3. Candidates will be able to campaign on more systemic issues. Nobody wants to be an accelerationist about this, but desperate times do call for desperately creative, desperately radical measures. Suddenly, universal health care not tied to a job makes complete sense to almost everyone. The media and mainstream politicians have learned that precarity is unacceptable. Although some conservative candidates and elected officials are irresponsibly calling for the "re-opening" of public life, moderates and leftists are in favor of greater degrees of aid, debt forgiveness, and housing and health guarantees.

Mainstream sources are treating universal basic income as a legitimate policy option, and more progressive groups are outright demanding it. Spain went ahead and implemented it, which will increase perceptions of its policy legitimacy. Congressional Progressive Caucus Co-Chair Rep. Pramila Jayapal of Washington recently called mass unemployment "a policy choice," and pointed to European countries as having policies in place to either keep people working in safe conditions or keep paying people if they are let go. Jayapal's own proposal includes "payments of salaries of up to $100,000, plus guaranteed retention of health insurance."

Expect elections to continue to spur attempts at deeper communication, deeper technology, and deeper policymaking if we have more of them during pandemics. And, expect us to take many of these new developments back through the looking glass for use in whatever semblance of back-to-normal campaigning we do in the future.


Aesthetics (and Finances) Matter As Space Tourism Takes A Flying Leap

From steampunk and Paleofuture.com to Stanley Kubrick's interpretation of Arthur C. Clarke, the images of space, the future, and esoteric technology have stimulated consumers of speculative fiction. But those images have also influenced actual scientists, tech developers, and planners. The aesthetics of fiction and the implementations of pragmatists are mutually dependent.

About a year ago, a cluster of articles appeared across various media touting the new aesthetics of space travel. The story was that the utilitarian, spartan designs of Cold War U.S. and Soviet space capsules were giving way to an awareness that space travel would also benefit from comfort and pleasant surroundings. The privatization of space travel promises to change the old aesthetic paradigm, or non-paradigm, into a realm where visual appeal synthesizes with technological function.

This may be a manifestation of critical mass for private space tech corporations. The field of space technology is becoming more crowded in general, and more commercial, and SpaceX has its hand in both trends. Last month SpaceX launched several Starlink satellites, the fifth time it had done so. There are now over 300 Starlinks in orbit (heavy satellite traffic and the crisis of space debris are subjects for another post). SpaceX eventually wants 42,000 Starlinks in orbit, a network of internet facilitators that SpaceX envisions filling in all the gaps in the world—which is a laudable goal in the abstract.

Now imagine, rising above all those orbiting machines, a hotel room. We've come a long way from the old paradigm, where NASA pushed back against the idea of space tourism on the then-under-construction ISS. The Russians were far more enthusiastic about the tourism, and the money-making, than the U.S. was back then.

The shift from bare functionality to imaginative aesthetics reminds me of the movie The Right Stuff (I never read the book): both the spartan and uncomfortable experience of being an astronaut in general, and the scene where the astronauts threaten to go on strike, demanding that there be a window in the craft. The field of space tourism, in particular, is one where aesthetics plays a strong role not just in running alongside functionality, but in some ways in determining how to think about what is functional. Space tourism companies are even recruiting well-known earthbound artists and designers to guide this progress. Mary Meisenzahl at Business Insider writes that "Space exploration company Axiom is launching a space tourism program to fly tourists to the International Space Station" and is designing hotel rooms for what will eventually be a space resort independent of the ISS. For this purpose, Axiom, working with NASA, "enlisted 71-year-old French designer Philippe Starck to design interiors for these visits, which are planned to start in 2024. Starck has a history in all aspects of unusual design, from hotels to yachts to an individual wind turbine."

The designs are striking: A giant window observatory where passengers can float and look "down" at the earth, or in multiple other directions. The "modules" or guest rooms appear asymmetrically octagonal (some sides bigger than others). They have plush, firm, pillow-like tiles to comically bump into. Starck talks like an Andy Warholesque artist, saying that the overall design approach comes from "a fetal universe." The multidimensionality of the design is an explicit rejection of an up-down world.

Currently, the plan is for the modules to serve a dual purpose: for the sake of everyday space business and international coordination, the facilities will house astronauts from countries that are not ISS members. But the more exciting part is the tourists, who will pay at least $35,000 to visit. And importantly, the visitors will have WiFi up there.

Re-enter SpaceX, which is providing prototype tourism packages with Axiom Space that start at prices much higher than $35,000. The cost of each of the three prototype tourist packages currently up for sale (one seat has already been purchased) is $55 million. For that price, the tourist travelers (who have to train extensively and pass a variety of physical endurance and health tests in case things get all Sandra Bullock up there) will get to "break the world altitude record for private citizen spaceflight."

Business Insider loves the aesthetics and the excitement of space tourism—they have been running stories on it over the past few months like it's their job. And they love the vision—they have run a couple of stories that are just annotated photos. The ideas are all exciting: inflatable rooms, a promise of space Quidditch matches, and multiple sunsets every day. Designers speak of a kind of hyper-Disneyworld concept. The renderings of what space hotel exteriors will look like are mind-blowing, like the Von Braun Station, a rotating wheel with several chambers and four large "spokes" into the center. Arthur C. Clarke would be envious.

The ultimate aesthetic experience (because nothing beats natural beauty) will probably be found in a space tourism mission SpaceX announced two years ago: taking a passenger around the Moon. According to Sarah Marquart at Futurism, there are currently two alternatives for going to the Moon as a treat. SpaceX offered the opportunity to two billionaires at somewhere between $51 million and $81 million. Space Adventures charges $150 million per seat for a flight in a Russian craft plus a ten-day stay at the ISS.

This brings up a lot of uncomfortable sociological questions, and political questions, of course. To circle back to the theme of this post, what are the aesthetics, who will paint the picture, of a burning, resource-extracted, toxic planet earth, with a series of beautiful spaceships launching upward, filled with billionaires? That will have to be the subject of another post.

AHG in partnership with Accurate Append, a U.S. phone and email data append provider.


The Challenge of AI Regulation—Top-Down or Bottom-Up?

CEOs and the wealthy intelligentsia of technology are calling for regulation in much the same way that Mark Zuckerberg says he'll welcome regulation: as a kind of banner or veneer of legitimacy, designed to decrease risk and increase public ethos. For them, the language of regulation is the language of predictability and a level playing field among the big players.

For them, it's all about risk management, as Natasha Lomas writes at TechCrunch in reference to Google's Sundar Pichai, who published an op-ed in the Financial Times calling for the regulation of artificial intelligence. Lomas recognizes, beneath the taglines of regulation in the public interest, a "suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale"—in other words, a public call for regulation that assumes a world where the tech companies do what they want within that regulatory framework, or even before that framework completely develops.

Lomas also points out that Pichai "downplays" AI's potential negative consequences as simply the cost of doing business, "the inevitable and necessary price of technological progress." There is no discussion of how we might, either as a society or as tech workers, build the ethics and values into our technological development, through democratic deliberation—including foundational deliberation about whether we should be scaling AI technology society-wide. The call for regulation is a kind of blanket placed over the entire discussion of who should make R&D, production, and distribution decisions in the first place, and what framework they/we should use to make those decisions. Let us be responsible for creating monsters, and then you can regulate the monsters.

Perhaps this is the logic that makes Elon Musk fear both apocalyptic AI scenarios and labor unions.

Tech executives get many things about regulation half-right, or kernels of wisdom are found hidden within their "tech inevitable, might as well regulate it" platitudes. Last September Carol Ann Browne, director of communications and external relations at Microsoft, co-wrote a piece in the Atlantic with Microsoft president Brad Smith entitled "Please Regulate Us." The crux of their argument was that since leaders of industry are unelected, regulatory frameworks are preferable because they come from democratically elected officials. "Democratic countries should not cede the future to leaders the public did not elect."

Hard to argue with that. Instead, I am curious about pushing it even a little further. There are many kinds of democracy, and there are many ways to make managerial and ownership positions more accountable. What if, at least in the case of big technology companies (maybe those with $100 million or more in assets?), the tech workers themselves were given a vote on things like ethics and social responsibility? Last November, Johana Bhuiyan had an article in the LA Times about employee walkouts at Google and similar employee-initiated protests at other companies, all in the name of allowing greater employee-based guidelines and decisionmaking. Topics include contracts with Immigration and Customs Enforcement (ICE), an agency under heightened public scrutiny and criticism, and the decision by Apple "to block an app used by pro-democracy protesters in Hong Kong to avoid police."

Imagine a similar framework emerging in AI development in general, where workers and management could participate in deliberative, open conversations, and workers were given a direct vote in controversial decisions. I enjoy working with small businesses of 20 or fewer employees, such as AHG's client Accurate Append, a data processing vendor. Imagine if, instead of a single mammoth company and its close competitors developing self-enriching policy frameworks, you had hundreds or thousands of semi-autonomous creators working openly with society.

AI might be an especially appropriate object for democratization and deliberation given the concerns raised by its development and use. We're thinking of Sukhayl Niyazov's piece in Towards Data Science just a few months ago, describing the concerns raised by what AI might do to democracy itself. Using mountains of personal data to cook up AI tends to result in "information bubbles," which Niyazov calls "virtual worlds consisting of the familiar . . . one-way mirrors reflecting our own views." This artificially engineered echo chamber effect is the opposite of deliberation. So why not invite those who are developing the technology to be deliberative themselves? Yes, the process might be uncomfortable, but many algorithmic tech products are currently developed to reward that very avoidance of "pay[ing] attention to things that are either difficult to understand or unpleasant."

Concern and public outcry over information bubbles, along with other practices, led Facebook late last year to establish a review board for its advertising practices. But surely a company like Facebook could go deeper than that and fold workers directly into internal policymaking.

Back in 2016, the World Economic Forum published a list of the top nine ethical issues in artificial intelligence. The listed issues were:

  • increased unemployment
  • unequal distribution of machine-created wealth
  • humanity: how AI will change human behavior
  • AI making mistakes ("Artificial Stupidity")
  • racism—how AI will duplicate, magnify, and reflect human prejudice
  • security issues
  • unintended consequences ("Siri, eliminate cancer. Wait! Siri, don't eliminate cancer by killing all humans!")
  • the singularity: how do we stay in control of a complex intelligent system
  • ethical treatment of AI itself—treating robots humanely

Internal discussion and at least some degree of worker-level decisionmaking implicate most of these questions directly or indirectly. While increased unemployment may be inevitable in a developing AI universe, workers can push the company to pitch basic income or other ways of justly distributing the labor-saving fruits of AI. Workers can push for better protocols to spot hidden bias. And employees can certainly deliberate on how the machines they create ought to be treated.

It makes sense to at least start thinking in this way, thinking outside the hierarchical box, into seemingly radical avenues of participatory deliberation, because AI itself has the potential to vastly expand the voices of stakeholders in a world where up until now, society has tended to prioritize the voices of shareholders. An Internet of Things and socially responsible big data analytics together have the potential to truly maximize human autonomy.


7 Problems With Predictive Policing

For those who either fear or welcome the world of Philip K. Dick's Minority Report, we're getting there, and it's time to take stock. Although we aren't talking about actual clairvoyance of crimes and criminals, or about preventative detention based on algorithms, the theory that crime happens not randomly but in "patterned ways," combined with confidence that big data can predict all kinds of social behavior and phenomena, has taken hold in cities looking to spend their federal policing grants on shiny things. This is true even though crime is decreasing overall (and, as we'll see below, although violent crime periodically spikes back up, predictive policing is least effective against it).

And while there are legal limits on law enforcement's direct use of some data appending products, we're finding that agencies may use aggregators to get around even the most rigorous civil rights protections.

Not everyone is excited. Here are the most important reasons why:

  1. Policing algorithms reinforce systemic racism 

The simplest iteration of this argument is: most of the data folded into predictive policing comes from police, and much of the rest comes from community members. Racism undeniably exists across these populations, as "AI algorithms are only able to analyze the data we give them . . . if human police officers have a racial bias and unintentionally feed skewed reports to a predictive policing system, it may see a threat where there isn't one." In fact, Anna Johnson, writing for VentureBeat about the failure of predictive policing in New Orleans, says that city's experience basically proved that biased input creates biased results.

  2. Predictive crime analytics produce huge numbers of false positives

Kaiser Fung, founder of Principal Analytics Prep, has a very plainly-spoken and often bitingly funny blog where last month he devoted two posts to "the absurdity of predictive policing."

One thing Fung points out is that certain crimes are "statistically rare" (even if they seem to happen a lot). A predictive model has to generate many more red flags (targets to be investigated) than actual instances of the crime occurring in order to be "accurate."

"Let's say the model points the finger at 1 percent of the list," he writes. "That would mean 1,000 potential burglars. Since there should be only 770 burglars, the first thing we know is that at least 230 innocent people would be flagged by this model." That's a lot of suspects. How many of them will be pressured into confessing to something they didn't do, or at a minimum, have their lives painfully disrupted.

  3. Attributing crime prevention to predictive systems is meaningless: you can't identify things that didn't happen

This is a particularly devastating observation from Fung's posts about predictive policing. If you flag an area or individual as "at risk" and then police that area or individual, you may or may not have prevented anything. You can't prove that the prediction was accurate in the first place, and Fung finds it absurd that sales reps of these systems basically say "Look, it flagged 1,000 people, and subsequently, none of these people committed burglary! Amazing! Genius! Wow!" They can get away with claiming virtually 100% accuracy through this embarrassing rhetorical sleight-of-hand. Call it statistical or technological illiteracy. It's also deeply cynical on the part of those promoting the systems.

  4. Predictive analytics falls apart when trying to predict violent crimes or terrorism

One area where predictive policing does seem to at least . . . predict the risk of crime is property crime. When it comes to literally anything more dreadful than burglary, though, the technology doesn't have much to say in its favor. Timme Bisgaard Munk of the University of Copenhagen's school of information science wrote a scathing review in 2017 entitled "100,000 false positives for every real terrorist: Why anti-terror algorithms don't work," and the title does justice to the article. In particular, Munk points out that predictive analytics of terrorist attack risks borrows from prediction work around credit card fraud. But terrorism is "categorically less predictable" than credit card fraud. In general, violent crime is the least predictable kind of crime.
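The reason is the base rate. Here's a hedged Bayes'-rule sketch, with hypothetical accuracy numbers chosen only to show how a ratio like the one in Munk's title falls out of rarity:

```python
# Why rare events defeat even accurate classifiers. All numbers are
# hypothetical, chosen to illustrate the order of magnitude in Munk's title.
base_rate = 10 / 100_000_000  # say, 10 would-be attackers in 100M people
sensitivity = 0.99            # P(flagged | attacker), assumed
false_alarm = 0.01            # P(flagged | innocent), i.e., 99% specificity

tp = base_rate * sensitivity        # true-positive probability mass
fp = (1 - base_rate) * false_alarm  # false-positive probability mass

print(f"P(attacker | flagged) = {tp / (tp + fp):.7f}")   # ~0.0000099
print(f"false positives per real hit = {fp / tp:,.0f}")  # ~101,000
```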

  5. Predictive policing is mostly hype to make a frightened public trust the police

After reviewing many studies and analyses, Munk concluded that European agencies' choice of predictive policing programs is based more on pacifying the public, particularly a European public frightened of terrorism. "The purchase and application of these programs," Munk wrote in the 2017 article, "is based on a political acceptance of the ideology that algorithms can solve social problems by preventing a possible future." This is striking because there is no evidence, certainly no scientific evidence, that predictive counter-terrorism is a thing. And in a more general sense, there's no consensus that any predictive policing technology works.

  6. There's no such thing as neutral tech

We read a powerful post by Rick Jones, an attorney at Neighborhood Defender Service of Harlem and president of the National Association of Criminal Defense Lawyers. The post is obviously written from the point of view of a public defender, and written to highlight public suspicion of policing technology. But a sound argument is a sound argument. Jones reminds us "that seemingly innocuous or objective technologies are not, and are instead subject to the same biases and disparities that exist throughout the rest of our justice system." Jones may be assuming a "garbage in/garbage out" metaphor that doesn't precisely describe what happens when algorithms and data sets synthesize new knowledge "greater than the sum of its parts." De-biasing that data, "removing" prejudice from its inputs and practitioners, needs to be proactive at a minimum, and even then it may not be adequate.

  7. Guess what data these programs rely on? Data from previously over-policed neighborhoods

Attorney Jones specifically talks about a system called "PredPol," which uses data on the location, time, and nature of crimes to mark "high-risk" areas for future crime. It calls those areas "hot spots," a stunning display of unoriginality. And speaking of unoriginal, PredPol literally uses the very data that policing, and specifically over-policing, has generated. It's basically incestuous data collection that demonstrates the very thing it needs to prove in order to justify more over-policing. It's a "feedback loop" that "enables police to advance discriminatory practices behind the presumed objectivity of technology."
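To see how self-confirming that loop is, consider a toy simulation (a sketch of the dynamic, not PredPol's actual model): two neighborhoods with identical true crime rates, where patrols follow the historical record and crime is only recorded where officers patrol.

```python
# Toy model of the predictive-policing feedback loop. Both areas have the
# same true crime rate; only the historical record differs.
import random

random.seed(1)
TRUE_RATE = 0.05          # identical underlying crime rate in both areas
recorded = [60.0, 40.0]   # biased history: area 0 was over-policed

for day in range(365):
    total = sum(recorded)
    # Allocate 1,000 patrol-hours in proportion to recorded crime.
    patrols = [1000 * r / total for r in recorded]
    for area in (0, 1):
        # Crime is only recorded where officers are present to see it.
        recorded[area] += sum(
            random.random() < TRUE_RATE for _ in range(int(patrols[area]))
        )

print([round(r / sum(recorded), 2) for r in recorded])
# ≈ [0.6, 0.4]: the initial bias never washes out, and the model's
# "predictions" are validated by the very data they helped create.
```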


Sci-Fi Shows Us Benevolence and Vulnerability in AI Characters

Benevolent and vulnerable superintelligent robots are notable because they are atypical. In both the real world and in many science fiction stories, there's something rather grey and mundane about AI. In particular, the stereotype is that AI is either malevolent or neutral-and-waiting-to-be-malevolent. When characters break that stereotype through benevolence or inquisitiveness, they become iconic in their transcendence. This is certainly true of Brent Spiner's Data (and Data's "brother" Lore). But there are a few other noteworthy android AI types that exhibit similarly unusual traits.

With AI about to power "the next generation" of real robots, with tech companies creating "reinforcement learning software for robots" that, in one instance, gets these creations to "pick up objects they've never encountered before," we are seeing the ongoing "anthropomorphizing" of them as well. Sophia was made an honorary Saudi citizen, but the video of interactions with her leaves one hesitant to declare her "revolutionary" in her approach and immediacy to the world. She's pretty stiff, and many of her answers to questions come off as predictable "go-to" subroutines. She's good-looking, though, and not just in the sense that she's an attractive talking mannequin; she also comes off as just the slightest bit curious, wondering what she's doing there, and cleverly self-effacing.

What are some leading AI fictional characters and what are their distinguishing traits? To bring up Lieutenant Commander Data again, one would have to say "his" distinguishing trait is vulnerability. From being discovered and rescued as the sole "survivor" of an attack on his colony, to his endless struggles with identity formation on the Enterprise, Data is vulnerably honest, vulnerably curious, conscious of his power over, and simultaneous dependence on, the material provisions and benevolence of Starfleet. 

Data has (and loses) an emotion chip. According to the Memory Beta fandom site (which is not canon but in this instance simply cites the series), Lore killed the androids' "father," Dr. Soong, stole the chip, and used it to manipulate Data in the TNG episode "Brothers"; eventually Data removed it upon neutralizing Lore. Starfleet ordered its removal from Data but allowed it to be upgraded later, when Commander Data was "reborn." Outerplaces reports that:

"In the hunt to create more helpful, responsive autonomous machines, many robotics companies are working hard to build computers that can empathize with humans and tailor their actions so as to anticipate their owners' needs. One such company is Emoshape, which is building software for robots that will help machines to learn more about humans' moods based on their facial expressions. The company takes a novel approach to this, as engineers work to create an "emotion chip" for machines so that they can approach emotional learning with some degree of understanding as to what it feels to be happy, or sad, or otherwise frustrated."

Data has many existential vulnerabilities: computer viruses, energy discharges, ship malfunctions, and someone reaching his "off switch." But he is also vulnerable to having his feelings hurt, whatever those are. 

Polish sci-fi writer and satirist Stanislaw Lem, who wrote Solaris and who has been called science fiction's Kafka, developed an AI character called Golem, whose main attribute could be called "change," or evolution. Golem begins as a military AI computer but develops self-consciousness, then engineers its own intelligence supplements. Lem's book includes "lectures" written by Golem on the nature of humanity, and it reads like Olaf Stapledon (whose work is an early, metaphorical foreshadowing of big data: a superintelligent meta-history of humanity and the universe). Golem becomes concerned with understanding and critiquing humanity from a scholarly perspective. The idea of a robot scholar is pretty original. Check out this short film based on the story.

Then there's Ray Bradbury's Grandma, a character whose main trait is certainly benevolence. Grandma appears in the innovative story I Sing the Body Electric as an "electric grandmother" product. A father buys her for his children after their mother passes away, and she quickly becomes an indispensable member of the family, although it takes a while for every last member of the family to learn to love her.

Grandma has unusual traits, like being able to run water from a tap in her finger, but she also has the characteristic of being 100% committed to the children, in a way that is clearly compatible with Asimov's robot ethics. At one point, she risks her life to save one of the children. This, of course, reminds us of DroneSense, a drone software platform that purports to be "used for public safety," although without such drastic scenes as an android racing to save a child from being hit by a truck. One can obviously ask, "But does it want to be benevolent?"

The deeper question in the industry, though, is not whether AI will "want" to be benevolent, but whether certain traits in the actual construction of AI will tend toward good or evil. In an article published four years ago, Olivia Solon argues that it is much more likely that artificially intelligent robots will hurt us by accident than by intentionally "rising up against us" or turning against any individual humans deliberately. She points out Elon Musk's speculative fear that "an artificially intelligent hedge fund designed to maximize the value of its portfolio could be incentivized to short consumer stocks, buy long on defence stocks and then start a war." Making the wrong decision in traffic scenarios is always high on the fear list, too. The "bumbling fool" AI is less terrifying than the malevolent robot, even if it may end up being the more dangerous scenario.

It is worth noting that while these are always the top-of-mind concerns, the vast majority of AI will be intentionally limited by design, deployed in a neutral way to help feed the newest and most important asset in the world: data. It is these AI systems that are greatly changing the marketing world and how companies like our client Accurate Append, an email, phone, and data vendor, operate within it.


Election 2020: Boots on the Ground & Bits in the Cloud

I'm getting excited about the election. I feel my pulse get a tiny bit faster watching political ads, getting text messages, seeing people volunteer. It feels "American" even though I know we don't always live up to our ideals. Not all the Founders of the United States even wanted a popular vote. "If elections were open to all classes of people, the property of landed proprietors would be insecure," James Madison feared during a secret debate in 1787. But he didn't prevail over his colleagues who subscribed to the view of Thomas Paine, the conscience of the Founders if not fully counted among them. Paine wrote: "The right of voting for representatives is the primary right by which other rights are protected. To take away this right is to reduce a man to slavery, for slavery consists in being subject to the will of another, and he that has not a vote in the election of representatives is in this case." 

Today, candidates have to have both a ground game and a digital game. You can judge a ground game by the number (and location) of campaign offices; Buttigieg and Warren lead in this metric. Or you can count the number of volunteers a candidate has: Bernie has 25,000 in Iowa alone, an impressive number working on those all-important caucuses. And last February the campaign announced that "more than one million people have now volunteered to support the senator's 2020 bid." Or, more precisely, to do volunteer work to support the campaign. The campaign has well over 100,000 active supporters in Pennsylvania, calling across the state and organizing in cities like Pittsburgh.

Speaking of Iowa, and as impressive as Bernie's volunteer operation there is, Joshua Barr at 538 recently posted a great analysis comparing Barack Obama's fieldwork in Iowa to that of all the current Democratic contenders, finding that none of them match Obama's 2008 Iowa ground game. That campaign had field offices in the smallest of towns and rural counties. One wonders how important the candidates feel Iowa is in 2020, although the top tier seem very invested in it.

There's no doubt that Bernie will have boatloads of volunteers, and one could easily see the scenario where he has more than any other candidate. But "a million" sign-ups might mean only a fraction of actual volunteers showing up—a calculation that all campaign volunteer coordinators have baked into their analysis of what can be done. Volunteers can be fickle and unreliable. But many hands make light work, and operations that make volunteers feel important and appreciated will keep enough of them coming back that a lot of campaign work can be done. 

The Sanders campaign is on to something, as a recent HuffPost piece describes: they have a vision and a method. They empower people to host house parties and deliver stories, they use a lot of texting, and the campaign has created "an infrastructure to facilitate the work of its most dedicated supporters." More and more campaigns investing in this outreach, especially via SMS messages, are using vendors like our client Accurate Append, an email and phone contact data quality vendor, to acquire those mobile numbers.

Far more money is being spent on digital advertising. It's not just for the weird world of mass microtargeting, either. Digital ads can also test campaign messages, which can then be transposed into television advertising, which still dominates election media, particularly in the two months before election day. But despite that TV focus, by "September, presidential hopefuls had cumulatively spent $60.9 million on Facebook and Google ads compared to $11.4 million on television ads, according to an analysis by the Wesleyan Media Project." Voters also give feedback online. Data, and lately big data, have played a role in building strategies out of social media engagement.

All of this emerged from Barack Obama’s use of Facebook ads in 2008—what people in the field call a "turning point." One expert "predicts that $6 billion will be spent on paid advertising during the 2020 election" with most ultimately going to broadcast and cable television, but at least $1.2 billion on digital ads. 

It's when the two types of campaigning are combined at scale that you know a candidate is serious. The Republicans are often written off as lacking ground games, but that accusation would be laughable in 2020. Whatever Trump's approval numbers, and whatever support he may have shed from those who did not know what to expect from him, there's no doubt that his supporters will make every effort to be organized and proactive; now they have a president to defend. And a portion of the billionaire class has the money to spend.

In Michigan, "Republican President Donald Trump’s re-election campaign is training volunteers for what his national press secretary described as the most advanced ground game in modern political history." If Trump wins Michigan, well, it's a ballgame at that point. The Michigan Republican Party is facilitating national training sessions, and the campaign is distributing outreach tools to the states. But the RNC also has a digital game that it's poured $300 million into since 2014. We know the Trump campaign will pay trolls and botmakers and all kinds of craftsmen for social media engagement. 

So these are the thoughts that keep me alert to news of both grassroots campaigning and digital work, including digital shenanigans that make me cringe. The game is afoot, and there will be unprecedented human and monetary capital invested in its outcome. I'm not the only person who feels oddly, perhaps ironically, patriotic about it. In an essay called "Democratic Vistas," Walt Whitman wrote: "I know nothing grander, better exercise, better digestion, more positive proof of the past, the triumphant result of faith in humankind, than a well-contested American national election." Now, I can think of a few grander exercises or more fun ones at least, but there is certainly some humanist pride in the whole enterprise, as corrupted and malleable as it sometimes seems.


What Does 2020 Hold for Big Data, AI, and Tech?

Forbes predicts "AI, Disinformation, and Human Augmentation" in 2020 and I can't say I disagree, but let's take a deeper dive. I'm especially interested in the way that new technology, and new conversations, are building upon existing ones. 2019 gave us lots of discussion about AI, quantum computing, cryptocurrencies, and unethical political advertising via microtargeting. Yes, Forbes says, these discussions will continue. But here's what I'm looking at. 

Big data does IoT: The most promising technological evolution to continue into the new year is the merging of data analytics with the Internet of Things (IoT). The heralding of IoT a few years ago has not proven unwarranted. The promise of an integrated material and informational life, with more efficient and appropriate exchange and delivery of everything, is taking shape. The integration of more and better data analytics will take this even further. This is first on Marcel Deer's list of important predictions for big data in 2020: "This time next year, we can expect to have 20 billion IoT devices collecting data for analysis . . . This means we will likely acknowledge more analytical solutions for IoT devices, helping to provide transparency and more relevant data." The business implications of this trove of data will be interesting to watch develop as well. Those in the data appending industry, like our client Accurate Append, an email and phone contact data quality vendor, might see new ways to help businesses better connect with and understand their customers.

Shortage of data science pros: Regarding IoT analytics and the AI sector in general, Deer also says "around 75 percent of companies might suffer while accomplishing matured benefits of IoT due to a lack of data science professionals." There are a lot of late-in-the-year stories floating around about this now, such as Rainmakrr's coverage of recruiter agency demand in the UK and Upside's prediction that demand will grow in 2020. The Trump administration's extension of its immigration caps on H-1B visas won't help matters, and that's likely to become a political showdown as the administration tries to step up its anti-immigrant red meat efforts to solidify votes in the 2020 election. Stuart Anderson at Forbes says that those increased restrictions will be a story next year.

In-memory computing: I'm putting quantum computing aside in this post, even though it was one of the biggest stories of 2019 and will probably continue to be discussed (but see this post saying it all might come to nothing). Something almost as mind-blowing is happening with in-memory computing, where you store data in RAM across many computers and implement parallel processing that's 5,000 times faster than processing on individual machines. Deer points out that the "decreasing cost of memory technology" will popularize in-memory computing, augmenting real-time sentiment analysis, machine learning, and a host of AI aspirations. Just to pique your interest further, one system achieved a billion financial transactions per second using 10 commodity servers and equipment that cost less than $25,000.
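For a single-machine flavor of the idea, here's a minimal sketch (my illustration; real in-memory computing platforms partition RAM across many nodes): keep the working set resident in memory and fan the computation out across cores, rather than paying disk I/O per operation.

```python
# A single-machine sketch of in-memory parallel processing: the dataset
# lives in RAM and is partitioned across worker processes.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker aggregates its own in-memory slice.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(10_000_000))             # the "hot" dataset, in RAM
    workers = 4                                # roughly one per core
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```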

Tech, mental health, and cybernetics: I also wonder about the ongoing discussion on technology and mental health. Two years ago, the Healthy Living Blog cited a Duke University study that aligned with the conventional wisdom of the time—that adolescent use of social media technology was associated with high ADHD symptoms. I've always been a little troubled by the ableism in these kinds of reports, but have found it hard to articulate my suspicions. Something about where you draw the line on the technological enhancement of communication; the fact that people treated telegraphs the way we treat social media now; and some other sentiments.

But that older Healthy Living Blog post also cited studies from the University of Michigan (decreased happiness), University of Gothenburg (depressive symptoms), and still more studies finding psychological withdrawal and "poor mental health" in general.  

Look for new voices to push the conversation ahead in 2020, raising different concerns, including the ways in which social media can improve mental health. As a foreshadowing of this, in December Jenna Tsui wrote about the mounds of narrative data rolling in, written by people with mental illness, lauding some platforms for making them "feel less alone by acting as a peer support mechanism." The Dartmouth study she cites analyzed 3,000 comments and found clusters of content on feeling less alone and on coping with the fear of mental illness.

I don't think we need to limit that discussion to those with explicit, self-identified, or diagnosed mental illnesses either, although those are important. I think these platforms offer peer support, validation, and connectivity in general, and as with any medium, it's important to weigh how they do and how they don't. The Dartmouth research is qualitative, and so it differs from the more data-driven findings that raise concerns over adolescent tech use, but it opens the door to a larger conversation about our cybernetic identity and evolution, and I hope and expect this to be a deeper topic of discussion in 2020—maybe even combined with talk about the need to democratize and increase the transparency of platforms that are currently implicated in spreading false news through microtargeting.

Watching the watchers: Finally, 2020 should see continuing concern over surveillance technologies like facial recognition and large-scale DNA database access. Facial recognition tech was still yielding confirmed findings of racial bias as of December of this year. Concern over police powers is not letting up. Even though the United States Supreme Court is becoming more conservative, two years ago in Carpenter v. United States the Court took the notable step of finding that there were fourth amendment issues in public surveillance, something it hadn't acknowledged before, having always bought the police's argument that there's no expectation of privacy in public. And lower courts have weighed in: last summer the Ninth Circuit held that "the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual's private affairs and concrete interests." So I'll be interested to watch the dynamic between leaps in the technology due to big data and the legal debates that emerge.

It looks like 2020 will be a race between the good news and the bad news on the computing technology front. Happy New Year and may you live in non-interesting (or at least benevolent) times as much as possible!


Per-Vote Municipal Election Spending and Climate Change: Seattle Questions

Crises, divisions, and battle stakes are all accelerating. That's why it's increasingly important for political candidates to have good information on voters, using vendors like our client Accurate Append, an email and phone contact data quality vendor, to have accurate data for outreach. Despite the undeniable fact that you have to spend money to win elections, the dynamics and optics of that spending are also important.

You'll find a shorter post I wrote here from a couple of weeks ago, written while results were still being tabulated for the King County (Seattle) elections. There, I pontificated on the folly of Amazon and other corporations spending so much money on these local races—and losing most of them. But now that the King County results are finally in, we can springboard into a deeper and weirder discussion: what are these corporate stakeholders, and other donors, spending their money on?

2019 was unprecedented: votes for "Egan Orion and Heidi Wills, two losing candidates who were backed heavily by big businesses like Amazon," cost nearly $59 and more than $50 per vote, respectively. Those figures are much higher than the average for the 14 city council campaigns overall, nearly $29 per vote, though even that average is a lot of money. We usually associate big election spending with national races, particularly presidential elections. But in so many ways, the local is more politically real than the national anyway. And as we'll see a bit later, municipal policies are going to make or break communities as the effects of climate change begin taking their toll, particularly in coastal states like Washington.

Not only do we associate spending with federal elections; we also tend to think more, talk more, and participate more in those national races, and such priorities don't actually serve us. Last year, Lee Drutman wrote an article for Vox lamenting that "America has local political institutions, but nationalized politics. This is a problem." It's a problem, Drutman says, because data indicates people consume far more national than local news, and behave accordingly, despite the fact that only 537 federal elected offices exist, compared to around half a million state and local electeds. That huge abstraction of political energy into a realm where individual votes matter far less than they do in mayoral or county commissioner races means that highly ideological and spectacle-oriented national political parties control public discourse—making it more about drama than actual policy. 

Drutman discusses Daniel Hopkins' recent book on the nationalization of political behavior. Hopkins' argument that the United States prioritizes "place-based voting" is even more provocative now that much of the world is shifting towards a more migratory existence. 

"Climate migrants" (who are not legally considered refugees, although this could change in the future of international legal activity) are those of us who have moved, are moving, or will move in response to weather events, food availability, resource conflicts, and other crises, and the numbers of them are going to grow exponentially in the coming decades. We have no idea how many people will be moving around the world, but we have good reasons to think it will be more than we end up estimating. The movement will take place both from country to country (or, alarmingly, from country to permanent nomadism), and within countries. The number of people completely abandoning their part of the world is likely to be in the hundreds of millions over the next century at the very least. 

In June of this year, the Center for Climate Integrity released a report showing that Washington would bear the highest cost of all West Coast states in protecting and rendering sustainable those communities most likely to suffer from the climate crisis. "Beyond laying out broad cost estimates, the report also questions who will foot the bill for climate adaptation." This debate generally consists of folks on the left saying that fossil fuel companies ought to bear those financial costs, and those on the right continuing to argue against redistributive regulation. 

Seattle's city council recently passed a resolution committing the city to one of the strongest localized Green New Deals in the country, requiring drastic emissions reductions "while increasing affordability for low-income families." Under this vision, the city will be carbon emissions-free by 2030, will invest in neighborhoods that have been historically marginalized and unfairly hit with the worst environmental harms in the past, and will confer with indigenous people and tribal nations on climate policy. We can expect future city council decisions, at least for the foreseeable future, to do more of this boundary-pushing.  

But a curious part of this, one which I don't think has been examined politically or philosophically, is the tension between the U.S. being home to a "politics of place," to paraphrase Hopkins, and the likelihood that people may not be staying long, or much longer at least, in those municipalities and surrounding greater city areas if staying there is financially or physically hazardous because of climate change. Here is where we approach a very weird convergence of local politics, a national anti-corporate zeitgeist, and deeper philosophical questions of the cost and long-term consequences of digging our heels in for or against public spending. Consider just the Seattle race. 

First, consider that it was won by the left in a come-from-behind victory, at least perceptually. A few days before the elections began, and even in analyses of the initial (and ultimately misleading) results, critics of Seattle's "progressive-socialist" coalition government were predicting that the scare tactics and promises of "responsibility" by the "moderate Democrats, neighborhood groups, and public-safety unions," along with Amazon & Co.'s big bags of money, would pay off. But just before the election began, Amazon threw another $1 million to the Chamber of Commerce's political action committee. Then the race turned into a referendum on corporations buying elections, and that, combined with a base much more loyal than the skeptics had supposed, resulted in the progressives and socialists pretty much running the table.

Second, imagine if Amazon's candidates of choice had won in November 2019. Those candidates generally had more negative views of taxes, and more neoliberal views on which entities (if any) ought to be providing services to the public. A slate of candidates backed by a big business might disfavor the tax aspirations of the current Seattle city government. Consider that this outcome might shift the course of Seattle's long-term population trends. What if Amazon hadn't made that strategic error and had instead genuinely changed the composition of the city government? It's possible a more "business-friendly" municipal government would reverse course on some of the public-oriented climate response policies. And, over the next 50-ish years, that relative diminishing of public climate mitigation actions, now replaced by either wishy-washy private initiatives or nothing at all, might further drive emigration from King County. Then the per-vote-spending becomes even more surreal. Or, those in favor of strong corporate influence might have been proven right, the private sector offering better climate adaptation solutions to the city. 

Third, consider what would happen if voter turnout in local races were to increase. Right now about 1 in 5 people vote in exclusively local elections—a figure much lower than voter turnout in national elections. The collection of voter data, a project taken on by institutes such as Pew Research, can actually increase voter turnout if the research, and contact with research subjects, begin before election day. And there's also some misreporting by respondents who want to appear more engaged than they actually are, and who will report they'd voted when they hadn't.

So we dip into a double irony when we think about how absurd it is for municipal candidates to rely on spending that hits $50 or more per vote gained, but also how such an investment is truly an investment in votes rather than in the residents themselves, who, depending on the policy orientation of local leaders, may end up being literal climate refugees or another category of municipal expat.