Is a Mars Mission Feasible?

In an interview late in his life, Carl Sagan speculated on the reasons humans might want to colonize Mars. "I don't know why you're on Mars," he said. "Maybe you're there because we recognize we have to carefully move small asteroids around to avert the possibility of one impacting the Earth . . . maybe we're on Mars because we recognize that if there are human communities on many different worlds, the chances of us being rendered extinct by some catastrophe on one world is much less. Or maybe we're on Mars because of the magnificent science that can be done there, that the gates of the wonder world are opening in our time. Or maybe we're on Mars because we have to be, because there's a deep nomadic impulse built into us by the evolutionary process."

In theory, everybody wants to go to Mars, at least in the sense that it remains the aspirational goal of nearly everyone interested in space travel to send humans to Mars and possibly establish a colony. Even Apollo astronauts like Buzz Aldrin and Michael Collins want us to do it. At one point, the president apparently offered NASA an unlimited budget with the mandate that it stop everything else it's doing and focus entirely on Mars. The consensus (and an obvious conclusion) is that we won't get to Mars during this presidency, but it's not clear the president knows that. Whether he knows it or not, his administration is compromising by encouraging stepped-up "lunar missions seen as vital steps toward sending Americans to Mars by 2033." Congress apparently wishes the administration were moving even faster to Mars, and the U.S. House of Representatives has continued to pressure the administration to prioritize Mars over more Moon landings.

Numerous private sector forces also give us reason to be optimistic. Those companies will need massive government funding, though. They often operate on fairly thin profit margins, particularly when they're developing cutting-edge technology with high development and production costs. Making the jump from Earth-based demonstrations to replication on other planets is a huge leap of faith that private investors are unlikely to want to take on their own.

NASA Administrator Jim Bridenstine said last year that he did not "rule out a first human mission to Mars as soon as 2033." He pointed out that NASA is working on such a plan based on already-existing (or nearly developed) technologies used to get to the Moon, and it's the assumption that this technology can be transposed that accounts for the otherwise audacious speculation that we could actually get to Mars in 13 years. Less optimistic analysis says such a mission could not be conducted before 2037, but for those of us watching from home, a four-year difference in projections seems like splitting hairs. More rovers are planned to collect soil samples from Mars to aid in planning human settlements there.

But the naysayers — and there are many — say that, at present, too many feasibility issues exist. These include cost (which is more of a political question) and technological capacity. The cost has to be looked at as a function not only of the distance to Mars (and thus the need to pack tremendous amounts of supplies for both the journey and the stay), but also of the cost of each individual piece of the project. "A trip to Mars would take six to eight months each way, plus the time it would take astronauts to explore the planet when they get there," according to experts. Other consultants point to the high cost of transporting things from Earth to Mars, on the order of $1.5 million per pound of instruments, robots, food, etc.
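To get a feel for what that per-pound figure implies, here is a back-of-the-envelope sketch in Python. The crew size, mission length, and daily supply mass are purely illustrative assumptions, not NASA estimates; only the $1.5 million-per-pound figure comes from the consultants quoted above.

```python
# Rough transport-cost arithmetic for a crewed Mars mission.
# Only cost_per_lb comes from the article; everything else is an
# illustrative assumption chosen to make the scale visible.
cost_per_lb = 1_500_000           # dollars per pound, figure cited above
crew = 4                          # assumed crew size
mission_days = 900                # ~6-8 month transits plus a surface stay (assumed)
supplies_lb_per_person_day = 10   # food, water, oxygen, packaging (assumed)

supply_mass = crew * mission_days * supplies_lb_per_person_day
print(f"Supply mass: {supply_mass:,} lb")                   # 36,000 lb
print(f"Transport cost: ${supply_mass * cost_per_lb:,}")    # $54,000,000,000
```

Even with generous rounding, the supplies alone run to tens of billions of dollars before a single instrument or habitat module is counted.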

Then there are the dangers found in both the journey to and the "settlement" of the planet. As George Dvorsky at Gizmodo writes, "a Martian colony would be miserable, with people forced to live in artificially lit underground bases, or in thickly protected surface stations with severely minimized access to the outdoors" — a recipe for sickness, depression and other dysfunction. Mars has no magnetic field and only a thin atmosphere, and is therefore acutely vulnerable to radiation.

On the question of radiation, NASA appears to want answers sooner rather than later. The agency is sending radiation sensors on its upcoming lunar launch tests to track exposure levels, which can help scientists calculate the radiation dose of the much longer journey to Mars. We already know that there would be heavy radiation exposure on the trip there and back, and that any travelers would be sitting ducks — fried ducks, even — on the planet. Being there would be dangerous unless one were so well shielded that one's body wouldn't be in contact with anything Martian or human. At that point, why not just explore it virtually?

Orbital missions may have to suffice — assuming the radiation of the journey can be mitigated. Perhaps an orbital station could serve as an interim mission, allowing very short trips to the surface, probes to both Martian moons, and more.

Several years ago, Bethany L. Ehlmann, Professor of Planetary Science at the California Institute of Technology, wrote a feasibility study and cost-benefit analysis of a Mars mission. Ehlmann concluded that the mission would be economically feasible, that technology could be developed to overcome the radiation risks to travelers, and that ultimately such a decision is "political" rather than scientific. That may very well be; as with so much of what we take to be "natural," feasibility is in the eye of the beholder and is a question of what we are willing to prioritize in terms of economics and human resources. But given the political and economic playing field as it currently exists, we could not presently get to Mars (at least not stay on the surface long enough to establish a semi-decent base camp) without cutting some dangerous corners.

Until then, whenever "then" is — that is, whenever we make the political decision that the comprehensive mission is feasible — we will likely keep stepping tantalizingly closer. We already know it isn't impossible. In many people's minds, that means it's inevitable.

 

This post is sponsored by our client, Accurate Append, which offers affordable, high-quality email, phone and data appending services.


Can Tech Help Get Out the Vote in a Chaotic Election?

 

Success in getting out the vote — motivating potential voters to register and then participate in mail-in or in-person voting — acts as the canary in the coal mine for American democracy. Voting is something that the least powerful and most marginalized groups in U.S. society don't do nearly as often as their wealthy and more privileged counterparts. Dambisa Moyo, who writes about democracy, pointed out in a New York Times piece last year that "nearly half the people who don’t vote [in the United States] have family incomes below $30,000, and just 19 percent of likely voters come from low-income families. So it’s hardly surprising that the Economist Intelligence Unit's Democracy Index downgraded the United States from a 'full democracy' to a 'flawed democracy' in 2017, based on diminished voter engagement and confidence in the democratic process."

So with a consensus that there's a lot at stake in the 2020 elections, you'd think voter turnout would be at an all-time high. But that thinking doesn't take Covid-19 or outright voter suppression efforts into account. Some people are (justifiably) afraid to go out, while others are nefariously discouraged from voting. And this time around — in 2020 — fighting these trends won't be easy. Not only are we experiencing a never-ending pandemic, but it's also impossible, even under the best circumstances, to keep track of all the illegitimate mechanisms some states are using to keep people from voting.

In normal circumstances, in-person interactions are the key to almost all effective political engagement, especially where motivating people from scratch is concerned; but, as has been established, these are not normal times. We might be able to maintain relationships remotely, but starting those relationships is much more difficult as long as constituents, potential voters, donors and/or activists must remain at technological arm's length. Though some candidate and voter drives are taking place in-person, these events carry a COVID-19 stigma and, even in safe conditions, most people will not show up to things they don't have to show up to. With the election only a handful of days away, technology is all we really have. 

Getting out the vote requires two elements: people need access, and they need enthusiasm. One can explain access remotely but not provide it. And one can transmit enthusiasm, but one had better be darn good at it. Tech companies, social media platforms, and app makers are all doing their best. Some companies are providing tools that bolster existing GOTV efforts by connecting campaigns and organizations with potential voters. Accurate Append, for example, provides data append, phone append and email append services that help facilitate the important work of getting out the vote.

Some tech companies are doing GOTV work themselves. Several platform and content creators in the greater Los Angeles area "are leveraging their influence to encourage voting by Gen Z and millennial audiences as registration deadlines approach." These companies and innovators are creating voting guides and tools that they hope will "motivate first-time voters to cast a ballot." Most GOTV tech apps already begin with the premise that in-person interactions are important, and so a handful of them seem to be making an effort to replicate those interactions, either through peer-to-peer GOTV reminders and canvassing, or through widespread campaigning on the availability of the tech. On that note, Snapchat had registered a million voters by October 1 of this year, and over 60% of those users were between the ages of 18 and 24. 

Why is the 18 to 24 cohort important? Age and adaptability are key factors in the tech transition. Since the last presidential election in 2016, over 15 million people have turned 18, and are thus eligible to vote for the first time. Even a fraction of that number can swing state and local races. In fact — as we saw in Michigan, Wisconsin, and Pennsylvania — a few tens of thousands of people voting or not voting can even swing the electoral college for a presidential contender. 

Events (which can be done in a fun way online, with some planning) are still important in building GOTV enthusiasm. Recently at the Massachusetts Institute of Technology, student groups hosted a GOTV Fest focused on motivating communities of color to vote. One thousand people attended online, thanks to the efforts of MITVote, the Asian American Initiative, the Latino Cultural Center, the American Indian Science and Engineering Society, the Chinese Students Club, the South Asian Association of Students and the Black Students’ Union. The event was held over Zoom.

The sweet spot would be finding a way to utilize the spirit of deep canvassing in the service of mobilizing first-time voters. It's already been used (and tested by research) in the context of deliberation designed to increase tolerance and decrease bigotry. That research tried to identify “the secret ingredient that makes deep canvassing work" and examined the role of actual deep communication with people in Tennessee, Central California, and Southern California, all prior to the 2018 elections. These are places where cultural worlds collide, including places with strong nativist and conservative tendencies running smack into the reality of ICE raids and immigrant workers. 

Of course, perhaps the reason tech companies aren't doing more to get out the vote is that those companies themselves aren't particularly good at encouraging their own workers to vote. This seems to be the case with Amazon, a company purporting to have progressive values. Thousands of tech employees at Amazon recently "signed a petition calling for the e-commerce giant to provide paid time off to all of its employees to vote." That's 1.3 million workers in total (counting both Amazon and Amazon-owned Whole Foods) who, if eligible, would benefit from this policy and the resulting accessibility of voting. That's not a small number.

The petition calls for eight hours of paid leave, and if that seems like too much, consider that some people are waiting in line for longer than that just to be able to vote early. Additionally, the eight hours can be spread out — time to register, time to volunteer, and time to vote. That is a sweet package, and Amazon should grant it. 

Technology’s attempt to keep up with democracy's demands has always been tough, but it's been made tougher because people are rightly afraid of too much social interaction and because the government currently displays no support for voting rights. Thankfully, people step up when institutions fail; and the tech sector — increasingly decentralized and facilitative of grassroots activism — has stepped up impressively.


Ideographs in the Economy of Political Communication

If you were to open up any campaigning handbook to the messaging section, you’d inevitably find exhortations, commands, and reminders to keep your messages short: short slogans, non-complex sentences, memorable short phrases. It’s reminiscent of being lectured by older folks about saving money when I was younger; there’s a similar appeal to the scarcity of resources.

Anybody who works on campaigns understands that discourse is an economy. Simply put, we have limited time and limited space in which to say as much as we can, and we hope that the right things get remembered. Although we can lean on algorithms and tip the scales through microtargeting and other data-driven surgical strikes — something that data append vendors like Accurate Append can help support — no candidate can or will try to escape the burden of messaging. It’s just too important.

Rhetorical scholars study this economy of discourse, the way meaning can be densely packed into words and other symbols and signs. For Aristotle, rhetoric was a skill: the ability to see the available means of persuasion in any given situation. He believed that audiences shared common iconic thoughts and history with speakers, so speakers should naturally use familiar words and phrases to take advantage of those topoi, those common places.

One of the more interesting analytical tools for studying the way that a few words can speak many more words — or how symbols and words can combine — is the "ideograph," the most comprehensive treatment of which comes from the work of communication and rhetoric scholar Michael Calvin McGee.

The ideograph is subtly different from the "ideogram" although the two are sometimes used interchangeably. An ideogram is a graphic symbol representing a concept independent of a particular language. The parameters of that definition are a bit fuzzy, but what scholars typically mean are things like Egyptian hieroglyphics, or symbols put in multilingual public spaces like the fifty DOT pictograms conveying things like train stations, hotels, or toilets. Ideograms are symbols that mean words.

In contrast, ideographs — as Michael Calvin McGee explains in his definitive 1980 article on the ideograph — make symbols out of words. Ideographs are phrases or sentences that create or reinforce political and, ultimately, ideological positions. That these are ideological and not just political is important. Politics are minutiae: policies, one candidate or another, personality disputes. Ideology is systemic, moral, committed. A slogan saying simply that public transit would save energy would not be a very effective ideograph. On the other hand, one that said "public transportation: good for everyone, good for the planet" pushes the idea into the realm of the ideological. It conceptualizes public transportation in terms of the common good, the need for universal infrastructure, a commitment to environmental sustainability.

There's an assumption in McGee's work — and in much study of political rhetoric — that ideology is always going to be characterized by sloganeering of one kind or another. The assumption is that things must be simplified, though not because people are unintelligent and cannot understand ideas in all their complexity. The goal is not to be like Snowball, the revolutionary pig in Orwell's Animal Farm, who deliberately simplifies everything for the animals who aren't as smart. Rather, the goal, in recognizing the ‘economy’ of discourse, is to economize our words: our time and resources are scarce, other people's time and resources are scarce, and the demands of the world and the diversity of culture and thought (particularly in a huge country like the United States) force the choices of a finite world.

Ideographs work because people already (mostly) understand and agree with them when they see them. In containing their unique ideological commitment, McGee argues, ideographs rest on the assumption that everyone in a particular "community" or cluster of belief will understand their complexity and nuance. Consider a well-known joke as an analogy: the purpose isn’t the punchline, it’s getting people into the joke itself. 

In this respect, ideographs often use another rhetorical device called the enthymeme. In its classical sense (again developed and explained by Aristotle), the enthymeme is a type of syllogism in which one of the premises is hidden or suppressed. The word has since come to mean any kind of argument, formal or informal, where the speaker or writer assumes that the audience already knows part of the information necessary to walk from the introduction to the conclusion. When a Republican operative said in 2007 that they knew the American people weren't going to elect "Barack Hussein Obama" as president and over-enunciated "Hussein," that was an enthymeme. There, the suppressed premise was that the name "Hussein" denoted terrorism, an Iraqi despot, and/or Islam (thereby stoking Islamophobia). Enthymemes are natural allies of ideographs, because they reduce the number of words needed to make a point and don't have to be argumentatively or logically accountable for every word used.

Trump’s slogan, "Make America Great Again," is a strong ideograph. It's a short and proactive phrase using ordinary language to indicate a virtual Las Vegas buffet of ideological commitment. Its conservatism is found in the word "again," suggesting a return to the past. Its radicalism and populism are found in the word "make," indicating the need for proactive restoration of a golden age. The enthymeme — the argument with the hidden premise — is "America," which implicitly indicates that certain forces have rendered America no longer great. These forces might include President Obama, hatred of whom is an obsession for Trump. Similarly, they might include the "deep state" that the administration constantly invokes to convey an image of powerful corrupt insiders.

McGee calls the ideograph "a high-order abstraction representing collective commitment to a particular but equivocal and ill-defined normative goal." That ambiguity, along with its ability to do a lot with a little, gives it utility as a tool of campaigning and mobilization. It creates a common mythos, strengthened by each person’s commitment to an unstated moral imperative. 


Propaganda vs Disinformation: What's the Difference?

The idea that powerful people lie to us to achieve political objectives seems like the bleakest of political truths. There is a silver lining, though: it's the idea that they think they have to lie to us, that they must lie to us because they would be unable to achieve those objectives without lying. If that's true, then it's also possible to understand their lies, why they lie, and why the lies work. Once we do that, the thinking goes, we can fight back against dishonesty.

Casting political lies as a problem and methodological understanding as the solution is pretty modernist, grounded in Enlightenment thinking: lies are a problem, diagnosis and understanding are the beginnings of solutions. I think that an understanding of what we mean by "disinformation" complicates this problem-solution scheme, but not fatally. What we need to do is understand disinformation not so much as "political and economic leaders lie" as "there is always already disinformation."

By "always already" I don't mean, and I hope it doesn't sound like I mean, that every political statement from the elites is a lie, or even that every political entity is involved in disinformation. It's true that there is "spin" in every political statement, but both solidarity (how committed the leader is to her constituents) and motive are important, and just because something is "rhetorical" or even "propaganda" doesn't mean it's disinformation. Some definition of terms:

Propaganda refers to "information, ideas, opinions, or images, often only giving one part of an argument, that are broadcast, published, or in some other way spread with the intention of influencing people's opinions . . ." I like this definition because although it admits to the one-sided nature of propaganda, it stops short of calling propaganda dishonest per se. George Orwell is famous for declaring all propaganda to be lies, but he wasn't technically correct. Propaganda is the production and promulgation of ideological or political rhetoric. We might distinguish propaganda from product marketing and labeled advertising, while including search engine optimization, like the work we do for cell phone and demographic append lead vendor Accurate Append and have recommended for the Medicare-for-All movement. Rhetoric is just what we call the methodology of persuasion, so it doesn't intrinsically imply dishonesty, and certainly not intentional dishonesty. But hold that thought, because propaganda can be part of disinformation.

Disinformation as a term of art in diplomacy and espionage means “false information, as about a country’s military strength or plans, disseminated by a government or intelligence agency in a hostile act of tactical political subversion.” This is pretty narrow. It's a tactic, though one that can be used against the general public rather than diplomats, military leaders, or public officials. But the analysis broadens a little: disinformation can also mean “deliberately misleading or [deliberately] biased information; manipulated narrative or facts; propaganda.” Although this circling back to propaganda makes things imprecise, I think the most accurate way to describe the relationship is that disinformation utilizes propaganda; the two are not the same. Not all manipulation is intentional disinformation. According to Democracy Reporting International, there were instances of "manipulation of public information in 12 countries in 2019 ahead of or during elections." But not all of that is "disinformation" in the strictest sense of the term.

Disinformation is also often distinguished from misinformation, which is "false information that is spread, regardless of intent to mislead." Disinfo can utilize both misinfo and propaganda. The ubiquity of disinformation lies in the fact that it always dwells within amoral political institutions, somewhere between propaganda and deliberate, bad-faith lies, becoming concretely the latter, the big lies, when the elites (of whichever faction) think it's time to deploy them. It's always there, it's always ready. But it takes resources, and so it happens with great intention.

Purveyors of disinformation often hide it in other sites or platforms. "Honest" political propaganda and polemic, on the other hand, is found on openly political platforms, where people know they're going to be subject to a variety of (often passionate and combative) political opinions. Call it propaganda with a warning label versus disinformation placed to bombard the consumer without representing itself as a side in a debate or part of a larger conversation.

The Democracy Reporting International research found such strategies in Tunisia and Sri Lanka: "Facebook pages focusing on entertainment with murky affiliation and ownership, which consistently posted and sponsored political messages" and "celebrity-focused pages . . .  sharing misleading political content in the run-up to the 2019 presidential elections."

Propaganda has always been intrinsically linked to news production, and again, it is not an "exception" to or aberration of news. Alexander the Great had newsmakers accompany him on his epic campaign eastward, and these "embedded reporters" would send messengers home with reports of the conqueror's exploits and victories and even the metaphysical claim that he was the son of Zeus. We call that propaganda, even though it probably contained a lot of tall tales. But what Athenian general Themistocles did to the Persian king Xerxes in 480 BC, convincing him to wage a naval battle based on false information that the Greeks weren't ready to fight, was disinformation in the term-of-art sense. Propaganda spins. Disinformation creates "from whole cloth," or out of little or nothing.

We can learn something about the distinct power of disinformation by studying the role of the Soviet State Security Committee in the 1980s, and of Russian and Chinese agencies currently, in pandemic and epidemic disinformation. The Soviet State Security Committee (AKA the KGB), since-publicized internal documents reveal, launched a campaign to convince the world that the AIDS virus was "the result of secret experiments by the USA’s secret services and the Pentagon with new types of biological weapons that have spun out of control." The plot utilized "forged documents and inaccurate testimony from purported experts to suggest that HIV, the virus that causes AIDS, had originated not from infected animals in Africa but from biological warfare research carried out by U.S. military scientists at Fort Detrick in Maryland." This is remarkably specific disinformation, carefully planned and engineered. The project was immensely successful, because its goals were to at least muddy the informational waters and at most turn people completely against the U.S. on false pretenses. Similarly, arguments that COVID-19 "was invented in a lab or brought to China by U.S. soldiers," along with questioning whether various safety protocols actually work, or claiming the virus doesn't affect tobacco smokers, rely on deliberately constructed false claims about facts rather than the moral sentiments or general impressions more characteristic of ideological propaganda.

Casting doubt, or getting people to disengage, is a top-level disinformation program goal. The objective need not be a vote for your candidate or a yes vote in a referendum. It might be influencing people not to vote at all, which is one less vote for the opponent. According to Rafael Goldzweig, Cambridge Analytica successfully influenced the 2016 UK Brexit referendum and the 2016 U.S. elections, using misinformation designed either to influence the vote or to get people not to vote.

Understanding the difference between propaganda and disinformation is important as we enter the final months of the 2020 election cycle because many people will conflate the two, and thus be unable to understand the difference between the candidates who are simply good at spin and those actively engaged in the production and distribution of factually wrong, deliberately promulgated information. Evan Halper's recent L.A. Times piece points out that Democrats have become "adept at tracking the origin and spread of the disinformation," but "have yet to find an effective strategy for depriving it of oxygen," especially since so many social media platforms appear to be willing to let some threads of disinformation run their course rather than stopping them at the point of dissemination. Perhaps the distinction between spin and deliberately manufactured untruths can help people understand that, even though disinformation is always around, not all candidates or public officials openly embrace it.


The AI Debate and Both Sides' Worst-Case Scenarios (and How to Evaluate Them)

What's the best-case scenario for the application of artificial intelligence? What's the worst-case scenario for AI going wrong? There are, of course, speculative answers to these questions, and it's interesting to list them. But there is also a deeper conversation to be had about the nature of risk and the assumptions (and obscure spots) involved in scenario building. We're bringing you this post with support from data append and consumer contact vendor Accurate Append.

Begin with the best- and worst-case scenarios:

Among the promising developments of artificial intelligence: The slowing of disease spread. The elimination, or at least radical reduction, of car crashes. The ability to address a host of environmental crises, including climate change. And the ability to cure cancer and heart disease. On the cardiovascular front specifically, AI allows for "deep learning" so that programs can identify novel genotypes and phenotypes across a wide range of cardiovascular diseases.

Okay, so those are some promising applications. Why be worried? Well, there are two types of "AI bad" scenarios: the apocalyptic "it could be over in minutes" scenarios, and the slow, agonizing societal-turmoil scenarios. I'll explain the apocalyptic scenarios first. There is the possibility that the more autonomous the systems, the greater the risk of their being deployed, either purposely or by accident, against innocent life. The psychological distancing of a machine, even a smart one, decreases empathy and increases the acceptability of attacks. There is also the possibility that lethal AI warfighting systems could be captured, compromised, or subject to malfunction. Alexey Turchin, a researcher with the Science for Life Extension Foundation, and David Denkenberger, a researcher with the Global Catastrophic Risk Institute, developed a system for cataloguing these "global catastrophic risks" and published it in the journal AI & Society in 2018. In the section on viruses, they write: "A narrow AI virus may be intentionally created as a weapon capable of producing extreme damage to enemy infrastructure. However, later it could be used against the full globe, perhaps by accident. A 'multi-pandemic,' in which many AI viruses appear almost simultaneously, is also a possibility, and one that has been discussed in an article about biological multi-pandemics." The more advanced the entire network of AI tech (in other words, "the further into the future such an attack occurs"), the worse it will be, including risking human extinction. To put some icing on that cake, the authors point out that multiple viruses, a kind of "AI pandemic," could occur, "affecting billions of sophisticated robots with a large degree of autonomy" and pretty much sealing our fate.

Turchin and Denkenberger even delve into the scenarios wherein such a virus could get past firewalls. Instead of the clumsy and obvious phishing emails we get now, imagine getting an email from someone you nominally know or have exchanged emails with before; someone you trust. But it isn't really them—it's a really, really good simulation, the kind created by machines that learn. The speed of that learning is several million times faster than our own. An AI virus could simulate so many aspects of human communication that people would either have to completely stop trusting one another, or eventually someone would let the bugs in.

Before we go onto the higher probability and lower magnitude negative impacts of AI, though, I think we should say a few things about risk. First, actual risk is much harder to predict than it seems. We can catalogue worst-case scenarios, but this says nothing about their probability, and probability may be infinitely regressive, frankly, because, as the principle of "Laplace's Demon" holds, we'd have to step outside of the universe to accurately assess probabilities.

But what if Laplace's Demon applies not only to what technology can and cannot predict, but to the development of technology itself? This may mean that the elimination of some risks inadvertently gives rise to others. But just as flipping heads three times in a row doesn't bear on whether the next coin flip will yield heads or tails, so the elimination of certain risks doesn't make it any more or less likely, in the scheme of things, that new risks will be created. They just happen.

The problem with the more apocalyptic worst-case scenarios is not that there is no possible world where they could happen, but that in a world where they could happen, any number of other apocalyptic scenarios could also happen. This is because the worst-case scenarios assume a complete lack of regulations, fail-safe measures, or other checks and balances. And while we have reason to fear that the industry will not adequately police itself or allow policing from other entities, it's a bit of a slippery slope from there to imagining no checks whatsoever.

One piece on AI policy from George Mason University discusses the proposal of Gary E. Marchant and Wendell Wallach to form "governance coordinating committees (GCCs) to work together with all the interested stakeholders to monitor technological development and to develop solutions to perceived problems." This is perhaps a nuanced version of industry self-regulation, but it really proposes to work both within existing institutions and for entities to monitor one another, a sort of commons-based approach where producers keep each other honest. "If done properly," the paper concludes, "GCCs, or something like them, could provide appropriate counsel and recommendations without the often-onerous costs of traditional regulatory structures." Combined with public education about the benefits and risks of AI, perhaps cultural practices will grow to preempt concern about worst-case scenarios. But regulators can always step in where needed.

Besides, once the possibility and knowledge sets exist for a particular level of technology, it's virtually impossible to ban it, or even to enforce a ban on a particular direction or application for its research. This is why Spyros Makridakis, Rector of Neapolis University, writes in a 2017 paper on AI development that "progress cannot be halted which means that the only rational alternative is to identify the risks involved and devise effective actions to avoid their negative consequences."

As we said earlier, though, there's a more realistic apocalypse we need to face with AI: the loss of massive numbers of jobs (assuming we live in a world approaching full employment ever again post-pandemic and actually have jobs to lose). AI shifts cause massive structural patterns of transitional unemployment, markets will not correct this in a timely manner, and the number of suffering people could be overwhelming.

But this ultimately seems like a political question rather than an economic one: Even without the economy transitioning into the accurate definition of socialism, which is democratic control of the means of production, a shift to a universal basic income would preserve some of the basic economic structures and assumptions of capitalism, allow a greater flexibility about defining employment in the first place, and facilitate either transitions into new work or settlement into less work. There's nothing wrong with both dreaming about risks and preparing for inevitable challenges. If AI is a genie we can't put back, we may as well negotiate with it.


'Everything We Do is About Solid Execution and Measurable Results'

Phil Mandelbaum recently interviewed me about leftist organizing and technology activism for herald.news. I got to talk a bit about what makes my digital agency tick.

We specialize in technology projects for left campaigns and causes. Our original slogan was People, Insight, Technology because we like to put together smart teams that solve organizing challenges with infrastructure that scales effort.

I also talked with Phil about my background in data tech, consulting for city, state, and federal campaigns, and working with 175 volunteers to collect more than 14,000 signatures from 46 of California’s 58 counties to get Gayle McLaughlin on the ballot in 2018. I talked about my roles with organizing tech platforms Outreach Circle and ActionSprout, Facebook advertising, and data append vendor Accurate Append.

In sum:

Everything we do at The Adriel Hampton Group is about solid execution and measurable results. Whether I’m building a volunteer team or managing a design project, I’m really looking at maximum impact for effort. I have no doubt that running agency projects has helped prepare me to go hard on actions.

Hope you'll give it a read!


Three Non-Obvious Ways the Covid-19 Pandemic Changes Campaigning

Remember, oh, a year ago, what we thought the 2020 election cycle would be like? There'd be unprecedented ground energy for the presidential candidates' campaigns. There'd be intense downballot races and efforts to flip the U.S. Senate and, following the Virginia results, efforts to flip state legislatures. In local races, we'd be knocking on lots of doors, and in national races, we'd be hosting large events.

Now that every state is under at least advisory orders and physical human contact itself is a hazard that will remain a huge risk for at least the next few months, there's no "ground game" in the conventional sense. We aren't knocking on doors. Sensible candidates won't host events for a while, and if either party tries to hold an on-site convention, it will be seen as an aberration at best and a deadly, foolish move at worst—even, I would guess, in late summer (although Tom Perez has said the Democrats want to do it!). We've already seen legitimate questions asked about some states' decisions to hold on-site voting primaries, and about what candidates in those primaries should say to voters about them.

A New York Times headline calls the current state of politics "remote mode" and points out that it has especially affected the battle for the U.S. House and, to an extent, Senate races. They contrast "remote" with "retail," as I've seen other stories do. "Retail" campaigning involves face-to-face interaction, while "remote" reaches people in their homes via technology. But the NYT's use of the terms feels clumsy. "Retail" sounds like commerce, and "remote" sounds like we all live further away from each other. I don't think the pandemic puts good candidates further away from their voters.

Instead, I think three interesting things could happen, and in bits and pieces are happening, as a result of having an election during an unprecedented global public health crisis.

1. Good candidates are finding interconnectivity in their communities. We’ve seen candidates in our districts do public health forums instead of stump speeches, be part of networks of public information sharers instead of slingers of mud. Local candidates, especially, must become crisis managers, counsellors, advisers, and organizers. It's no longer enough to have traditional expertise, or traditional credibility. There are a lot of stories about candidates not explicitly asking for votes, and replacing promotional material with public service announcements. Candidates don't want to appear (or be) "selfish." And so, at least alongside and sometimes instead of asking voters or constituents for support, they are "asking them about groceries, picking up prescriptions and responding with mutual aid resources,” in the words of one campaign manager.

It's too early and too chaotic to guess the electoral effects of this change in campaigns. Interestingly, even if candidates were not inclined to shift to communitarian altruism as a central campaign message, they are motivated to do so if other candidates go in that direction. The cost of not being able to "hear the room" while your opponent turns into a paladin is probably much higher than the votes you might lose by appearing cooperative instead of competitive.

2. Doubling down on tech—and a new kind of tech. It's predictable that candidates are learning how to use conferencing platforms; and of course, texting voters and keeping a robust messaging schedule was one reason Bernie Sanders did well in the earlier primaries. But we’re thinking about what Wisconsin political consultant Joe Zepecki says in a recent New Yorker piece: voters don't live at home, they live on their mobile devices. Zepecki reasons from this fact, which has been turned somewhat inside out by Covid-19 but still holds true, that digital organizing should continue at all possible entry points into a voter's phone, including "e-mail, texts, Twitter, Facebook, Words with Friends, etc."

The work we do to ensure that campaigns have the most accurate data from vendors like our client email and cell phone data provider Accurate Append becomes that much more important.

In fact, before the campaign season became pandemic season and everyone cancelled their events and went home, new kinds of technology were taking shape via the "deep canvassing" movement that now have the potential to connect change #1 above, the campaigner-as-community-advocate, and change #2, this deep turn into technology.

Deep canvassing is the phrase used to describe "developing a nonjudgmental, empathetic connection with a voter through 10 to 15 minutes of authentic conversation." Deep canvassing is even being touted as a way to "talk people out of bigotry."

Deep canvassing is a merging of technology and care. The technology component can even be something like checking a voter's registration status online during the conversation, if they want to know it. One canvasser talks of helping an older woman reach a state of "elation when I looked up her registration and showed her she was still registered." But other programs and platforms allow interactive information-sharing, reminders to do follow-up conversations, and more. Imagine the potential of this style of canvassing as people feel trapped and isolated at home. It's soberingly appropriate.

3. Candidates will be able to campaign on more systemic issues. Nobody wants to be an accelerationist about this, but desperate times do call for desperately creative, desperately radical measures. Suddenly, universal health care not tied to a job seems to make complete sense to almost everyone. The media and mainstream politicians have learned that precarity is unacceptable. Although some conservative candidates and elected officials are irresponsibly calling for the "re-opening" of public life, moderates and leftists are in favor of greater degrees of aid, debt forgiveness, and housing and health guarantees.

Mainstream sources are treating universal basic income as a legitimate policy option, and more progressive groups are outright demanding it. Spain went ahead and implemented it, which will increase perceptions of its policy legitimacy. Congressional Progressive Caucus Co-Chair Rep. Pramila Jayapal of Washington recently called mass unemployment "a policy choice," and pointed to European countries as having policies in place to either keep people working in safe conditions or keep paying people if they are let go. Jayapal's own proposal includes "payments of salaries of up to $100,000, plus guaranteed retention of health insurance."

Expect elections to continue to spur attempts at deeper communication, deeper technology, and deeper policymaking if we have more of them during pandemics. And, expect us to take many of these new developments back through the looking glass for use in whatever semblance of back-to-normal campaigning we do in the future.


Aesthetics (and Finances) Matter As Space Tourism Takes A Flying Leap

From steampunk and Paleofuture.com to Stanley Kubrick's interpretation of Arthur C. Clarke, images of space, the future, and esoteric technology have stimulated consumers of speculative fiction. But those images have also influenced actual scientists, tech developers, and planners. The aesthetics of fiction and the implementations of pragmatists are mutually dependent.

About a year ago, a cluster of articles appeared across various media touting the new aesthetics of space travel. The story was that the utilitarian, spartan designs of Cold War U.S. and Soviet space capsules were giving way to an awareness that space travel would also benefit from comfort and pleasant surroundings. The privatization of space travel promises to change the old aesthetic paradigm, or non-paradigm, into a realm where visual appeal synthesizes with technological function.

This may be a manifestation of critical mass for private space tech corporations. The field of space technology is becoming more crowded in general, and more commercial, and SpaceX has its hand in both trends. Last month SpaceX launched several Starlink satellites, the fifth time it had done so. There are now over 300 Starlinks in orbit (heavy satellite traffic and the crisis of space debris are subjects for another post). SpaceX eventually wants 42,000 Starlinks in orbit, a network of internet facilitators that SpaceX envisions filling in all the connectivity gaps in the world—which is a laudable goal in the abstract.

Now imagine, rising above all those orbiting machines, a hotel room. We've come a long way from the old paradigm, where NASA pushed back against the idea of space tourism on the then-under-construction ISS. The Russians were far more enthusiastic about the tourism, and the money-making, than the U.S. was back then.

The shift from bare functionality to imaginative aesthetics reminds me of the movie The Right Stuff (I never read the book): both the spartan, uncomfortable experience of being an astronaut in general, and the scene where the astronauts threaten to go on strike, demanding that there be a window in the craft. The field of space tourism, in particular, is one where aesthetics plays a strong role not just in running alongside functionality, but in some ways in determining how to think about what is functional. Space tourism companies are even recruiting well-known earthbound artists and designers to guide this progress. Mary Meisenzahl at Business Insider writes that "Space exploration company Axiom is launching a space tourism program to fly tourists to the International Space Station" and is designing hotel rooms for what will eventually be a space resort independent of the ISS. For this purpose, Axiom, working with NASA, "enlisted 71-year-old French designer Philippe Starck to design interiors for these visits, which are planned to start in 2024. Starck has a history in all aspects of unusual design, from hotels to yachts to an individual wind turbine."

The designs are striking: A giant window observatory where passengers can float and look "down" at the earth, or in multiple other directions. The "modules" or guest rooms appear asymmetrically octagonal (some sides bigger than others). They have plush, firm, pillow-like tiles to comically bump into. Starck talks like an Andy Warholesque artist, saying that the overall design approach comes from "a fetal universe." The multidimensionality of the design is an explicit rejection of an up-down world.

Currently, the plan is for the modules to serve a dual purpose: for the sake of everyday space business and international coordination, the facilities will house astronauts from countries that are not ISS members. But the more exciting part is the tourists, who will pay at least $35,000 to visit. And importantly, the visitors will have WiFi up there.

Re-enter SpaceX, which is providing prototype tourism packages with Axiom Space that start at prices much higher than $35,000. The cost of each of the three prototype tourist packages currently up for sale (one seat has already been purchased) is $55 million. For that price, the tourist travelers (who have to train extensively and pass a variety of physical endurance and health tests in case things get all Sandra Bullock up there) will get to "break the world altitude record for private citizen spaceflight."

Business Insider loves the aesthetics and the excitement of space tourism—they have been running stories on it over the past few months like it's their job. And they love the vision—they have run a couple of stories that are just annotated photos. The ideas are all exciting: inflatable rooms, a promise of space Quidditch matches and multiple sunsets every day. Designers speak of a kind of hyper-Disneyworld concept. The renderings of what space hotel exteriors will look like are mind-blowing, like the Von Braun Station, a rotating wheel with several chambers and four large "spokes" into the center. Arthur C. Clarke would be envious.

The ultimate aesthetic experience (because nothing beats natural beauty) will probably be found in a space tourism mission SpaceX announced two years ago: taking a passenger around the Moon. According to Sarah Marquart at Futurism, there are currently two alternatives for going to the Moon as a treat. SpaceX offered the opportunity to two billionaires at somewhere between $51 million and $81 million. Space Adventures charges $150 million per seat for a trip in a Russian craft followed by a ten-day stay at the ISS.

This brings up a lot of uncomfortable sociological questions, and political questions, of course. To circle back to the theme of this post, what are the aesthetics, who will paint the picture, of a burning, resource-extracted, toxic planet earth, with a series of beautiful spaceships launching upward, filled with billionaires? That will have to be the subject of another post.

AHG in partnership with Accurate Append, a U.S. phone and email data append provider.


The Challenge of AI Regulation—Top-Down or Bottom-Up?

CEOs and the wealthy intelligentsia of technology are calling for regulation in much the same way that Mark Zuckerberg says he'll welcome regulation: as a kind of banner or veneer of legitimacy, designed to decrease risk and increase public ethos. For them, the language of regulation is the language of predictability and a level playing field among the big players.

For them, it's all about risk management, as Natasha Lomas writes at Tech Crunch in reference to Google's Sundar Pichai, who published an op-ed in Financial Times calling for the regulation of artificial intelligence. Lomas recognizes, beneath the taglines of regulation in the public interest, a "suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale"—in other words, a public call for regulation that assumes a world where the tech companies do what they want within that regulatory framework, or even before that framework completely develops.

Lomas also points out that Pichai "downplays" AI's potential negative consequences as simply the cost of doing business, "the inevitable and necessary price of technological progress." There is no discussion of how we might, either as a society or as tech workers, build the ethics and values into our technological development, through democratic deliberation—including foundational deliberation about whether we should be scaling AI technology society-wide. The call for regulation is a kind of blanket placed over the entire discussion of who should make R&D, production, and distribution decisions in the first place, and what framework they/we should use to make those decisions. Let us be responsible for creating monsters, and then you can regulate the monsters.

Perhaps this is the logic that makes Elon Musk fear both apocalyptic AI scenarios and labor unions.

Tech executives get many things about regulation half-right, or kernels of wisdom are found hidden within their "tech inevitable, might as well regulate it" platitudes. Last September Carol Ann Browne, director of communications and external relations at Microsoft, co-wrote a piece in the Atlantic with Microsoft president Brad Smith entitled "Please Regulate Us." The crux of their argument was that since leaders of industry are unelected, regulatory frameworks are preferable because they come from democratically elected officials. "Democratic countries should not cede the future to leaders the public did not elect."

Hard to argue with that. Instead, I am curious about pushing it even a little further. There are many kinds of democracy, and there are many ways to make managerial and ownership positions more accountable. What if, at least in the case of big technology companies (maybe those with $100 million or more in assets?), the tech workers themselves were given a vote on things like ethics and social responsibility? Last November, Johana Bhuiyan wrote an article in the LA Times about employee walkouts at Google and similar employee-initiated protests at other companies, all in the name of allowing greater employee-based guidelines and decisionmaking. Topics include contracts with Immigration and Customs Enforcement (ICE), an agency under heightened public scrutiny and criticism, and the decision by Apple "to block an app used by pro-democracy protesters in Hong Kong to avoid police."

Imagine a similar framework emerging in AI development in general, where workers and management could participate in deliberative, open conversations, and workers were given a direct vote in controversial decisions. I enjoy working with small businesses of 20 or fewer employees, such as AHG’s client Accurate Append, a data processing vendor. Imagine, instead of a single mammoth company and its close competitors developing self-enriching policy frameworks, hundreds or thousands of semi-autonomous creators working openly with society.

AI might be an especially appropriate object for democratization and deliberation given the concerns raised by its development and use. We’re thinking of Sukhayl Niyazov's piece in Towards Data Science just a few months ago, describing the concerns raised by what AI might do to democracy itself. Using mountains of personal data to cook up AI tends to result in "information bubbles," which Niyazov calls "virtual worlds consisting of the familiar . . . one-way mirrors reflecting our own views." This artificially engineered echo chamber effect is the opposite of deliberation. So why not invite those who are developing the technology to be deliberative themselves? Yes, the process might be uncomfortable, but many algorithmic tech products are currently developed to reward that very avoidance of "pay[ing] attention to things that are either difficult to understand or unpleasant."

Concern and public outcry over information bubbles, along with other practices, led Facebook late last year to establish a review board for its advertising practices. But surely a company like Facebook could go deeper than that and fold workers directly into internal policymaking.

Back in 2016, the World Economic Forum published a list of the top nine ethical issues in artificial intelligence. The listed issues were:

  • increased unemployment
  • unequal distribution of machine-created wealth
  • humanity: how AI will change human behavior
  • AI making mistakes ("Artificial Stupidity")
  • racism—how AI will duplicate, magnify, and reflect human prejudice
  • security issues
  • unintended consequences ("Siri, eliminate cancer. Wait! Siri, don't eliminate cancer by killing all humans!")
  • the singularity: how we stay in control of a complex intelligent system
  • ethical treatment of AI itself—treating robots humanely

Internal discussion and at least some degree of worker-level decisionmaking implicate most of these questions directly or indirectly. While increased unemployment may be inevitable in a developing AI universe, workers can push the company to pitch basic income or other ways of justly distributing the labor-saving fruits of AI. Workers can push for better protocols to spot hidden bias. And employees can certainly deliberate on how the machines they create ought to be treated.

It makes sense to at least start thinking in this way, thinking outside the hierarchical box, into seemingly radical avenues of participatory deliberation, because AI itself has the potential to vastly expand the voices of stakeholders in a world where up until now, society has tended to prioritize the voices of shareholders. An Internet of Things and socially responsible big data analytics together have the potential to truly maximize human autonomy.


7 Problems with Predictive Policing

For those who either fear or welcome the world of Philip K. Dick's Minority Report, we're getting there, and it's time to take stock. Although we aren't talking about actual clairvoyance of crimes and criminals, or about preventative detention based on algorithms, the theory that crime happens not randomly but in "patterned ways," combined with the confidence that big data can predict all kinds of social behavior and phenomena, has taken hold in cities looking to spend their federal policing grants on shiny things. This is true even though crime is decreasing overall (and, as we see below, although violent crime periodically spikes back up, predictive policing is least effective against it).

And while there are legal limits on law enforcement’s direct use of some data appending products, we’re finding that agencies may use aggregators to get around even the most rigorous civil rights protections.

Not everyone is excited. Here are the most important reasons why:

  1. Policing algorithms reinforce systemic racism 

The simplest iteration of this argument is: most of the data to be folded into predictive policing comes from police, and a lot of it comes from community members. Racism undeniably exists across these populations, as "AI algorithms are only able to analyze the data we give them . . . if human police officers have a racial bias and unintentionally feed skewed reports to a predictive policing system, it may see a threat where there isn’t one." In fact, Anna Johnson, writing for VentureBeat about the failure of predictive policing in New Orleans, says that city's experience basically proved that biased input creates biased results.

  2. Predictive crime analytics produce huge numbers of false positives

Kaiser Fung, founder of Principal Analytics Prep, has a plainspoken and often bitingly funny blog where last month he devoted two posts to "the absurdity of predictive policing."

One thing Fung points out is that certain crimes are "statistically rare" (even if they seem to happen a lot). A predictive model has to generate many more red flags (targets to be investigated) than actual instances of the crime occurring in order to be "accurate."

"Let's say the model points the finger at 1 percent of the list," he writes. "That would mean 1,000 potential burglars. Since there should be only 770 burglars, the first thing we know is that at least 230 innocent people would be flagged by this model." That's a lot of suspects. How many of them will be pressured into confessing to something they didn't do, or at a minimum, have their lives painfully disrupted.

  3. Attributing crime prevention to predictive systems is meaningless: you can't identify things that didn't happen

This is a particularly devastating observation from Fung's posts about predictive policing. If you flag an area or individual as "at risk" and then police that area or individual, you may or may not have prevented anything. You can't prove that the prediction was accurate in the first place, and Fung finds it absurd that sales reps of these systems basically say "Look, it flagged 1,000 people, and subsequently, none of these people committed burglary! Amazing! Genius! Wow!" They can get away with claiming virtually 100% accuracy through this embarrassing rhetorical sleight of hand. Call it statistical or technological illiteracy. It's also deeply cynical on the part of those promoting the systems.

  4. Predictive analytics falls apart when trying to predict violent crimes or terrorism

One area where predictive policing seems to at least . . . predict the risk of crime is property crime. When it comes to literally anything more dreadful than burglary, though, the technology doesn't have much to say in its favor. Timme Bisgaard Munk of the University of Copenhagen's school of information science wrote a scathing review in 2017 entitled "100,000 false positives for every real terrorist: Why anti-terror algorithms don't work," and the title does justice to the article. In particular, Munk points out that predictive analytics of terrorist attack risks borrows from prediction work around credit card fraud. But terrorism is "categorically less predictable" than credit card fraud. In general, violent crime is the least predictable kind of crime.
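The base-rate problem behind Munk's title can be shown with a few lines of arithmetic. The numbers below are purely illustrative assumptions, not figures from Munk's article: even a screening system with an implausibly good 99% detection rate and a 0.1% false-positive rate drowns its true hits in false alarms when the thing being predicted is rare.

```python
# Illustrative base-rate arithmetic; all numbers are assumptions,
# not data from Munk's article.
population = 300_000_000       # people screened
actual_threats = 100           # hypothetical true positives in that population

sensitivity = 0.99             # chance the system flags a real threat
false_positive_rate = 0.001    # chance it flags an innocent person

true_alarms = actual_threats * sensitivity
false_alarms = (population - actual_threats) * false_positive_rate

print(f"True alarms:  {true_alarms:,.0f}")   # ~99
print(f"False alarms: {false_alarms:,.0f}")  # ~300,000
print(f"False alarms per real threat: {false_alarms / true_alarms:,.0f}")  # ~3,000
```

The rarer the event, the worse that ratio gets, which is part of why terrorism prediction fares so much worse than credit card fraud detection.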

  5. Predictive policing is mostly hype to make a frightened public trust the police

After reviewing many studies and analyses, Munk concluded that European agencies' choice of predictive policing programs is based more on pacifying the public, particularly a European public frightened of terrorism. "The purchase and application of these programs," Munk wrote in the 2017 article, "is based on a political acceptance of the ideology that algorithms can solve social problems by preventing a possible future." This is striking because there is no evidence, certainly no scientific evidence, that predictive counter-terrorism works. And in a more general sense, there's no consensus that any predictive policing technology works.

  6. There's no such thing as neutral tech

We read a powerful post by Rick Jones, an attorney at Neighborhood Defender Service of Harlem and president of the National Association of Criminal Defense Lawyers. The post is obviously written from the point of view of a public defender, and written to highlight public suspicion of policing technology. But a sound argument is a sound argument. Jones reminds us "that seemingly innocuous or objective technologies are not, and are instead subject to the same biases and disparities that exist throughout the rest of our justice system." Jones may be assuming a "garbage in/garbage out" metaphor that doesn't precisely describe what happens when algorithms and data sets synthesize new knowledge "greater than the sum of its parts." De-colonizing that data, "removing" bias from its inputs and practitioners, needs to be proactive at a minimum, and even then it may not be adequate.

  7. Guess what data these programs rely on? Data from previously over-policed neighborhoods

Attorney Jones specifically talks about a system called "PredPol" which uses data on location, time, and nature of crimes to mark "high-risk" areas for future crime. It calls those areas "hot spots," a stunning display of unoriginality. And speaking of unoriginal, PredPol literally uses the very data that policing, and specifically over-policing, has generated. It's basically incestuous data collection that demonstrates the very thing it needs to prove to justify more over-policing. It's a "feedback loop" that "enables police to advance discriminatory practices behind the presumed objectivity of technology."
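To make the feedback loop concrete, here is a toy simulation. It is a sketch of the critics' argument, not of PredPol's actual algorithm, and every number in it is an illustrative assumption: two neighborhoods with identical true crime, patrols allocated according to previously recorded crime, and only patrolled crime getting recorded.

```python
# Toy model of the "feedback loop" critique. Not PredPol's algorithm;
# all parameters are illustrative assumptions.
true_incidents = [100, 100]        # identical underlying crime each period
recorded_history = [60.0, 40.0]    # historical over-policing seeds a skew
discovery_rate = 0.10              # share of incidents recorded at an even patrol split

for period in range(5):
    total = sum(recorded_history)
    patrol_share = [r / total for r in recorded_history]   # "hot spot" allocation
    # Recording scales with patrol attention relative to an even split.
    newly_recorded = [true_incidents[i] * discovery_rate * (patrol_share[i] / 0.5)
                      for i in range(2)]
    recorded_history = [recorded_history[i] + newly_recorded[i] for i in range(2)]
    print(f"Period {period}: patrol share = {patrol_share}, "
          f"recorded so far = {recorded_history}")
```

Run it and the skew never corrects: neighborhood 0 keeps "leading" the recorded data even though the underlying crime is identical, and the lopsided record appears to validate the original allocation.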