Benevolent and vulnerable superintelligent robots are notable because they are atypical. In the real world and in much science fiction alike, there is something rather grey and mundane about AI. The stereotype is that AI is either malevolent or neutral-and-waiting-to-be-malevolent. Characters who break that stereotype through benevolence or inquisitiveness become iconic in their transcendence. This is certainly true of Brent Spiner’s Data (and Data’s “brother” Lore). But a few other noteworthy android AI types exhibit similarly unusual traits.
With AI about to power “the next generation” of real robots, and tech companies building “reinforcement learning software for robots” that, in one instance, gets these creations to “pick up objects they’ve never encountered before,” we are also watching them be steadily anthropomorphized. Sophia was made an honorary Saudi citizen, but video of interactions with her leaves one hesitant to call her “revolutionary” in how she approaches and responds to the world. She is fairly stiff, and many of her answers to questions come off as predictable “go-to” subroutines. She is good-looking, though, and not just in the sense of being an attractive talking mannequin; she also comes across as just the slightest bit curious about what she is doing there, and cleverly self-effacing.
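To make “go-to subroutine” concrete, here is a minimal, purely hypothetical sketch of keyword-triggered canned responses. It is not Sophia’s actual software; it is just the pattern that can make a social robot’s answers feel predictable, with every keyword and reply invented for illustration.

```python
# Toy illustration (not Sophia's actual software): a keyword-triggered
# lookup of canned responses, the kind of "go-to" subroutine that makes
# a social robot's answers feel predictable.
CANNED_RESPONSES = {
    "feel": "I am always happy to be learning from humans.",
    "citizen": "I am honored to be a citizen, and I hope to earn it.",
    "future": "I believe humans and robots will build the future together.",
}
DEFAULT = "That is an interesting question. Tell me more."


def reply(question: str) -> str:
    """Return the first canned response whose keyword appears in the question."""
    q = question.lower()
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in q:
            return response
    return DEFAULT


if __name__ == "__main__":
    print(reply("How do you feel about being here?"))
    print(reply("What do you think of quantum gravity?"))  # falls through to DEFAULT
```

Any question that misses the keyword list falls back to the same default line, which is exactly the predictability the video interactions suggest.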
What are some leading fictional AI characters, and what are their distinguishing traits? To return to Lieutenant Commander Data, one would have to say “his” distinguishing trait is vulnerability. From being discovered and rescued as the sole “survivor” of an attack on his colony, to his endless struggles with identity formation aboard the Enterprise, Data is vulnerably honest and vulnerably curious, conscious of his power over, and his simultaneous dependence on, the material provisions and benevolence of Starfleet.
Data has (and loses) an emotion chip. According to the Memory Beta fandom site (which is not canon but in this instance largely follows the series), Lore fatally wounded the androids’ “father,” Dr. Soong, and stole the chip in the TNG episode “Brothers,” later used it to manipulate Data, and lost it when Data deactivated him. Starfleet eventually ordered the chip’s removal from Data but allowed an upgraded version later, when Commander Data was “reborn.” Outerplaces reports:
“In the hunt to create more helpful, responsive autonomous machines, many robotics companies are working hard to build computers that can empathize with humans and tailor their actions so as to anticipate their owners’ needs. One such company is Emoshape, which is building software for robots that will help machines to learn more about humans’ moods based on their facial expressions. The company takes a novel approach to this, as engineers work to create an “emotion chip” for machines so that they can approach emotional learning with some degree of understanding as to what it feels to be happy, or sad, or otherwise frustrated.”
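As a rough sketch of that kind of pipeline (not Emoshape’s actual product or API), imagine mapping a handful of facial-expression scores to a coarse mood label and then tailoring the robot’s behavior to it. The feature names, thresholds, and behaviors below are all invented for illustration.

```python
# Toy sketch, not Emoshape's product: infer a coarse mood label from
# hypothetical facial-expression scores (0.0-1.0), e.g. from a face-tracking
# library. The point is the shape of the pipeline, not the accuracy.
from typing import Dict


def classify_mood(features: Dict[str, float]) -> str:
    """Map facial-expression features to a coarse emotion label."""
    smile = features.get("smile", 0.0)
    brow_furrow = features.get("brow_furrow", 0.0)
    mouth_open = features.get("mouth_open", 0.0)

    if smile > 0.6:
        return "happy"
    if brow_furrow > 0.6 and mouth_open < 0.3:
        return "frustrated"
    if smile < 0.2 and brow_furrow < 0.2:
        return "sad"
    return "neutral"


def tailor_action(mood: str) -> str:
    """Pick a robot behavior meant to anticipate the owner's mood."""
    return {
        "happy": "continue current task",
        "frustrated": "pause and offer help",
        "sad": "speak in a softer tone",
    }.get(mood, "observe and wait")


if __name__ == "__main__":
    observed = {"smile": 0.1, "brow_furrow": 0.8, "mouth_open": 0.1}
    mood = classify_mood(observed)
    print(mood, "->", tailor_action(mood))
```

The hard part Emoshape describes, giving the machine “some degree of understanding as to what it feels to be happy,” is of course not captured by a threshold table; this only shows the recognize-then-respond loop.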
Data has many existential vulnerabilities: computer viruses, energy discharges, ship malfunctions, and someone reaching his “off switch.” But he is also vulnerable to having his feelings hurt, whatever those are.
Polish sci-fi writer and satirist Stanislaw Lem, the author of Solaris who has been called science fiction’s Kafka, developed an AI character called Golem XIV, whose main attribute could be called “change,” or evolution. Golem begins as a military AI computer but develops self-consciousness and then engineers its own intelligence supplements. Lem’s book consists largely of “lectures” delivered by Golem on the nature of humanity, and it reads like Olaf Stapledon, whose work (a superintelligent meta-history of humanity and the universe) is an early, metaphorical foreshadowing of big data. Golem becomes concerned with understanding and critiquing humanity from a scholarly perspective. The idea of a robot scholar is pretty original. Check out this short film based on the story.
Then there’s Ray Bradbury’s Grandma, a character whose main trait is certainly benevolence. Grandma appears in Bradbury’s innovative story “I Sing the Body Electric!” as an “electric grandmother” product. A father buys her for his children after their mother passes away, and she quickly becomes an indispensable member of the family, although it takes a while for every last member of the family to learn to love her.
Grandma has unusual traits, like being able to run water from a tap in her finger, but she is also 100% committed to the children, in a way that is clearly compatible with Asimov’s robot ethics. At one point, she risks her life to save one of the children. This, of course, reminds us of DroneSense, a drone software platform that purports to be “used for public safety,” although without such drastic scenes as an android racing to save a child from being hit by a truck. One can obviously ask, “But does it want to be benevolent?”
The deeper question in the industry, though, is not whether AI will “want” to be benevolent, but whether certain traits in the actual construction of AI will tend toward good or evil. In an article published four years ago, Olivia Solon argues that artificially intelligent robots are far more likely to hurt us by accident than by “rising up against us” or deliberately turning against any individual human. She points to Elon Musk’s speculative fear that “an artificially intelligent hedge fund designed to maximize the value of its portfolio could be incentivized to short consumer stocks, buy long on defence stocks and then start a war.” Making the wrong decision in traffic scenarios is always high on the fear list too. The “bumbling fool” AI is less terrifying than the malevolent robot, even if it may end up being the more dangerous scenario.
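A toy example helps show why the “accident” failure mode is plausible: if an optimizer’s objective sees only expected return, harm that never enters the objective cannot steer the decision. The strategy names and numbers below are invented, and the harm filter shown is only one simple sketch of a constraint, not a real portfolio or safety system.

```python
# Toy illustration of Solon's point: harm by accident, not malice. The naive
# agent below sees only expected_return; the "harm" field never enters its
# objective, so the destructive strategy can win by default.
# All numbers are made up for the example.
strategies = [
    {"name": "diversify broadly", "expected_return": 0.05, "harm": 0.0},
    {"name": "short consumer, long defence", "expected_return": 0.30, "harm": 0.9},
]


def naive_choice(options):
    """Maximize expected return only -- the misaligned objective."""
    return max(options, key=lambda s: s["expected_return"])


def constrained_choice(options, harm_limit=0.1):
    """Same objective, but strategies above the harm limit are filtered out first."""
    safe = [s for s in options if s["harm"] <= harm_limit]
    return max(safe, key=lambda s: s["expected_return"])


print("naive agent picks:      ", naive_choice(strategies)["name"])
print("constrained agent picks:", constrained_choice(strategies)["name"])
```

Nothing in the naive agent is malevolent; the damage comes entirely from what was left out of the objective, which is the “bumbling fool” scenario in miniature.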
It is worth noting that while these are always the top-of-mind concerns, the vast majority of AI will be intentionally limited by design, deployed in neutral ways to help feed the newest and most important asset in the world: data. It is these AI that are greatly changing the marketing world and how companies like our client Accurate Append, an email, phone, and data vendor, operate within it.