Forging a Manifesto to Protect Humans from Aggressive AI
Science fiction presents an abundance of cautionary tales about the rise of artificial intelligence, or AI.
Whether it’s Skynet taking over our defense systems in the Terminator franchise, or Ava outwitting the genius behind her coding in Ex Machina, we’re essentially told this looming — and, frankly, inevitable — human achievement could result in our downfall.
Outside of film, notable tech innovators such as Elon Musk have devoted significant time and money to warning that AI poses a threat, and to finding ways to advance the technology without handing over all the keys to the machines. With AI's capacity to grow exponentially, and hypothetically to improve upon itself, at what point will an intelligence surpass its creators and become impossible to control?
Following Rackspace’s participation in Telecom Exchange, an annual conference that brings together leaders in telecom and tech, I was tapped to chair an action committee with a mandate to prepare a manifesto that delineates safeguards humans must implement to prevent AI from supplanting us, as modern-day fairytales predict. I enlisted Rackspace’s Chief Network Architect, Aaron Hackney, and other industry experts to join the cause.
During our first meeting, we drew inspiration from science fiction’s very core and revisited our favorite works, dissecting the implications of AI. Aaron suggested our manifesto double as a complement to Isaac Asimov’s Three Laws of Robotics. In short, the legendary science fiction writer’s three (eventually four) rules act as behavioral restrictions that must be coded into all artificial life to prevent it from destroying mankind.
But we also agreed putting restrictions on robots is not enough to ensure our survival. Humans must also arrive at laws that govern our behavior toward machines. In order for the singularity to work in our favor, it’s prudent we behave in ways that maximize our chances for survival.
Some questions we’ve been pondering:
- What network safeguards should we implement to prevent an AI from applying its superior computational ability beyond the data or situation we want it to synthesize?
- How should we treat machines? (We took note that humans are often depicted as more brutal than robots in science fiction.)
- What rights, if any, should an AI be afforded, especially if it passes the Turing Test?
- If such rights are put in place, should they be the same as those afforded to humans (noting here that not all human rights are created equal)?
- Also, is there an imperative for us to preserve the need for human labor? Or is a world where we’re lazing about while machines take on the workloads, as depicted in Pixar’s WALL-E, acceptable? Is that a life we humans want to live?