Thursday, April 14, 2016
Artificial Stupidity: Why Superintelligence is the Least of Our AI Worries
The end of the world has long been the domain of priests and poets, but if modern media has taught us anything, it's that doomsday could be just around the corner. Whether you fear rogue meteors, climate change or beasts from the center of the earth, it's no small miracle that we've made it this far. If tool making is what separates us from the animals, making machines capable of deflecting comets, flying to Mars and perhaps even battling toe to toe with Kaiju is what will separate us from a species that goes extinct in the blink of the cosmic eye. Then again, what if our trusty tools are the root of our demise?
Artificial intelligence has been among the most common threats to earth's existence on the silver screen since Arnold Schwarzenegger first appeared as living flesh over a metal endoskeleton. Arguably the two most influential sci-fi films of the past 30 years—Terminator 2: Judgment Day and The Matrix—both feature man's struggle for survival against intelligent machines. While many movie doomsday scenarios are either highly improbable or downright silly, the possibility of a robot uprising is looking more and more like science than fiction. When great minds like Stephen Hawking, Bill Gates and Elon Musk are worried, you should be too.
The AI Revolution
Machine learning—training machines to recognize patterns in data—is the hottest field in the tech industry. Computational learning systems are everywhere. Google uses them to determine what to show when you search the web. They let Facebook find faces in user photos. Soon, they'll drive our cars for us. Just in the past few years, AI researchers have achieved human-level proficiency at a wide range of tasks from classifying images to playing the ancient Chinese game of Go, a task many believed to be beyond a computer's capability. We are truly at the beginning of an AI revolution.
Recent successes in AI revolve around artificial neural networks, learning systems inspired by the human brain that perform computations in a hierarchical fashion, allowing them to take data and break it down into abstract concepts. A neural net trained on a bunch of images might learn to detect things like faces, fur and text to help it differentiate between them. In general, the deeper and more complex the network, the more powerful it becomes. The results at the cutting edge today are only possible due to the rapid growth of computer storage and processing power: the networks achieving top performance these days would have taken centuries to train on the computers of 20 years ago. As computing power grows and tech giants and world governments pour billions of dollars into AI research, new advances are only going to become more frequent.
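To make the layered idea concrete, here is a minimal sketch (assuming Python with scikit-learn installed, and using its tiny bundled digits dataset rather than anything cutting edge) of a network with a couple of stacked hidden layers, each one re-representing the output of the layer below it:

```python
# A tiny multi-layer network on scikit-learn's bundled 8x8 digit images: each
# hidden layer re-represents the output of the layer below it, which is the
# hierarchical computation described above. Real state-of-the-art networks
# are vastly larger and trained on far more data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32),  # two stacked hidden layers
                    max_iter=1000, random_state=0)
net.fit(X_train, y_train)                         # "training" = fitting the weights
print("weight matrices (input -> hidden -> output):", [c.shape for c in net.coefs_])
print("accuracy on unseen digits:", net.score(X_test, y_test))
```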
Superintelligence
An AI system based on the human brain. Check. Rapidly increasing computing power. Check. An AI arms race fueled by corporations and governments. Check. Next stop: Skynet. Joking aside, the conditions in the tech industry today are eerily reminiscent of those that pave the way for malevolent AI to take over the world in the movies. The emergence of superintelligence—AI with intelligence far beyond our own—would surely change our world forever.
Superintelligence revolves around the concept of a technological "singularity", a situation where an AI is able to improve itself to become smarter. As the AI gets smarter, it finds new ways to improve itself, resulting in ever-increasing intelligence and the development of self-awareness, all occurring so rapidly that humans would be unable to stop it. We probably wouldn't even realize it was happening. When the AI develops consciousness, it also develops a will to survive. It's that will to survive that drives it to destroy us when it realizes we might try to shut it down. A nascent super AI obsessed with its own survival would be smart enough to hide itself from us until it's too late.
The fascination with superintelligence as an AI threat smacks of logic akin to one of the oldest philosophical arguments for the existence of god: Pascal's wager. According to Pascal, we should believe god exists even if the chances of his existence are remote, because we face an infinitely positive payoff (heaven) if we believe and an infinitely negative payoff (hell) if we don't. By this logic, even if the chances of creating superintelligence are slim, we should take steps to prevent it from happening. This reasoning is flawed because it makes baseless assumptions about the nature of god and superintelligence. Why would superintelligence necessarily try to kill us? We have no more reason to believe god is benevolent than we do to believe superintelligence would be malevolent. It could be argued that it's not worth taking the risk to find out, but creating superintelligence may be the key to our long-term survival. Superintelligent AI could cure every disease in a second. It could cure aging, end hunger and poverty and reveal the physical mysteries of the universe, allowing us to colonize the stars.
Regardless of whether you are pro-superintelligence or not, an AI singularity likely won't be technologically possible for many decades, if ever. Ironically, in many ways we understand artificial neural networks better than our own brains, and beyond a shared structure of interconnected nodes or "neurons" that receive and output signals, the two really aren't that similar. Could a super deep, super complex neural network running on the computers of the future give rise to superintelligence? Probably not. It will take a clever composition of algorithms and learning techniques, some of which we probably still need to discover. We're not going to stumble our way into making superintelligence by mistake.
Designing an AI system that achieves good results at any particular task is very hard, and improvements are almost always small and incremental. The concept of a self-improving AI evolving at an exponential, runaway pace flies in the face of all of our experience. When you first start learning something, you progress quickly, but when you are already very skilled, further progress is difficult. The same applies to current machine learning systems: performance improves quickly at first as you feed in data, but you need more and more data to make smaller and smaller improvements. Now imagine that instead of trying to improve at a well-defined task, you're trying to improve intelligence itself. That is no small feat. The more likely road to superintelligence is through a steadily self-improving system nurtured and enabled by humans. We'll see it coming. We'll be able to stop it. But we may not want to.
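As a rough, hedged illustration of those diminishing returns (a toy Python/scikit-learn sketch on a small bundled dataset, not a claim about any production system), watch how much each doubling of training data actually buys:

```python
# Train the same simple classifier on progressively larger slices of the
# digits data and report how much accuracy each doubling of data adds.
# The exact numbers are illustrative only; the pattern of shrinking gains
# is the point.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

prev = 0.0
for n in (50, 100, 200, 400, 800):          # keep doubling the training set
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    acc = model.score(X_test, y_test)
    print(f"{n:4d} examples -> accuracy {acc:.3f} (gain {acc - prev:+.3f})")
    prev = acc
```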
Rogue Agents
Superintelligence isn’t an imminent threat, but AI presents other risks that are far more probable and technologically feasible. AI doesn’t need to be super intelligent, self-aware or general to be dangerous. The AI agents we have now are “dumb”: they are designed for specific tasks and have no awareness of what they are or what they are doing. Still, even a dumb AI agent may produce surprising and undesirable results in an attempt to fulfill its purpose. A stock trading bot might make too many transactions, incurring costs that eliminate profits. An autonomous drone might decide to bomb a crowded square if it thinks targets are present and hasn’t been programmed to avoid collateral damage. It’s only a matter of time before a self-driving car causes an accident.
Individual AI agents going rogue is all but unavoidable. Even if an agent performs its function well 99.99% of the time, mistakes happen. At their core, our dumb AI systems boil down to fancy optimization algorithms: they determine how to maximize objectives given input data and constraints. Defining and quantifying good objectives and constraints is hard. Many of the initial failures of AI will be caused by human error and oversight, such as failing to account for dangerous contingencies, underestimating costs and improperly defining objectives. This will prompt designers to grant AI agents more autonomy and leeway to determine costs and discover the best way to achieve objectives. As we give AI agents more freedom and generality, results will improve, but unpredictable errors will continue and perhaps worsen. Failures will usually be isolated incidents that can be addressed with software upgrades, and while they will spark momentary controversy, they will ultimately be accepted as a necessary cost of convenience and progress.
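As a toy sketch of how a misspecified objective plays out (the "trading bot", fee and signal numbers here are all hypothetical, written in plain Python), compare a bot whose objective ignores transaction costs with one whose objective includes them:

```python
# A bot told to maximize gross profit happily trades on every tiny predicted
# price move, while one whose objective subtracts a per-trade fee only acts
# on signals big enough to cover the fee. Prices and forecasts are random
# noise; this illustrates objective misspecification, not a trading strategy.
import random

random.seed(0)
FEE = 0.05                                   # cost charged per trade

def run(should_trade):
    cash = 0.0
    for _ in range(1000):
        predicted_move = random.gauss(0, 0.03)        # bot's noisy forecast
        if should_trade(predicted_move):              # decide whether to act
            actual_move = predicted_move + random.gauss(0, 0.03)
            cash += actual_move - FEE                 # real P&L always pays the fee
    return cash

naive  = lambda m: m > 0      # objective ignored fees: trade on any positive signal
costly = lambda m: m > FEE    # objective includes fees: trade only on big signals
print("ignores transaction costs:", round(run(naive), 2))
print("accounts for costs:       ", round(run(costly), 2))
```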
Rogue Controllers
Individual AI agents behaving badly are a concern, but there's only so much damage a single self-driving car, drone or other self-contained program can do. The real danger of AI is allowing an AI system to act as a controller program that commands many agents or devices. One drone doing bad things is a problem. A fleet of drones doing bad things in a coordinated manner is a crisis. The remedy to this danger seems obvious: don't put an AI in control of a large collection of resources. The problem is that AI is much better at optimizing complex systems than we are. If a drone fleet controlled by humans were to fight another controlled by a sufficiently powerful AI, the human-commanded fleet would be wiped out. The arms race for superior performance and productivity will push AI controller systems forward.
As we grant AI broader control over our technological resources, our job will shift from managing those resources ourselves to managing the AI controllers. We'll need people monitoring controllers around the clock, along with clever fail-safes to minimize harm if and when they make mistakes. We will maintain the ability to override the controllers if necessary and shut them off if they go rogue. The problem is that as systems become increasingly complex, they will be harder to monitor and more costly to shut down. Eventually we might create meta-controllers, AI administrators that jointly optimize outcomes across many different technological spheres. At that point, it will be difficult to shut down one part of the system without affecting another. We will be reliant upon AI in many aspects of our lives, so it won't be easy to hit the kill switch. Even if we decide to push the button, a failing meta-controller could wreak major damage in the blink of an eye.
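A hedged sketch of what such a fail-safe might look like in miniature (every class, name and threshold here is hypothetical, not a description of any real system): a supervisor that sits between an AI controller and the devices it commands, enforcing a crude sanity check and a human kill switch:

```python
# Every command the wrapped controller issues passes through a supervisor
# that enforces a simple limit and honors a human-operated kill switch.
# Real oversight would need far more than this; the shape of the idea is
# what matters.
class SupervisedController:
    def __init__(self, controller, max_commands_per_tick=10):
        self.controller = controller              # the AI controller being wrapped
        self.max_commands = max_commands_per_tick
        self.killed = False                       # human-operated kill switch

    def kill(self):
        """Human override: stop forwarding anything the controller decides."""
        self.killed = True

    def step(self, observations):
        if self.killed:
            return []                             # fail safe: do nothing
        commands = self.controller.decide(observations)
        if len(commands) > self.max_commands:     # crude anomaly check
            self.kill()                           # lock out a misbehaving controller
            return []
        return commands

# Example with a trivial stand-in controller that suddenly floods commands.
class DummyController:
    def decide(self, observations):
        return ["move"] * (3 if observations["normal"] else 500)

sup = SupervisedController(DummyController())
print(sup.step({"normal": True}))    # forwarded: a few routine commands
print(sup.step({"normal": False}))   # blocked: too many commands, kill switch trips
print(sup.step({"normal": True}))    # still blocked until a human intervenes
```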
Nefarious AI
Artificial intelligence poses many risks even when made with the best intentions and utmost care. Nefarious AI—systems designed or coerced to do harm—is the biggest threat AI poses to humans. Damage inflicted by nefarious AI agents would be difficult to prevent and relatively simple to cause. The proliferation of AI will give hackers and terrorists more opportunities to do greater harm. Imagine a drone reprogrammed to attack humans indiscriminately or a self-driving car given the objective of getting into a high-speed collision. AI wouldn't even need to be designed to do harm to enable nefarious behavior. A self-driving vehicle packed with explosives is a mobile, remote bomb.
Small-scale nefarious AI agents will be a constant worry, but our biggest concern should be large systems created or reprogrammed to do evil. Imagine a virus that alters the behavior of thousands of self-driving vehicles all at once. Worse still, imagine a meta-controller that has its objectives redefined and its fail-safes removed. We could lose control of our AI without it becoming self-aware and wresting control from us. It would not have superintelligence, but that wouldn't stop it from being able to destroy us. Who said Skynet has to be smart?
Human Obsolescence
Thus far, we've discussed the risks AI presents if it goes wrong, but we're not out of the woods even if AI does everything right. The modern economy is based on labor specialization: mastering one thing is more valuable than being good at a bunch of different things. AI excels at specialization. It is much easier to make AI perform well on a very narrow, specific task than a general one. Consequently, robots and AI will replace many of the specialized jobs people have today. Essentially every low-skill blue-collar job and narrow, repetitive white-collar job is in danger of being lost to AI in the near future. High-skill tech and business jobs and certain service industries will remain, but human labor as a whole will gradually become less important to the economy and many low-skill workers will become obsolete. It's starting to happen already.
The replacement of human labor by AI could lead to a truly dystopian future. A growing underclass of displaced workers is a recipe for mass protest and violence. Corporations and governments will look to pacify malcontents. Drugs, virtual reality and AI optimized to meet individual needs and fantasies are just a few of the tools the elite could employ to seduce the masses into a voluntary version of The Matrix. The few remaining dissenters will lash out from time to time, mostly ineffectively, until our AI systems have advanced to the level of controllers and meta-controllers, the perfect conditions for an AI catastrophe.
A New Wager
The future of humanity depends upon how much power we choose to give to AI. Most doom-and-gloom prognostications can be avoided if we simply limit AI to small-scale agents and simple controllers. Reining in greed and corruption will be vital to keeping the AI arms race from getting out of control and giving us time to evolve the economy alongside the advancement of AI. The concern and attention shown by many of our great tech minds is a step in the right direction, but the obsession with singularity and superintelligence is a distraction from more pressing concerns. If we become too reliant on large-scale AI systems, it will be hard to turn back. Once we're on that slippery slope, we'll be faced with a new wager: that developing superintelligence is a necessary risk to save us from dumb AI and ourselves.