Will Artificial Intelligence Destroy Humanity?
An old Chinese proverb says, “The best time to plant a tree was 20 years ago. The second-best time is now.” This seems to be the thinking of some very smart people when it comes to protecting humanity from the possible dangers of artificial intelligence (AI). Sure, it might be 20, 50, or even 100 years before AI becomes more intelligent than humans and poses an existential problem for today’s sapiens. Still, luminaries like Elon Musk, Bill Gates, and the late Stephen Hawking have warned that failing to prepare for that eventuality will guarantee our demise in the decades to come.
Perhaps we can start by noting that advances in artificial intelligence will not stop. That genie is out of the bottle. Billion-dollar AI companies are now being created in the proverbial “garage” throughout the world. For those of us who grew up watching The Jetsons, with their flying cars and Rosie the cleaning robot, AI is just a fulfillment of the promise science and technology made to us many years ago. Like any other tool, AI will be used for good and for bad, except that at some point these machine-based minds will become self-aware and decide for themselves what to do, good and bad. The question then is: for whose good?

One huge problem we humans have is our almost guaranteed inability to predict the future, even for the things we ourselves are creating. For example, no one predicted the smartphone with its hundreds of thousands of apps. Humans are simply terrible at predicting anything that is not linear, such as exponential growth, and AI progress is currently moving at exponential speed. To put this in perspective, my old high school teacher would tell the tale of the king who was so thankful to a chess player who saved his daughter that he promised him anything he asked. The player asked for one grain of rice on the first square of a chessboard, two grains on the next square, and double that on each square after: one, two, four, eight, and sixteen grains on the first five squares, respectively. How many grains of rice would the king have to give the man across all 64 squares? The answer is 18,446,744,073,709,551,615 (about 41,168,602,000 metric tons, a stockpile of rice bigger than Mt. Everest!). Yes, AI is moving at the speed of Moore’s Law, doubling in power and capacity every 18 months. Already, AI can do many things better than humans: recognize faces, fly planes, play chess, drive cars, identify breast cancer in X-rays, and more.
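If the doubling arithmetic sounds far-fetched, it is easy to verify. Here is a quick sketch in Python; the ten-year Moore’s Law figure is my own back-of-the-envelope extrapolation from the essay’s 18-month doubling, not a measured benchmark:

```python
# Doubling is deceptively fast. Two checks on the essay's numbers:

# 1) The chessboard: 1 grain on square one, doubling on each of the
#    64 squares. The total is 2**64 - 1.
total_grains = sum(2 ** square for square in range(64))
print(f"{total_grains:,}")  # 18,446,744,073,709,551,615

# 2) Moore's-Law-style growth: doubling every 18 months means
#    10 / 1.5 ~ 6.7 doublings per decade, roughly a 100x gain.
years = 10
print(f"{2 ** (years / 1.5):.0f}x")  # ~102x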
Yes, I admit that in American culture we tend to fear and vilify AI, while in other cultures, like Japan’s, the AI is the hero who saves humanity. Perhaps these cultural differences are the result of animism, “Frankensteinism,” and the Biblical injunction against creating life. Yet the fact remains that AI systems are progressing very fast because they are learning to learn, and doing so much faster than humans ever can. Human learning speed does not double every 18 months; it has stayed essentially flat for the past 150,000 years. Nowadays, even the programmers themselves do not really understand how the most advanced algorithms do what they do. Every day, AI gets better than humans at more kinds of tasks.
The danger with AI is not that one day it will wake up, conclude that we are no different from a global cockroach infestation, and spray us with Black Flag. The immediate danger is that AI is learning from historical data, as well as from watching how we do things. AI, therefore, is learning the racism, the biases, and all those other negative attributes that are so particular to the human condition.
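To make that concrete, here is a minimal sketch of the mechanism. The “model” below merely memorizes outcome frequencies from past decisions; the groups and numbers are invented for illustration:

```python
# A toy illustration of bias riding in on historical data: a "model"
# that just learns label frequencies from past hiring decisions.
from collections import Counter

# Invented historical records: (group, outcome)
historical_hires = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
]

def learned_hire_rate(group: str) -> float:
    # Count past outcomes for this group and return the hire frequency.
    outcomes = Counter(label for g, label in historical_hires if g == group)
    return outcomes["hired"] / sum(outcomes.values())

# The model faithfully reproduces whatever skew the past contains.
print(f"{learned_hire_rate('group_a'):.2f}")  # 0.67 -> group_a favored
print(f"{learned_hire_rate('group_b'):.2f}")  # 0.33 -> group_b penalized
```

Nothing in the code is malicious; the skew in its predictions comes entirely from the skew in its training data.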
Another major weakness we humans have is that we anthropomorphize anything that shows even the most basic illusion of a mind. For example, some people who send their Roomba vacuum robot in for service emphatically request that the same exact machine be returned, because they have grown emotionally attached to it. They see personality and patterns of behavior that simply do not exist. On top of that, we tend to be gullible. Most people’s ideas about AI come from TV and movies, where an AI just cannot tell a lie. But why wouldn’t an AI lie, if lying advanced its particular purpose and it knew how gullible we are?
Philosophers and authors have pondered ways of protecting humans from AI. Asimov, for example, made an attempt with his three laws: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey orders given by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Volumes can be written on why these rules are rather nonsensical. For example, the First Law is violated by design by a Reaper drone carrying a Hellfire missile, or by a machine-gun-armed MAARS (Modular Advanced Armed Robotic System), since their goal is literally to kill humans.
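The strict precedence of the Laws, and why an armed robot fails at the very first check, can be seen in a toy sketch like the one below (mine, not Asimov’s; the “through inaction” clause is deliberately omitted for brevity):

```python
# A toy sketch (not a real control system): Asimov's three laws as
# strictly ordered constraints, checked highest-priority first.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # would this action injure a human?
    ordered_by_human: bool  # was this action ordered by a human?
    risks_self: bool        # does this action endanger the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human.
    if action.harms_human:
        return False
    # Second Law: obey human orders (already filtered by the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the laws above.
    return not action.risks_self

# The essay's point: a Hellfire strike fails the very first check,
# so an armed drone cannot satisfy the First Law by design.
strike = Action("fire Hellfire missile", harms_human=True,
                ordered_by_human=True, risks_self=False)
print(permitted(strike))  # False
```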
In today’s age of populism and anti-intellectual movements, where “my Google search and opinion are just as good and valid as your PhD,” we really should take Elon Musk seriously when he says that this issue keeps him up at night. Recently he said: “I’m really quite close, very close to the cutting edge in AI. It scares the hell out of me. It’s capable of vastly more than almost anyone on Earth, and the rate of improvement is exponential.”
Several experts at Google’s DeepMind have echoed this concern, as has the Future of Humanity Institute, a multidisciplinary research group at the University of Oxford. They are combining mathematics, philosophy, and science to ensure that AI cannot learn to prevent, or seek to prevent, humans from taking control of it. Yet I see this as a hundred five-year-olds trying to keep an adult imprisoned: sooner or later, that adult is going to get out and gain control of the children.
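My own gloss on one strand of that research, often described as “safe interruptibility,” is sketched below. The design goal is that the human override always wins and, crucially, that interrupted steps never feed the learning process, so the agent never acquires an incentive to resist the off switch. This is an illustrative simplification, not DeepMind’s actual algorithm:

```python
# A toy sketch of safe interruptibility: the human override is
# unconditional, and interrupted steps are excluded from the learning
# buffer, so no gradient ever teaches the agent to avoid the button.
import random

learning_buffer = []  # experience the agent is allowed to learn from

def step(state: int, human_interrupt: bool) -> int:
    if human_interrupt:
        # Override: take the designated safe action, and do NOT record
        # this transition, so it cannot shape the learned policy.
        return 0
    action = random.choice([0, 1, 2])        # placeholder policy
    learning_buffer.append((state, action))  # normal steps are recorded
    return action

step(state=7, human_interrupt=True)   # safe action, nothing recorded
step(state=7, human_interrupt=False)  # normal action, recorded
print(len(learning_buffer))  # 1 -- only the uninterrupted step
```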
Perhaps we are at an inflection point in Earth’s evolutionary history, where we humans will soon become the Neanderthals and our extinction will simply accelerate the inevitable progress of these new AI life forms. Maybe the time will soon come to give up and let a superior AI life form take over the planet. We can only hope that those superior beings treat us better than we have historically treated less intelligent life forms.