You’re ordering tickets to a play or a big sports event online. You’re almost done when that annoying CAPTCHA screen comes up and makes you type some blurry letters and numbers into a box. This step, as most people know, is there to ensure that you’re a person buying tickets and not a computer programme deployed to snap up a bunch of seats illicitly.
But why can’t a computer that can perform calculations astronomically faster than humans identify the letter B just because it’s in a fancy font with a strikethrough, or the number 5 in a fuzzy photo of a front door? Why is it so easily baffled by something the average second-grader can handle?
The answer lies in understanding the current state of artificial intelligence (AI) — what it’s capable of, what is still beyond its grasp, and how we may be rocketing toward an increasingly intelligent technology without enough thought about the implications for ourselves and our planet. This is according to Tim Urban, author of the popular, quirky, stick-figure-illustrated blog Wait But Why, which counts Tesla CEO Elon Musk and Facebook CEO Mark Zuckerberg among its fans. He spoke recently at the McNulty Leadership programme’s Authors@Wharton speaker series.
George Washington and the “Die Progress Unit”
Imagine you brought George Washington here in a time machine from the year 1750, said Urban. He reminded the audience that in Washington’s world, “the power was always out.” If you needed to get around, it would have to be by walking, running, riding a horse or traveling by ship. For communication, you could talk, yell, send a letter, or fire a cannon.
He described what it would be like for Washington to witness the technology of our time: cars, airplanes, the international space station. You could tell him about the Large Hadron Collider and the theory of relativity, said Urban, and play him music that was recorded 50 years ago. “And this is all before he’s seen the internet,” said Urban, “the magical wizard rectangle in my pocket that can do a trillion crazy levels of sorcery, like pull open a map that can show where we are on it with a paranormal blue dot.” Or let you hold a conversation with someone in Japan, on the other side of the world.
“I don’t think George would just be surprised, or shocked. I think he would die,” said Urban. How far do you have to go into the future in order to die from the level of progress achieved? “I call that the ‘Die Progress Unit.’” Tongue-in-cheek but not entirely, Urban used the Die Progress Unit, or DPU, to illustrate how fast technology has been advancing. It has gone from linear progress to exponential progress, he said.
For example, if George Washington wanted to perform a similar time-machine experiment, fetching Leonardo da Vinci from around 1500 and bringing him to 1750, “I have a hard time thinking da Vinci would die,” said Urban. In order to create an extreme level of shock, Washington would have to go back before the Agricultural Revolution and find a hunter-gatherer to bring to 1750. “This is someone who had never seen a large group of people together in one place before. Suddenly there’s huge cities and towering churches and ocean-crossing ships. I think that guy might die.”
Moreover, Urban asserted that the exponential progress of the past 200 years makes our time unusual compared to the rest of human history. “The DPUs are getting dramatically shorter. It suggests we’re living in a not-very-normal time.” Plus, the invention of the internet and all the technological developments around it represent a tiny sliver of time, about 25 years. What does this mean for the future?
Urban said that people instinctively resist the idea that the world is changing exponentially. “You’re probably saying ‘Naah’ … Humans are cognitively biased to think, ‘nothing that crazy is happening, because why would it be happening now?’” But he said that according to experts, it is happening.
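Urban’s shrinking-DPU claim has a simple arithmetic core: if capability compounds exponentially while the amount of absolute change needed to shock a time-traveller stays fixed, the interval between shocks keeps contracting. A minimal sketch in Python, with invented numbers (the 20-year doubling time and the “DPU size” are illustrative assumptions, not anything Urban quantified):

```python
import math

DOUBLING_TIME = 20.0  # assumed: overall capability doubles every 20 years
DPU_SIZE = 100.0      # assumed: fixed absolute jump in capability that "kills" a visitor

def years_to_next_dpu(current_level: float) -> float:
    """Years for exponentially compounding progress to add one fixed DPU.

    Solves level * 2**(t / DOUBLING_TIME) == level + DPU_SIZE for t.
    """
    return DOUBLING_TIME * math.log2(1 + DPU_SIZE / current_level)

# The same absolute jump arrives faster and faster as the base level grows.
for level in (10, 100, 1_000, 10_000):
    print(f"level {level:>6}: next DPU in {years_to_next_dpu(level):5.1f} years")
```

Run as-is, the interval falls from roughly 69 years at the lowest base level to well under a year at the highest — the “DPUs are getting dramatically shorter” pattern in miniature.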
Limitations of today’s AI
The main driver of exponential change in the world is AI, according to Urban. He explained that AI is not necessarily a robot as some people might think, but any software designed to make intelligent decisions or accurate predictions about certain problems. Most of the apps on a smartphone, such as Siri, contain artificial intelligence.
And AI’s use is increasing. A recent Computerworld article said that installation rates for AI in cars, including infotainment and advanced driver assistance systems, are predicted to go from 8% in 2015 to 109% in 2025 — a figure above 100% presumably because many cars are expected to carry more than one such system. And The New York Times reported recently that “the United States has put artificial intelligence at the centre of its defense strategy, with weapons that can identify targets and make decisions.” Observed Urban: “We live in a total world of AI right now.”
However, AI’s limitations are obvious in its inability to perform some simple tasks even kids can do. It falls down on the job when asked to do things like distinguish a small pointy-eared dog from a cat, realise that a two-dimensional drawing is supposed to represent a three-dimensional object, or recognise human faces. According to Urban, distinguished computer scientist Donald Knuth summed it up well: “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’”
Experts classify today’s AI as Artificial Narrow Intelligence (ANI) because it is programmed to solve particular types of problems but doesn’t have the breadth of human intelligence. For example, said Urban, Pandora can recommend music you might like based on your preferences, but “if you ask it for dating advice, it will look at you blankly.”
What if computers do acquire a breadth of intelligence beyond being programmed to specialise in one task? A subset of AI known as machine learning is headed that way, according to Urban. AI may progress from ANI to AGI, or Artificial General Intelligence. “It’s not [just] picking out your music, helping you search on Google, or helping an airline set ticket prices. It’s … smart, generally.”
According to many experts, said Urban, if AGI is developed, then the third category of AI won’t be far behind: artificial superintelligence, or ASI. This is software that theoretically would be smarter than humans.
Urban asked the audience to consider the implications. Computers may end up above us on the evolutionary ladder, as we are above chimpanzees. There’s only a small amount of DNA that is different between humans and chimps, said Urban, yet we are their complete masters. Would ASI then become our master? “Not only will we not be able to do what ASI can do, we might not even be able to understand what it did,” said Urban. For example, he said, imagine trying to explain to a chimp how you built a skyscraper.
He added, “Think about how amazing ASI is going to be at computer science, at making itself better, at re-coding its own architecture, at understanding nanotechnology, at all the other things that can go into helping it improve itself.” Very quickly, you could have it jumping up an evolutionary step per hour, said Urban.
But he cautioned the audience against the tendency to anthropomorphise: to imagine as in many movies and books that “the robot becomes evil, and wants to take over.” That’s not how it works, he said. It’s still very scary, but for a different reason.
The future: paradise or paper clips?
Urban said many AI experts are concerned not that the machines will turn on us deliberately, but that we may simply get in the way of their plans or become irrelevant to them. Unintended consequences could occur if they are poorly programmed.
He described the funny but ultimately terrifying “paper clip scenario” that is bandied about in AI circles. “You have an AI at some really high-tech lab like Google X,” he said. “You say, I want this AI to get smarter and train itself to be smarter. The metric we’re going to use is it will take this raw material and turn it into paper clips.”
Then one day it starts to get smarter than humans, said Urban. It may not announce this to us, and now it only cares about one thing: making paper clips. “And if it really wants to create a lot of paper clips, it may need a lot of atoms, including the atoms in our bodies. In 300 years, the whole galaxy is paper clips.”
Some experts, on the other hand, see a potential upside to a superintelligent AI. It might be able to help solve global ills like war, disease, poverty, and climate change. Urban paraphrased AI thinker Eliezer Yudkowsky as saying that to an advanced form of intelligence, the solutions to these seemingly intractable problems could be obvious.
Many experts fall into the “anxious” category, said Urban, worrying that not enough attention is paid to the implications of what we are creating. AI development is largely in the hands of startup companies focused on “glory, changing the world, changing humanity,” he noted. “Most of the AI money is in entrepreneurship and development. It’s not in AI safety. That’s not a sexy thing to invest in.”
How close are we to ASI becoming a reality? Pretty close, according to many experts. Urban quoted statistics saying that the median expert prediction for AGI is 2040 and for ASI, 2060. That’s only about 45 years from now, within our children’s and grandchildren’s time.