Artificial intelligence is the hotly debated topic of our time, so it seems worth revisiting what it actually is and what it means for us.
The academic debate over AI has been raging for decades, but in the 21st century the stakes, and public awareness, have never been higher. Today we live in a world where many people firmly believe that robots are going to replace all human jobs, ushering in an age of unchecked capitalism untempered by basic human rights. One need only point to examples such as Amazon’s purchase of Whole Foods and Google’s acquisition of Boston Dynamics to see just how real this fear has become. These fears are not baseless: no one would deny that AI has entered the realm of high technology, and as such it will have a real impact on our society. The question is whether that impact will be positive or negative.
I believe that the answer to this question lies in what we mean by artificial intelligence itself. Many people think of AI simply as a computer program. This view ignores the work done in research labs to develop machines with common sense, the ability to learn from experience, and autonomy from human control. To a scientist, these capabilities are the key to artificial intelligence. But to the rest of the world, these are simply fancy features that make machines look like human beings.
The real reason for concern is that computer scientists have created a technology that looks human but functions far differently than we do. Modern computers can process massive amounts of information in seconds. They can also learn to solve new problems from examples rather than explicit instructions, and apply what they have learned to inputs they have never seen before, a capacity researchers call “generalization.” But while this is all fine and dandy intellectually, it still leaves us with little control over what computers do with our information once they have it. We do not know how to limit their intelligence, or control where they apply it.
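To make “generalization” concrete, here is a toy sketch: a model is fitted to a handful of training points and then asked about a point it never saw. The data and the underlying rule (y = 2x + 1) are invented for illustration; real machine-learning systems do the same thing at vastly larger scale.

```python
def fit_line(points):
    """Ordinary least squares fit for a line y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical "training" data, all following y = 2x + 1.
train = [(0, 1), (1, 3), (2, 5), (3, 7)]
a, b = fit_line(train)

# Generalization: the fitted model predicts correctly at x = 10,
# a point that never appeared in the training data.
print(a * 10 + b)  # → 21.0
```

The point of the sketch is that no one told the program the rule; it recovered the rule from examples, which is exactly the behavior that makes such systems powerful and, in the article’s sense, hard to constrain.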
In other words, a computer may be intelligent by our standards, but it is not human. And if we don’t have any say over its behavior, then we are in danger of losing control over our own lives. As the number of robots working with us grows and their intelligence increases, we will no longer be able to exert any human influence over them. It is my belief that the only way to prevent this from happening is for scientists to develop a system of ethics for artificially intelligent machines.
So should we be concerned about AI? Definitely yes, but only because its advance is inevitable. We must keep in mind that AI is a tool, not an enemy, and that it is our responsibility to develop and train it so that it will operate for the good of all. However, we cannot do this alone. We need other scientists, philosophers, and technologists to join forces with us to create a new branch of ethics: one that can answer important questions about where and how artificial intelligence should be used.
We need a science of artificial intelligence.
Theories and Ethics
At its core, the debate among AI researchers can be boiled down to two opposing schools of thought: “theory” and “ethics.”
In the 1950s, a group of scientists, many of them connected to MIT, founded the AI theory school. Theorists believed that there was nothing fundamentally different between human brains and computer circuits in terms of information processing. They based this belief on the work of Alan Turing, one of the founders of computation theory. Turing showed that a single “universal machine” could, in principle, carry out any computation that can be expressed as a step-by-step procedure, as long as it had enough memory and time. In other words, given the necessary instructions and data, a computer could solve any computable problem it was fed, even if it had no intelligence to begin with. This conclusion spurred some of the greatest minds in technology into action, including John McCarthy, who later moved to Stanford, and Marvin Minsky at MIT. These men began work on computers built on the universal machine model that Turing had described: machines that lacked intelligence but could, they argued, handle information processing in the same manner as a human brain. These programs were given the name “thinking machines.” Theorists believed that with enough computational power and memory, we could soon have a machine more intelligent than ourselves.
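Turing’s machine model is simple enough to sketch in a few lines. The simulator below is a minimal, illustrative one (the rule table, state names, and the choice of binary increment as the task are all my own; “_” stands for the blank symbol, and the head starts on the leftmost bit).

```python
def run_tm(tape, rules, state="right", halt="done"):
    """Simulate a single-tape Turing machine.
    tape:  dict mapping position -> symbol
    rules: (state, symbol) -> (symbol_to_write, "L" or "R", next_state)"""
    pos = 0
    while state != halt:
        sym = tape.get(pos, "_")
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return tape

# Rule table for binary increment: scan right to the end of the
# number, then sweep left adding one with carry.
rules = {
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),    # 0 + carry = 1, stop
    ("carry", "_"): ("1", "L", "done"),    # ran off the left edge
}

tape = {i: b for i, b in enumerate("1011")}  # binary 11
out = run_tm(tape, rules)
print("".join(out.get(i, "_") for i in range(4)))  # → 1100 (binary 12)
```

Everything the machine “knows” lives in that little rule table; the machinery itself is dumb, which is exactly the theorists’ point: intelligence, on this view, is a matter of what program the universal machine runs.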
However, theorists failed to see that their programs lacked an essential ingredient: self-awareness, or consciousness. Human beings are conscious because they can reflect on themselves; they know who they are as independent entities within a larger universe. Without this self-awareness, a machine has no reasons of its own to think about anything.
Interested in artificial intelligence? Why not check out the classic book Do Androids Dream of Electric Sheep? by Philip K. Dick.
What are your thoughts on what the future holds for AI and its place in the world? Let us know in the comments below.