Artificial Intelligence is a broad field of research and development that deals with simulating human intelligence in machines. With AI, computers can perform tasks that normally require human intellect. The history of AI has many milestones that have led to the current state of research and innovation. This article covers the birth of AI, the Turing Test, and the major milestones in the field's history.
What is the Turing Test?
The Turing Test is a test used to determine whether a computer can be considered intelligent. It was proposed in 1950 by the British mathematician and logician Alan Turing in his paper “Computing Machinery and Intelligence”. In the test, a human interrogator holds a text-only conversation with two hidden partners, one human and one computer. If the interrogator cannot reliably tell which partner is the human, the machine is said to have passed. For Turing, the test was not a measure of how intelligent a machine can be, but rather a way to measure how far we have come in our ability to create a machine that can be mistaken for a human.
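The protocol above can be sketched as a toy program. This is purely illustrative, assuming made-up respondents and a deliberately naive judge who guesses that the respondent with more varied answers is the human; it is not how real Turing-test contests are scored.

```python
def imitation_game(judge, respondent_a, respondent_b, questions):
    """Toy version of Turing's imitation game: the judge poses each
    question to both hidden respondents, then guesses which is human."""
    transcript = [(q, respondent_a(q), respondent_b(q)) for q in questions]
    return judge(transcript)

# A trivial "machine" with one canned reply, and a more varied "human".
def machine(question):
    return "That is an interesting question."

def human(question):
    return f"Hmm, about '{question}'? Let me think..."

# Naive judging heuristic: whoever gives more distinct answers is human.
def judge(transcript):
    a_answers = {a for _, a, _ in transcript}
    b_answers = {b for _, _, b in transcript}
    return "A" if len(a_answers) >= len(b_answers) else "B"

questions = ["What is love?", "Tell me a joke.", "What did you eat today?"]
print(imitation_game(judge, human, machine, questions))  # → A
```

Here the judge correctly picks "A" (the human), but a chatbot that varied its replies would fool this particular heuristic, which is exactly Turing's point: the test is about behavior, not inner workings.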
The birth of AI
There have been many attempts to create machines that can think like humans, but only a few milestones stand out in the history of AI. In 1950, Alan Turing published “Computing Machinery and Intelligence”, in which he proposed a test based on imitation, now known as the Turing Test. John McCarthy coined the term “artificial intelligence” in 1955, in the proposal for the 1956 Dartmouth workshop. Also in 1956, Allen Newell, Herbert Simon and Cliff Shaw completed the Logic Theorist, often considered the first AI program, which proved theorems from the Principia Mathematica. In 1959, Arthur Samuel's checkers program showed that a computer could learn to play a game through experience; machine chess mastery came much later, when IBM's Deep Blue defeated world champion Garry Kasparov in 1997. And around 1970, Terry Winograd's SHRDLU became one of the first programs to understand natural language, carrying out typed commands to move and stack blocks in a simulated “blocks world”.
The rise of Machine Learning and AI research
With the introduction of the digital computer in the 1950s, researchers began to study and design programs that would use the computer to help people solve problems; this was the beginning of AI research. The first generation of AI research focused on symbolic, rule-based programs that searched for solutions to well-defined problems such as theorem proving and game playing. The second generation focused on knowledge-based systems and expert systems, which encoded large bodies of hand-written rules about narrow domains; these worked in the laboratory but proved brittle and expensive to maintain, because every piece of knowledge had to be entered by hand and the approach could not scale. The third generation turned to systems that could learn from experience, but it was initially held back by the limited data and computing power of the era.
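The rule-based style of those early generations can be illustrated with a minimal forward-chaining inference engine. This is a sketch with invented example rules, not a reconstruction of any historical system: it just shows the if-then flavor of expert-system knowledge bases.

```python
# Each rule maps a set of premises to a conclusion, mimicking the
# hand-written if-then knowledge bases of early expert systems.
RULES = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts,
    until no new conclusion can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_feathers", "can_fly"}, RULES))
# derives "is_bird", then "can_migrate"
```

The brittleness the text describes is visible even here: the system knows nothing outside its rules, so covering a realistic domain means writing and maintaining thousands of them by hand.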
The Dark Age of AI
The Dark Age of AI, more commonly called an “AI winter”, refers to periods when interest and investment in AI shrank to almost nothing; the most severe one ran from the late 1980s into the mid-1990s. There were two reasons behind this. The first was the failure of the approaches of the 1980s: researchers were trying to build intelligent systems that could learn from experience, but the computers of the time were not powerful enough to make the approach scale. The second was the lack of funding for AI research. Government funding had been decreasing since the late 1980s, and with the end of the Cold War, AI became a lower priority; investment shrank to almost nothing in the early 1990s.
Advancements in Deep Learning
The rise of Deep Learning was a major advance in AI research. Deep Learning is a machine learning method that uses multi-layer neural networks to learn representations of data. Neural networks are systems loosely inspired by the way the human brain works. The first mathematical model of a neuron was proposed by McCulloch and Pitts in 1943, but early networks were held back by the limitations of computing power. Research revived in the 1980s: John Hopfield introduced his recurrent network model in 1982, and in 1986 Rumelhart, Hinton and Williams popularized the backpropagation algorithm for training multi-layer networks. A widely publicized modern milestone came in 2012, when a Google Brain network learned to recognize cats from unlabeled YouTube video frames, and the AlexNet network won the ImageNet image-recognition competition by a dramatic margin.
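The core idea, a multi-layer network trained by backpropagation to learn a representation, fits in a short sketch. The example below trains a tiny two-layer network on XOR, the classic function a single-layer network cannot learn; the architecture, learning rate and step count are illustrative choices, not a recipe from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, trained by plain gradient descent.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)       # forward pass: hidden representation
    out = sigmoid(h @ W2 + b2)     # forward pass: prediction
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1 - out)   # backprop through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)    # backprop through hidden sigmoid
    W2 -= lr * (h.T @ d_out) / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(f"loss fell from {losses[0]:.3f} to {losses[-1]:.3f}")
```

The hidden layer `h` is the learned representation: a re-encoding of the inputs under which the originally inseparable classes become separable. Deep learning stacks many such layers and relies on far more data and compute, which is why it only took off in the 2010s.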
ML Kit by Google: An introduction to the advancement in ML Research
ML Kit is a set of tools from Google that simplifies the implementation of machine learning in mobile apps. It offers ready-to-use on-device APIs for common tasks such as text recognition, face detection and image labeling, and it can also run custom TensorFlow Lite models. It is a step toward democratizing ML and making it accessible to millions of app developers: instead of designing, training and tuning algorithms from scratch, a developer calls a simple API.
A Timeline of AI Development
- The birth of AI - 1950
- The first generation of AI - 1950-1964
- The second generation of AI - 1964-1986
- The third generation of AI - 1986-2006
- The fourth generation of AI - 2006-Now
In the 1950s, a number of researchers in both mathematics and computer science began to explore what would come to be known as artificial intelligence (AI). In 1950, Alan Turing published his seminal paper "Computing Machinery and Intelligence", in which he proposed what is now called the Turing test as a criterion of intelligence. This paper contributed to an intense debate about whether machines could think.
In 1956, Herbert Simon and Allen Newell (with Cliff Shaw) presented the Logic Theorist, a program that found formal proofs for mathematical theorems and is often viewed as one of the earliest examples of AI research; they followed it in 1957 with the General Problem Solver. In 1956, John McCarthy organized the Dartmouth workshop, for whose proposal he had coined the term "artificial intelligence" the previous year. The field continued to grow with publications such as Marvin Minsky's "Steps Toward Artificial Intelligence" (1961). In 1966, Joseph Weizenbaum created ELIZA, a computer program that imitated natural language conversation in the style of a psychotherapist. It proved surprisingly popular with users, but Weizenbaum himself became a critic of the ease with which people attributed understanding to a program that had none.
The Turing test is often said to be an imitation of the way a human would determine whether another human is intelligent, or "thinking". The idea behind this is that if a computer program can fool a human interrogator into believing it is another person, it has demonstrated human-level conversational behavior. Whether that behavior should count as genuine intelligence has been debated ever since.
In their book "Artificial Intelligence: A Modern Approach", Stuart Russell and Peter Norvig argue that the Turing test is not a useful goal for AI research: it measures how human-like a program seems rather than how intelligent or capable it is, and comparatively little research effort has gone into passing it. They also point out that the test was designed in 1950, before computers became powerful in the modern sense of the word.
The Turing Test is only one of many proposed criteria for machine intelligence, and the notion of an "AI test" or benchmark has evolved over time to include more and more different types of tasks. Some examples are: determining whether a machine can learn, plan, understand natural language and make inferences; recognizing objects in images; mastering chess, Go and poker; and playing Atari games. In theoretical work, Marcus Hutter's AIXI model and the related Legg-Hutter "universal intelligence" measure attempt to formalize an agent's ability to master arbitrary environments.
A key subtype of machine learning is reinforcement learning, in which an agent learns through trial and error to maximize a reward signal, sometimes formalized as a utility function. This allows a machine to improve through its own experience, much as humans do. Reinforcement learning is related to evolutionary computation, a family of methods that applies Darwinian principles from biology, such as selection of the fittest and inheritance of traits, to evolve programs that perform a specific task.
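The trial-and-error loop described above can be made concrete with tabular Q-learning, one standard reinforcement learning algorithm, on a toy environment. The corridor world, the hyperparameters and the episode count are all invented for illustration.

```python
import random

random.seed(0)

# A 1-D corridor: states 0..4, reward 1 for reaching the goal state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left or right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):  # training episodes, each starting at the left end
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge the value toward the observed reward
        # plus the discounted best value of the next state
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)]
print(policy[:4])  # states left of the goal should prefer +1 (move right)
```

The agent is never told the rules of the corridor; it discovers the "walk right" policy purely from the reward signal, which is the essence of learning a utility-maximizing behavior by trial and error.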