posted by Jitesh Dundas on Wed 23rd Sep 2009 08:00 UTC
Can computers win the Turing Test? Imagine a day when a machine will say, "Move over, Turing! You can no longer consider machines to be less smart than humans! After all, we can think too. We do all the thinking and processing, and you take all the credit just because you are our creator!" That would be an awkward and exciting situation. To be honest, there is a valid argument in this imaginary conversation. As naive as it may sound for now, let me assure you that such a scenario is not far away. Applications are becoming more and more logic-oriented and increasingly intelligent.

No matter which domain they are present in, computer-based systems are becoming more and more intelligent, with their lightning speed and logic-oriented control. For many years now, computers have been able to simulate intelligence in areas such as chess, using their massive speed advantage to brute-force through all possible options. At jobs that require massive calculation proficiency, computers have proven irreplaceable. Web applications, automated control systems in cars, military and medical applications, and space missions are just a few examples of how good machines have become at their work.
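The brute-force game search mentioned above can be sketched in a few lines. The game below is a toy ("take 1 or 2 sticks; whoever takes the last stick wins") standing in for chess, since a full chess searcher would not fit here; the rules and function name are illustrative only, not from the article.

```python
def can_win(sticks):
    """Exhaustive game-tree search: True if the player to move can
    force taking the last stick, by trying every possible line of play."""
    if sticks == 0:
        # No move available: the previous player took the last stick and won.
        return False
    # The mover wins if any move leaves the opponent in a losing position.
    return any(not can_win(sticks - m) for m in (1, 2) if m <= sticks)

print(can_win(4))  # the first player can force a win from 4 sticks
```

This is exactly the "calculate all possible options" strategy: correctness comes from exhausting the game tree, and speed is what makes it feasible; real chess engines add pruning and evaluation heuristics on top.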

However, computers are still behind in some respects, like emotional intelligence and creative thinking, but they are catching up fast. The Loebner Prize tests the ability of computers to pass the Turing Test. The bronze medal (the highest award given to date) went to Fred Roberts of Germany, whose Elbot program convinced three of the judges on the panel that the messages it was sending came from a human.

Tim Berners-Lee envisioned the World Wide Web becoming more and more intelligent, with the ability to understand and interpret data. The semantic web is widely touted as the next big thing on the internet: a paradigm shift in the way data is represented, built on standardized formats that give semantic web content a uniform structure. It allows a person to start from one point on the web and connect to all kinds of related data across the net. Technologies such as RDF and Internet Business Logic are evidence that the semantic web is not far away. The Resource Description Framework (RDF) is a format that lets us connect information from libraries, directories and catalogs for semantic web representation. Internet Business Logic is a system that allows us to write and execute logic in browsers in plain English; it does not need an English dictionary and is very simple to use.
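The idea behind RDF is simple enough to sketch without any library: everything is expressed as (subject, predicate, object) triples, which is what lets data from different catalogs and directories link up into one graph. The URIs and facts below are invented for illustration, and this toy uses plain tuples rather than a real RDF parser.

```python
# A tiny triple store: each fact is a (subject, predicate, object) tuple.
triples = {
    ("ex:Turing", "ex:wrote", "ex:ComputingMachinery"),
    ("ex:ComputingMachinery", "ex:publishedIn", "ex:Mind"),
    ("ex:Turing", "ex:bornIn", "ex:London"),
}

def query(s=None, p=None, o=None):
    """Return every triple matching the pattern; None acts as a wildcard,
    like a basic SPARQL-style pattern match."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

print(query(s="ex:Turing"))  # everything the graph knows about ex:Turing
```

Because any dataset reduced to triples can be merged by simply taking the union of the sets, independently published graphs connect automatically wherever they share a URI.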

Virtual reality has applications that involve AI concepts. The term was popularized by Jaron Lanier (founder of VPL Research) in the 1980s, who described "immersive virtual reality", in which the user is completely immersed in a three-dimensional artificial environment generated entirely by a computer. The user feels as if he is actually living in a different world. Second Life is a virtual world that simulates the real world and has embedded many real-world objects into it.

Amazon.com has attempted to use AI with Mechanical Turk, its task-management service. It works like a decision-support system that allows software programs to coordinate human intelligence to perform tasks that computers cannot yet do. In effect, Amazon is trying to create an AI-driven manager that handles human-like tasks with the help of computers. Companies like Numenta, Hakia and Powerset are applying AI techniques such as "Search 2.0", neural networks and cellular automata.
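The human-in-the-loop pattern Mechanical Turk embodies can be sketched as a simple task queue: the program posts questions it cannot answer, humans answer them, and the program collects the results. The queue and the worker below are simulated stand-ins, not the real Mechanical Turk API.

```python
from collections import deque

# Tasks the software cannot solve on its own (illustrative examples).
tasks = deque(["Is this photo of a cat?", "Transcribe this receipt"])
results = {}

def human_worker(question):
    # Stand-in for a real person answering; always says "yes" here.
    return "yes"

# The program dispatches each task to a human and gathers the answers.
while tasks:
    task = tasks.popleft()
    results[task] = human_worker(task)
```

The inversion is the interesting part: the software is the manager and the humans are the subroutines, which is why Amazon describes the service as "artificial artificial intelligence".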

Ryan Tonkens, in his article "A Challenge for Machine Ethics", discusses the imminent development of machine ethics. He stresses the implementation of a moral framework that robots can follow for ethical intelligence. A. Nijholt et al., in their article "Social intelligence design in ambient intelligence", emphasize the creation of a social intelligence design (SID) for artificial intelligence.

It is the need of the hour that machines become socially intelligent. Kerstin Dautenhahn, in her paper "A Paradigm Shift in Artificial Intelligence: Why Social Intelligence Matters in the Design and Development of Robots with Human-Like Intelligence", has emphasized the need for robots to become socially intelligent in the presence of complex human social behavior. She rightly mentions an example where a robot waiter in a restaurant needs to be able to predict where people will go and sit in a crowded room.

Selmer Bringsjord, Paul Bello and David Ferrucci, in their paper "Creativity, the Turing Test, and the (Better) Lovelace Test", note that the Turing Test is leading to a race to create robots that trick or fool people. Such negative incentives could have a severe impact on the goal of artificial intelligence, i.e., to make the lives of humans easier. Thus, they proposed the Lovelace Test. They argue: "A better test is one that insists on a certain restrictive epistemic relation between an artificial agent (or system) A, its output o, and the human architect H of A: a relation which, roughly speaking, obtains when H cannot account for how A produced o". Why test computers on doing something negative? How about testing them on doing something positive? A very good observation, and worth thinking about. I wonder what Mr. Turing would say to this.

The human brain has two hemispheres, left and right, and popular accounts assign them different strengths, such as analytical versus creative thinking, though the reality is more nuanced. The brain is also divided into many regions, each handling a different kind of job: speech, memory, emotion and so on. Computer 'brains', by contrast, really only do one thing, and software is still not sophisticated enough to simulate these other kinds of brainpower. The human brain has capabilities that machines simply do not have at present.

Researchers at the Neurosciences Institute have designed a machine that thinks. This robot brain, codenamed 'Darwin', has been developed under Nobel laureate Gerald Edelman (M.D., Ph.D.). The 'Darwin' series of thinking brains began in the 1980s. The latest robot, Darwin 6, consists of a realistically designed simulation of the nervous system housed in a mobile platform called NOMAD (Neurally Organized Mobile Adaptive Device). NOMAD is an autonomous device that starts out naive and then learns from experience. It has a preference for light and for specific tastes, and its activity is controlled by its simulated brain cells. It interacts with its environment by sensing light and taste, moving around, and grabbing play blocks with striped and spotted patterns. NOMAD is being used to learn more about how the human brain works and how it shapes our behavior.
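The "starts out naive, learns from experience" behavior can be sketched as a toy value-learning loop. To be clear, this is not the Darwin/NOMAD architecture, which simulates spiking neurons; it only illustrates reward-driven preference learning, with the block types and reward values invented for the example.

```python
import random

random.seed(0)

values = {"striped": 0.0, "spotted": 0.0}   # learned preferences, initially naive
reward = {"striped": 1.0, "spotted": -1.0}  # environment: striped blocks "taste" good
alpha = 0.5                                  # learning rate

for _ in range(20):
    block = random.choice(list(values))      # encounter a random block
    # Nudge the stored value toward the reward actually experienced.
    values[block] += alpha * (reward[block] - values[block])

# After a few encounters the agent has acquired a preference for striped blocks.
```

The agent's behavior after training is driven entirely by values it was not born with, which is the essence of the claim that NOMAD learns rather than executes a fixed program.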

Researchers are also proposing cars made up of nanorobots that can change their shape to suit the situation on the road; for example, the car would change shape to avoid an accident or to achieve optimum efficiency. The use of nanorobots in manufacturing could make cars even stronger and smarter, and will hopefully reduce the number of road deaths. Cars today already have radar sensor systems (the 'eyes' of the car) and automated braking systems that detect problems in our driving and watch for emergencies. If the car senses that an emergency is underway, it takes control and applies the brakes on its own. Nissan has created a robot car named Pivo whose cabin can rotate 360 degrees, letting it move and see in all directions and maneuver through narrow paths. With its sophisticated intelligence it can suggest the best parking spaces, favoring the cheapest and most reliable ones.
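The emergency-braking decision described above reduces to a simple rule: the radar reports distance and closing speed, and the system brakes when the projected time to collision drops below a threshold. The function name, units and 2-second threshold below are invented for illustration, not taken from any real vehicle.

```python
def should_brake(distance_m, closing_speed_mps, threshold_s=2.0):
    """Trigger automatic braking when the time to collision,
    distance / closing speed, falls below threshold_s seconds."""
    if closing_speed_mps <= 0:
        return False  # not closing on the obstacle, so no emergency
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < threshold_s

print(should_brake(30, 20))  # 30 m ahead, closing at 20 m/s: 1.5 s to impact
```

Production systems layer sensor fusion, driver-alert stages and partial braking on top of this, but time-to-collision thresholds remain the core of the decision.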
