
What is artificial intelligence?

Kris Hammond | April 13, 2015
What is artificial intelligence (AI), and what is the difference between general AI and narrow AI?


There seems to be a lot of disagreement and confusion around artificial intelligence right now.

We're seeing ongoing debate about evaluating AI systems with the Turing Test, warnings that hyper-intelligent machines are going to slaughter us, and equally frightening, if less dire, warnings that AI and robots are going to take all of our jobs.

In parallel we have also seen the emergence of systems such as IBM Watson, Google's Deep Learning, and conversational assistants such as Apple's Siri, Google Now and Microsoft's Cortana. Mixed into all this has been crosstalk about whether building truly intelligent systems is even possible.

A lot of noise.

To get to the signal we need to understand the answer to a simple question:  What is AI?

AI: A textbook definition

The starting point is easy.  Simply put, artificial intelligence is a sub-field of computer science. Its goal is to enable the development of computers that are able to do things normally done by people -- in particular, things associated with people acting intelligently.

John McCarthy, who later founded Stanford's AI lab, coined the term in 1956 during what is now called the Dartmouth Conference, where the core mission of the AI field was defined.

If we start with this definition, any program can be considered AI if it does something that we would normally think of as intelligent in humans.  How the program does it is not the issue, just that it is able to do it at all. That is, it is AI if it is smart, but it doesn't have to be smart like us.

Strong AI, weak AI and everything in between

It turns out that people have very different goals with regard to building AI systems, and they tend to fall into three camps, based on how closely the machines they are building line up with how people work.

For some, the goal is to build systems that think exactly the same way that people do. Others just want to get the job done and don't care if the computation has anything to do with human thought. And some are in-between, using human reasoning as a model that can inform and inspire but not as the final target for imitation.

The work aimed at genuinely simulating human reasoning tends to be called "strong AI," in that any result can be used not only to build systems that think but also to explain how humans think. However, we have yet to see a real model of strong AI or systems that are actual simulations of human cognition, as this is a very difficult problem to solve. When that time comes, the researchers involved will certainly pop some champagne, toast the future and call it a day.
