Jul 16, 2013

Artificial intelligence has the verbal skills of a four-year-old, still no common sense



Ever since computers came about, futurists and science fiction authors have been imagining a world in which thinking machines can best the frail human brain. Over the decades it has become apparent that simply throwing more processor cycles at the problem of true artificial intelligence isn’t going to cut it. A brain is orders of magnitude more complex than any AI system developed thus far, but some are getting closer. ConceptNet 4, an AI developed at MIT, was recently put through a standard IQ test given to young children. The result? ConceptNet 4 is a slightly odd four-year-old child.

ConceptNet is a semantic network: a large store of information used to teach the system about concepts. For example, ConceptNet would know that a saxophone is a musical instrument, in the same way any other computer on the planet could "know" that fact. Unlike an ordinary database, however, ConceptNet is designed to process the relationships between things. So ConceptNet would also know that a saxophone is used extensively in jazz music, for example. It connects those two concepts and can answer questions about the relationship.
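To make the idea concrete, here is a minimal sketch of how a semantic network like this stores knowledge as (concept, relation, concept) triples. The class name and the relation labels (`IsA`, `UsedFor`) are illustrative assumptions, not ConceptNet's actual code, though ConceptNet does use relation names in this style.

```python
from collections import defaultdict

# Toy sketch of a semantic network: concepts linked by named relations.
# Not ConceptNet itself -- just the underlying data-structure idea.
class SemanticNetwork:
    def __init__(self):
        # concept -> list of (relation, other concept) edges
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def query(self, subject, relation):
        # Return every concept linked to `subject` by `relation`.
        return [obj for rel, obj in self.edges[subject] if rel == relation]

net = SemanticNetwork()
net.add("saxophone", "IsA", "musical instrument")
net.add("saxophone", "UsedFor", "jazz music")

print(net.query("saxophone", "IsA"))      # ['musical instrument']
print(net.query("saxophone", "UsedFor"))  # ['jazz music']
```

Because both facts hang off the same "saxophone" node, the system can answer questions about either relation from a single lookup — which is exactly the kind of question the network handles well.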

Researchers at the University of Illinois at Chicago decided to take ConceptNet for a spin to see how it compared to children. They used the Wechsler Preschool and Primary Scale of Intelligence Test, one of the assessments commonly administered to young children. ConceptNet passed the test, scoring on par with a four-year-old in overall IQ. However, the team points out that it would be worrisome to find a real child with scores as lopsided as the AI's. ConceptNet's results reveal some ongoing problems with artificial intelligence.

The system performed above average on parts of the test that have to do with vocabulary and recognizing the similarities between two items. In this section, the examiner might ask, “What do apples and bananas have in common?” This is right up ConceptNet’s alley. The computer did significantly worse on the comprehension questions, which test a little one’s ability to understand practical concepts based on learned information. A question from this section could be, “Why do we shake hands?”



This is the missing piece of the puzzle for ConceptNet. An artificial intelligence like this one might have access to a lot of data, but it can't draw on it to make rational judgments by leveraging implicit facts — things that we all know, but are so obvious we wouldn't even consider them relevant information. ConceptNet might know that water freezes at 32 degrees Fahrenheit, but it doesn't know how to get from that concept to the idea that ice is cold. This is basically common sense — humans (even children) have it and computers don't.
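The freezing-water example can be illustrated with a toy search over stored facts (this is not ConceptNet's actual algorithm, just a sketch of the problem; the relation names are made up). The system only "reaches" a conclusion if every link in the chain was explicitly written down — the obvious bridging fact is exactly what's missing.

```python
from collections import deque

# Facts the system was explicitly given.
facts = {
    ("ice", "MadeOf", "water"),
    ("water", "FreezesAt", "32 degrees F"),
}

def connected(start, goal, triples):
    """Breadth-first search: can `start` reach `goal` through known facts?"""
    graph = {}
    for s, _, o in triples:
        graph.setdefault(s, set()).add(o)
        graph.setdefault(o, set()).add(s)  # treat links as traversable both ways
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(connected("ice", "cold", facts))  # False -- the "obvious" link is absent

# Only after a human spells out the implicit fact does the chain close.
facts.add(("32 degrees F", "FeelsLike", "cold"))
print(connected("ice", "cold", facts))  # True
```

The point of the sketch: nothing in the machine fills in "32 degrees Fahrenheit feels cold" on its own. A child never needs that fact stated; the AI is helpless without it.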

There’s no easy way to build implicit information and common sense into an AI system. This is more than even IBM’s Watson trivia computer is capable of. Comprehension is not just finding the right facts; it is actually using those facts to reach conclusions. It may be years before science develops a program capable of taking vast databases of information and using them to arrive at new ideas.

This is simply one approach of many seeking to crack the code on AI. The MIT team is already hard at work on ConceptNet 5, which is open source and available on GitHub. Maybe one day we’ll find that common sense is an emergent property at a certain level of complexity, or maybe it will have to be painstakingly replicated in code. Until we figure it out, AI is going to be held back in preschool.
