Many AIs that appear to understand language, scoring better than humans on a common set of comprehension tasks, don’t notice when the words in a sentence are jumbled up, which shows that they don’t really understand language at all. The problem lies in the way natural-language processing (NLP) systems are trained; it also points to a way to make them better.
Researchers at Auburn University in Alabama and Adobe Research discovered the flaw when they tried to get an NLP system to generate explanations for its behavior, such as why it claimed different sentences meant the same thing. When they tested their approach, they realized that shuffling words in a sentence made no difference to the explanations. “This is a general problem to all NLP models,” says Anh Nguyen at Auburn University, who led the work.
The team looked at several state-of-the-art NLP systems based on BERT (a language model developed by Google that underpins many of the latest systems, including GPT-3). All of these systems score better than humans on GLUE (General Language Understanding Evaluation), a standard set of tasks designed to test language comprehension, such as spotting paraphrases, judging whether a sentence expresses positive or negative sentiment, and verbal reasoning.
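The word-shuffling test itself is easy to reproduce in spirit. The sketch below is a rough illustration rather than the researchers' actual setup: it runs a Hugging Face sentiment classifier (an assumed stand-in for the BERT-based GLUE models in the study) on a sentence and on a shuffled copy of it, to check whether the prediction changes.

```python
import random
from transformers import pipeline

# Load a BERT-family sentiment classifier; the pipeline default is a
# DistilBERT model fine-tuned on SST-2, one of the GLUE tasks mentioned above.
# This is an illustrative stand-in, not the models used in the study.
classifier = pipeline("sentiment-analysis")

sentence = "The plot was thin but the acting made the film worth watching."
words = sentence.split()
random.shuffle(words)
shuffled = " ".join(words)

print(sentence, "->", classifier(sentence))
print(shuffled, "->", classifier(shuffled))
# A model sensitive to word order would often change its prediction after
# shuffling; the finding reported here is that such models frequently do not.
```

In the researchers' version of this test, the shuffled input also left the model's generated explanations unchanged, which is what first exposed the flaw.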
To read more, see the full article, “Jumbled-up sentences show that AIs still don’t really understand language”.