From conceptual thinking in academia to practical approaches being adopted by the leading technology companies, we bring you a curated set of perspectives. We explore how Natural Language Understanding and related Understanding Systems will enable Artificial General Intelligence sometime in the future.
In the meantime, overcoming key Understanding challenges such as reasoning, inference, temporal analysis and math operations has helped us at AUI Systems apply Understanding to industry solutions.
Published online in Quanta Magazine on 16 Dec 2021
Author – Melanie Mitchell
It’s simple enough for AI to seem to comprehend data, but devising a true test of a machine’s knowledge has proved difficult. Melanie Mitchell, who is the Davis Professor of Complexity at the Santa Fe Institute and the author of Artificial Intelligence: A Guide for Thinking Humans, discusses the challenges faced by AI systems like Watson and GPT-3 in understanding language. She opines that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding. The limitations of ‘language models’ are exposed by Google-proof tests like WinoGrande.
READ THE ORIGINAL ARTICLE
Published online in Towards Data Science on 09 Jun 2021
Author – Gadi Singer
Gadi Singer is a VP at Intel Labs and a global thought leader and influencer. In this article, continuing a series on the choices for capturing information and using knowledge in AI systems, he advances the concept of an information-centric classification of AI systems. The article delves into machine understanding, context-based decision making, and other aspects of higher machine intelligence. A key aspect of utilizing knowledge is the interplay between knowledge representation and reasoning. Endowing AI with the ability to understand and operate at a higher level of intelligence is associated with deeply structured knowledge.
READ THE ORIGINAL ARTICLE
Published online on Elemental Cognition’s Medium page on 15 Feb 2021
Author – Elemental Cognition
The Elemental Cognition team explains how mental models are fundamental to human understanding, even more than the language used to communicate them. These shared models include spatial and temporal understanding, are built through learning and inference, and represent how the world works; humans draw upon them when understanding language. Today’s statistics-based language models struggle to understand and infer from even kindergarten-level stories, because they lack such mental models.
READ THE ORIGINAL ARTICLE
Published online in Knowable Magazine on 14 Oct 2020
Author – Anil Ananthaswamy
An innovative, hybrid approach called neuro-symbolic AI is highlighted in this article. Researchers at IBM and MIT are finding that combining neural networks with “good old-fashioned AI”, or symbolic AI, can break through AI’s current limitations. While neural nets can process huge amounts of data, they struggle to figure out abstract relations between objects and to reason about them. Symbolic AI fills this gap, laying the groundwork for a hybrid AI system that significantly reduces the amount of training data needed and makes its reasoning explainable.
READ THE ORIGINAL ARTICLE
Published online in The Gradient on 25 Jan 2020
Author – Gary Marcus
Gary Marcus is a scientist and the CEO of Robust.AI. He provides insights on innateness and empiricism – two classic hypotheses on the development of language and cognition. Empiricism has been tested by building massive neural networks. One of these, GPT-2, is trained on a massive data set using Deep Learning, but how does it perform when language and sentences are modeled as vectors? As it turns out, GPT-2 struggles to truly Understand language or to perform human-like reasoning and logic. Current systems cannot really Understand who did what to whom, when, and where, or relate temporal data and causality.
READ THE ORIGINAL ARTICLE
Published online in The New York Times on 05 Nov 2018
Author – Melanie Mitchell
Melanie Mitchell, who is a Professor of Computer Science at Portland State University, opines on the formidable capabilities of today’s A.I. to solve data-driven problems such as financial fraud detection, while challenging the conventional predictions that human-level A.I. will be achieved in the next few years. Today’s systems lack real Understanding of the inputs they process and the outputs they produce. Understanding grounded in common-sense knowledge is the key to taking machines toward human cognition.
READ THE ORIGINAL ARTICLE
Published online in WIRED on 02 Feb 2018
Author – Jason Pontin
This article delves into the remarkable advances in A.I. such as Deep Learning, and why natural language appears to lie beyond such techniques. Statistical methods like Deep Learning are very good at pattern-recognition problems, but scaling them up to a different kind of problem – Understanding language – remains a challenge. Where young children pick up the rules of language and common-sense knowledge by observing causal relationships, today’s A.I. has so far failed.
READ THE ORIGINAL ARTICLE
Published online in The Economist, Technology Quarterly on 01 May 2017
Author –
This fascinating article traces the history of communication between humans and machines, beginning with the iconic scene in “2001: A Space Odyssey” where HAL turns on his human companions. From the work begun by IBM in the 1950s on machine translation, to the rules-based approach to understanding language, to the Deep Learning methods of today, the article chronicles each approach along with its strengths and shortcomings. There have been successes in language translation and speech-to-text technology, yet machines do not yet Understand the nuances of language or the meanings of sentences, nor can they gather the common knowledge required to provide context.
READ THE ORIGINAL ARTICLE