
Machines now read faster than we can. But do they understand the words?

November 1, 2019

In an April 2018 paper coauthored with collaborators from the University of Washington and DeepMind, the Google-owned artificial intelligence company, [computational linguist Sam] Bowman introduced a battery of nine reading-comprehension tasks for computers called GLUE (General Language Understanding Evaluation). The test was designed as “a fairly representative sample of what the research community thought were interesting challenges,” said Bowman, but also “pretty straightforward for humans.”

The machines bombed. Even state-of-the-art neural networks scored no higher than 69 out of 100 across all nine tasks: a D-plus, in letter grade terms.

That dismal appraisal would be short-lived. In October 2018, Google introduced a new method nicknamed BERT (Bidirectional Encoder Representations from Transformers). It produced a GLUE score of 80.5. On this brand-new benchmark designed to measure machines’ real understanding of natural language — or to expose their lack thereof — the machines had jumped from a D-plus to a B-minus in just six months.


But is AI actually starting to understand our language — or is it just getting better at gaming our systems?

“We know we’re somewhere in the gray area between solving language in a very boring, narrow sense, and solving AI,” Bowman said.

Read full, original post: Machines Beat Humans on a Reading Test. But Do They Understand?

The GLP aggregated and excerpted this article to reflect the diversity of news, opinion, and analysis. Click the link above to read the full, original article.
