Neuroevolution: Artificial intelligence learns by adapting and evolving

January 16, 2018

[An] old idea—improving neural networks not through teaching, but through evolution—is revealing its potential. Five new papers from Uber in San Francisco, California, demonstrate the power of so-called neuroevolution to play video games, solve mazes, and even make a simulated robot walk. Neuroevolution [is] a process of mutating and selecting the best neural networks.

At Uber, such applications might include driving autonomous cars, setting customer prices, or routing vehicles to passengers. But the team, part of a broad research effort, had no specific uses in mind when doing the work. In part, they merely wanted to challenge what Jeff Clune, [an] Uber co-author, calls “the modern darlings” of machine learning: algorithms that use something called “gradient descent.”

The most novel Uber paper uses a completely different approach that tries many solutions at once. A large collection of randomly programmed neural networks is tested (on, say, an Atari game), and the best are copied, with slight random mutations, replacing the previous generation. The new networks play the game, the best are copied and mutated, and so on for several generations.
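
To make that mutate-and-select loop concrete, here is a minimal sketch in Python. It is not Uber's code: the fitness function is a toy stand-in for an Atari score, and the network sizes, population size, and mutation scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_network():
    # One hidden layer of random weights: 4 inputs -> 8 hidden units -> 2 actions.
    return {"w1": rng.normal(0, 1, (4, 8)), "w2": rng.normal(0, 1, (8, 2))}

def act(net, obs):
    # Forward pass: tanh hidden layer, then take the higher-scoring action.
    hidden = np.tanh(obs @ net["w1"])
    return int(np.argmax(hidden @ net["w2"]))

def fitness(net):
    # Toy stand-in for a game score: reward picking action 1 when the first input is positive.
    score = 0
    for _ in range(50):
        obs = rng.normal(0, 1, 4)
        if act(net, obs) == (1 if obs[0] > 0 else 0):
            score += 1
    return score

def mutate(net, sigma=0.1):
    # Copy a network and add slight Gaussian noise to every weight.
    return {key: w + rng.normal(0, sigma, w.shape) for key, w in net.items()}

# Test a population, keep the best, and refill the next generation with mutated copies.
population = [make_network() for _ in range(50)]
for generation in range(20):
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:10]
    population = elites + [mutate(elites[rng.integers(len(elites))]) for _ in range(40)]
    print(f"generation {generation}: best score {fitness(ranked[0])} / 50")
```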

[G]oing forward, the best solutions might involve hybrids of existing techniques, which each have unique strengths. Evolution is good for finding diverse solutions, and gradient descent is good for refining them.
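
As a rough illustration of that division of labor, the sketch below (an assumption for this excerpt, not anything from the papers) evolves a population over a made-up multimodal function, so survivors land in several different basins, and then refines each candidate with a few gradient-descent steps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up multimodal objective: many local minima (from sin) plus a slow bowl (from x**2).
def f(x):
    return np.sin(3 * x) + 0.1 * x ** 2

def grad_f(x):
    return 3 * np.cos(3 * x) + 0.2 * x

# Evolution phase: mutate-and-select scatters survivors across several basins.
population = rng.uniform(-5, 5, 30)
for _ in range(40):
    elites = population[np.argsort(f(population))[:6]]
    population = np.concatenate([elites, np.repeat(elites, 4) + rng.normal(0, 0.5, 24)])

# Refinement phase: a few gradient-descent steps polish each diverse candidate.
for x in np.unique(np.round(population[np.argsort(f(population))][:6], 1)):
    for _ in range(100):
        x -= 0.05 * grad_f(x)
    print(f"refined candidate: x = {x:.3f}, f(x) = {f(x):.3f}")
```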

Read full, original post: Artificial intelligence can ‘evolve’ to solve problems

The GLP aggregated and excerpted this article to reflect the diversity of news, opinion, and analysis. Click the link above to read the full, original article.