Storing and sorting Big Data in messy DNA memory

March 5, 2013

The GLP posts this article or excerpt as part of a daily curated selection of biotechnology-related news, opinion and analysis.

The following is an edited excerpt.

The Library of Congress contains 35 million books and documents. Its Web Capture team has claimed that “As of April 2011, the Library has collected about 235 terabytes of data,” and that it adds about five terabytes per month. That’s only half the number of bits stored in a single gram of DNA.

Even the entire British Library—150 million items—can be archived in just two grams of DNA.
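The two figures above imply a storage density, which a quick back-of-the-envelope calculation can make concrete. This is a sketch, not from the article itself; it assumes decimal units (1 TB = 10¹² bytes) and simply takes the quoted claims at face value.

```python
# Sanity check of the storage figures quoted above.
# Assumptions (not from the article): 1 TB = 1e12 bytes, 8 bits per byte.

TB = 10**12  # bytes

loc_bytes = 235 * TB            # Library of Congress web archive, April 2011
gram_bits = 2 * loc_bytes * 8   # article: 235 TB is half the bits in one gram

# Implied density of the DNA medium, per the article's claim
print(f"{gram_bits:.2e} bits per gram")   # → 3.76e+15 bits per gram

# British Library: 150 million items archived in two grams of DNA
items = 150_000_000
avg_mb_per_item = 2 * gram_bits / items / 8 / 10**6
print(f"{avg_mb_per_item:.1f} MB per item on average")   # → 6.3 MB
```

At roughly 6 MB per item, the two claims are at least mutually consistent: that is a plausible average size for a digitized book or document.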

When magnetic tape was the basic storage medium, searches were time-consuming, because tape was slow and the data were stored linearly. Searching a tangled mess of DNA takes time, too. So we’ll need something really special to index and search our DNA memory.

Read the full article here: Storing And Sorting Big Data, In Messy DNA Memory

