“The Signal and the Noise” in pharmacogenomics

Nate Silver, author of the FiveThirtyEight blog at the New York Times, has made statistics and quantitative modeling “cool” with his dead-on political projections. In his bestselling book, The Signal and the Noise, he outlined how we can distinguish true informational signals from the universe of noisy data, explaining how probability theory can be applied to politics, the economy, weather, gambling, sports—literally anything. That includes, it appears, human genomics.

A number of news outlets picked up a fascinating news release on Ramy Arnaout, MD, DPhil, a founding member of the Genomic Medicine Initiative at Beth Israel Deaconess Medical Center (BIDMC), who is using cost-benefit analysis and quantitative modeling to analyze which drug prescriptions can be better matched to a person’s genome. Arnaout and his team published the results of their analysis in a recent issue of Clinical Chemistry.

There is lots of money at stake—it’s estimated that drug-related adverse outcomes cost the health-care system upwards of $80 billion a year. Arnaout is convinced that applying “Monte Carlo modeling” to choosing and dosing drug prescriptions according to a person’s genome could save billions of dollars each year.
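The article does not spell out the team's model, but the general idea of Monte Carlo cost-benefit modeling can be sketched in a few lines: simulate many hypothetical patient populations, count the adverse events that genome-guided prescribing might avoid, and subtract the cost of testing. All of the parameter values below (event rates, costs) are illustrative placeholders, not figures from Arnaout's study.

```python
import random

def simulate_savings(n_patients=1_000, n_trials=200,
                     adverse_rate=0.05, avoidable_frac=0.3,
                     cost_per_event=20_000, test_cost=100):
    """Monte Carlo sketch of net savings from genome-guided prescribing.

    All parameters are hypothetical placeholders chosen for illustration;
    they are not taken from the Clinical Chemistry paper.
    """
    results = []
    for _ in range(n_trials):
        # Draw the number of adverse drug events without genomic guidance.
        events = sum(random.random() < adverse_rate for _ in range(n_patients))
        # Assume some fraction of those events could be avoided by testing.
        avoided = sum(random.random() < avoidable_frac for _ in range(events))
        # Net savings = avoided event costs minus the cost of testing everyone.
        results.append(avoided * cost_per_event - n_patients * test_cost)
    return sum(results) / len(results)
```

Repeating the draw many times yields a distribution of net savings rather than a single point estimate, which is the practical appeal of the Monte Carlo approach for forecasting under uncertainty.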

The blood-thinning drug warfarin is a prime example. In some cases, patients’ genomes contain variants that make the standard dose of warfarin too high for them, and those individuals are likely to experience bleeding, an extremely dangerous side effect. According to Arnaout, three-quarters of the variability in warfarin dosing requirement is due to genomic variants. Scientists have already identified a set of variants in six specific genes that explain two-thirds of the variability.


A lot of work remains to be done. The BIDMC team has developed a model to estimate how much it would cost to further develop and implement a pharmacogenomics system that would cut these adverse outcomes in half. The cost, while considerable, is a drop in the bucket relative to the savings: they estimate less than $10 billion, spread out over approximately 20 years.
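A back-of-envelope check using the article's own figures shows why the implementation cost is small relative to the savings. This assumes, purely for illustration, that halving adverse outcomes halves their annual cost; the team's actual model is more detailed.

```python
# Figures quoted in the article.
ADVERSE_COST_PER_YEAR = 80e9   # estimated annual cost of drug-related adverse outcomes
TARGET_REDUCTION = 0.5         # the modeled goal: cut these outcomes in half
BUILD_COST = 10e9              # upper-bound cost to build out the system
BUILD_YEARS = 20               # roughly how long the build-out would take

# Assumption (ours, not the paper's): savings scale linearly with the reduction.
annual_savings = ADVERSE_COST_PER_YEAR * TARGET_REDUCTION  # $40B per year
annual_cost = BUILD_COST / BUILD_YEARS                     # $0.5B per year

print(annual_savings / annual_cost)  # prints 80.0
```

Under these rough assumptions, each annualized dollar of investment would offset on the order of eighty dollars in adverse-outcome costs.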

“If you look across medicine, you can see specific places here and there where genomics is really starting to change things, but it’s been hard to know how it all adds up in the big picture,” says Arnaout, who is also an Assistant Professor of Pathology at Harvard Medical School.

“Quantitative modeling is a standard approach for forecasting and setting expectations in many fields as we all remember from the recent presidential election and from the hurricane season. Genomics is so important and is so often on the minds of our patients, students and staff, that it seemed like a good idea to use modeling to get some hard numbers on where we’re headed.”

As data for the model, the authors selected eight associations involving six prescription drugs—clopidogrel, warfarin, escitalopram, carbamazepine, the nicotine-replacement patch and abacavir—and one drug class, the statin class of anticholesterol drugs.


“The results were surprising,” says Arnaout. “Before we did this work, I couldn’t have told you whether it would take a million dollars or a trillion dollars or whether it would take five years or a hundred years. But now, we’ve got a basis for thinking that we’re looking at single-digit billions of dollars and a couple of decades. That may sound like a lot or a little, depending on your point of view. But with these numbers, we can now have a more informed conversation about planning for the future of genomic medicine.”

Jon Entine, executive director of the Genetic Literacy Project, is senior fellow at the Center for Health & Risk Communication and STATS at George Mason University.
