Oxford genetic ethicist chronicles breach of her own genetic information

(Credit: Kevin Dooley/Flickr)

Last week, online photo printer Shutterfly sent a mass email to all its female customers in the U.S., congratulating them on the recent births of their new children. Twenty-four hours later, the company apologized for the mistake, but not before freaking out a lot of customers about the reliability of its information management.

In a similar but more sensitive snafu, the UK's Personal Genome Project spammed a list of recent volunteers, including genetic ethicist Paula Boddington. Despite her extensive knowledge of the field, she made some of the same assumptions many people might:

I’d expressed an interest in taking part in this project which aims to sequence the genomes of hundreds of thousands of people and make these available, together with trait information, to researchers. There are clear potential worries about privacy here, as there is a potential to identify individuals from such a rich source of information. Nonetheless, I was excited to take part. After all, many of the people I know and love the most would not be alive today were it not for advances in medical science which have helped to treat diseases such as cancer and type 1 diabetes. In the past, many have risked life and limb for medical science. What was the potential of a little breach of privacy to worry about? Besides which, there has been considerable attention to ethics, privacy, and security around this project. There’s a whole ethics crew. Presumably they only hire the crème de la crème of data and IT experts. Surely these guys could be trusted to use our information wisely, and to do all they could to prevent irresponsible use?

And yet, it seems they could not. Although no actual genetic information was leaked, people's names and identities were released. If the purpose of the project is anonymity, this was a big failure, one that violated the basic premises of ethical trust as Boddington defines them:

There is a simple lesson to be learned from this – the fragility of trust. There are two basic strategies for ensuring ethical conduct of research: one, through robust regulations and practices which protect participants; two, through the trustworthiness of those managing research – that they have the competence and virtues necessary to handle the information entrusted to them, for the betterment of humanity whilst protecting the individual.

Then, unfortunately, she circles back, saying that everyone who volunteered should at least have anticipated the possibility of a security breach, but that the motivation to benefit humanity through participation would outweigh any hesitation over privacy.

Regardless of whether participants could have a reasonable expectation of privacy, the endeavor has been seriously damaged by the email-list mishap. Boddington is regretful:

[The real problem is] that trust evaporated overnight with this idiotic breach of security. The entire point of this project depends upon something utterly crucial to the heart of it: the trust that those running it know how to manage data. The project is all about good data management. And they can’t even do that. A glitch in how the email list is managed can be sorted out in a minute or two. A breach of trust takes much, much longer.
