AI ethics can’t come from human medicine: Principles that guide doctors would ‘make no sense’ to a machine

November 21, 2019

The four core principles of medical ethics are respect for autonomy (patients should have control over how they are treated), beneficence (doctors should act in the best interest of patients), non-maleficence (doctors should avoid causing harm) and justice (healthcare resources should be distributed fairly).

More than 80 AI ethics reports have been published, and while they are far from homogeneous, similar themes of respect, autonomy, fairness, and prevention of harm run through most. These seem like reasonable principles to apply to the development of AI. The problem, says [Brent] Mittelstadt, is that while principles are an effective tool in the context of a discipline like medicine, they simply don't make sense for AI.

AI has no equivalent to a patient, and the goals and priorities of AI developers can be very different depending on how they are applying AI and whether they are working in the private or public sphere.

As a result, AI ethics has focused on high-level principles, but at that level of abstraction the principles are too vague to guide action. Ideas like fairness or dignity are not universally agreed upon, leaving each practitioner to decide how to implement them.

Read full, original post: Where Should AI Ethics Come From? Not Medicine, New Study Says

The GLP aggregated and excerpted this article to reflect the diversity of news, opinion, and analysis. Click the link above to read the full, original article.
