The Food and Drug Administration has issued new guidelines on how it will regulate mobile health software and products that use artificial intelligence to help doctors decide how to treat patients.
The guidelines, contained in a pair of documents released [September 26], clarify the agency’s intent to focus its oversight powers on AI decision-support products that are meant to guide treatment of serious or critical conditions, but whose rationale cannot be independently evaluated by doctors.
To further define the types of products that will require greater scrutiny, the FDA gave the example of a clinical decision support (CDS) tool that, without explaining its rationale, identifies hospitalized patients with type 1 diabetes who are at high risk of severe heart problems following surgery. If such a product were to give an inappropriate recommendation, the agency said, it could result in serious harm to the patient.
The guidelines seek to strike a delicate balance between ensuring safety and supporting innovation at a time of accelerating experimentation and investment in artificial intelligence and digital health products. The lack of clear ground rules to date has created a regulatory gray area.
Read full, original post: FDA clarifies how it will regulate digital health and artificial intelligence