A recently published study suggests that physicians who follow recommendations from artificial intelligence (AI) software may be better shielded from liability than previously thought. Past research has indicated that laypeople (i.e., the general public) are resistant to AI, but this study shows that potential jurors are not strongly opposed to a physician’s acceptance of AI recommendations.
The study, published in the Journal of Nuclear Medicine, found that physicians who accept or follow the advice of AI technology may face a lower risk of medical malpractice liability. Using an online survey of 2,000 Americans, the researchers examined how potential jurors would judge malpractice liability in cases where medical professionals used AI to guide diagnosis and treatment decisions.
Using the researchers’ own example:
Imagine that a woman has recently been diagnosed with ovarian cancer. To help determine the dosage of a chemotherapy drug, the treating hospital has adopted routine use of an artificial intelligence (AI) precision medicine tool. The AI tool advises, on the basis of the patient’s file, that a nonstandard dosage is most likely to succeed. But what if something goes wrong as a result of the treatment? Will the physician be judged harshly for accepting unorthodox treatment advice from a computer? Or might the physician be judged even more harshly for rejecting advice from a state-of-the-art tool?
About the research
For the study, each of the 2,000 participants read one of four case studies. In each, an AI system offered a drug dosage recommendation for a patient, and the physician’s subsequent decision resulted in harm to the patient. The four case studies crossed two factors: whether the AI recommended a standard or nonstandard dosage, and whether the physician followed or rejected that recommendation.
Participants then assessed the physician’s decision: would most reasonable physicians have made the same choice in similar circumstances? A higher score indicated stronger agreement with the physician’s decision and, in turn, a lower assessment of liability for the patient’s injuries.
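The four vignettes amount to a 2×2 design, crossing the AI’s recommendation with the physician’s response. A minimal sketch of that structure (the labels are illustrative, not the study’s own wording):

```python
from itertools import product

# The two factors varied across the four vignettes (a hypothetical
# reconstruction of the study's 2x2 design; labels are illustrative).
ai_recommendation = ["standard dosage", "nonstandard dosage"]
physician_choice = ["followed AI", "rejected AI"]

# Crossing the factors yields the four case studies participants read.
vignettes = [
    {"ai_recommendation": rec, "physician_choice": choice}
    for rec, choice in product(ai_recommendation, physician_choice)
]

for i, v in enumerate(vignettes, start=1):
    print(f"Vignette {i}: AI advised {v['ai_recommendation']}; "
          f"physician {v['physician_choice']}")
```

Each participant saw only one of these four vignettes before rating the physician’s decision.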
Researchers found that two main factors lowered participants’ judgments of liability:
- Following a reasonable standard of care
- Following the recommendation of AI tools
The study authors noted, however, that they found no similar liability shield when physicians rejected an AI recommendation for nonstandard care in order to provide standard care.
Nonetheless, the researchers believe these results “provide guidance to physicians who seek to reduce liability, as well as a response to recent concerns that the risk of liability in tort law may slow the use of AI in precision medicine. Contrary to the predictions of those legal theories, the experiments suggest that the view of the jury pool is surprisingly favorable to the use of AI in precision medicine.”