The healthcare sector is one of the most prominent adopters of artificial intelligence (“AI”) tools. The delivery of healthcare is changing rapidly as AI expands and becomes better at making predictions. To take just a few examples, AI is now used in medical imaging to assist specialists in identifying time-critical findings (everything from stroke large-vessel occlusion alerts, to highlighting suspicious regions on a prostate biopsy, to whole-body scans that help dermatologists detect skin cancer). AI is also being deployed in wearables for arrhythmia detection and in healthcare delivery and operations (e.g., AI medical scribes that “listen” to office visits and draft visit notes, and tools that automate prior-authorization workflows).
As the use of AI expands, a key question confronting healthcare systems and providers is who is responsible if an AI-assisted recommendation leads to patient harm. There is no clear-cut answer, and the courts have not yet provided significant guidance.
Historically, when a product injures a patient, litigants look to product liability and tort doctrines to determine who is responsible. Generally, under product liability doctrine a plaintiff must show that there was a defect in the product (a design, manufacturing, or warning defect), that the product was used in a reasonably foreseeable way, and that the defect caused the injury. In a negligence case, the plaintiff must show that the defendant owed a duty of care to the plaintiff and breached that duty, resulting in injury.
However, AI applications in healthcare are relatively new, and there is limited caselaw pertaining directly to AI-related liability in the healthcare sector. Therefore, courts have looked to software liability cases for guidance. But software is not a physical object, so courts have been hesitant to apply product liability doctrines to AI-related injury claims. In their paper entitled “Understanding Liability Risk from Healthcare AI,” authors Michelle M. Mello and Neel Guha identify three main trends in cases related to medical software or AI:
- Cases where defects in software used to manage care or resources cause harm to patients, who in turn sue the developer of the software and/or the hospital for negligently maintaining it.
- Cases where patients sue after harm occurs when physicians consult software to make care decisions, such as a technician screening patients for conditions or a doctor generating medication regimens.
- Cases where software embedded within medical devices malfunctions.
See Michelle M. Mello & Neel Guha, “Understanding Liability Risk from Healthcare AI,” Stanford University Human-Centered Artificial Intelligence (HAI), Policy Brief, HAI Policy & Society, Feb. 2024; see also Michelle M. Mello & Neel Guha, “Understanding Liability Risk from Using Healthcare Artificial Intelligence Tools,” New England Journal of Medicine 390 (January 2024): 271-287.
Health systems and providers can reduce potential exposure by transferring risk from themselves to the provider of the AI program. This is mainly accomplished through risk-transfer language in licensing agreements and contracts. Licensing agreements “should require developers to provide information that allows healthcare organizations to effectively assess and monitor risk, including information on assumptions regarding the model’s ingested data, validation processes, and recommendations for auditing model performance.” See Michelle M. Mello & Neel Guha, “Understanding Liability Risk from Healthcare AI,” Stanford University Human-Centered Artificial Intelligence (HAI), Policy Brief, Feb. 2024. Contracts and licensing agreements should also contain indemnification provisions that favor the healthcare system or provider for errors in the AI’s outputs.
As AI becomes more embedded in healthcare technology, I expect these types of cases to increase in volume. The key for healthcare systems and providers is to remember that AI is a tool to assist humans in making treatment decisions, and to have policies and procedures in place for the responsible and ethical use of AI in patient treatment.
By Jason Scott, AI Advisory Regulatory Affairs
