No longer a hypothetical tool, artificial intelligence (A.I.) has planted its feet in health care globally. Google is already using it in some of India’s hospitals to fight diabetic blindness, and in 2017 the Food and Drug Administration (FDA) cleared the first machine learning (ML) algorithm to measure how much blood flows through the heart.
After explaining how artificial intelligence and machine learning work in layman’s terms, we continue our series on artificial intelligence by presenting the debate over the technology’s role in health care.
Many have extolled A.I. as a way to fix a broken health care system by enhancing a doctor’s ability to treat patients more effectively and efficiently. Others fear it could break the system further by entrenching the problems already present.
According to The Atlantic, medicine has historically struggled with “a lack of diversity in health studies and clinical trials,” despite researchers knowing that different ethnicities have genetic mutations that increase their risk for diseases and affect their response to medicine. African American children, for example, have died from asthma at ten times the rate of non-Hispanic white children.
Since A.I. learns from datasets, this lack of representation poses a problem for A.I.’s efficacy with underrepresented groups, potentially worsening health disparities. For instance, an A.I. trained to diagnose melanoma from a primarily Caucasian dataset could lead to harmful conclusions for black patients.
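To make the concern concrete, here is a minimal sketch (synthetic data and scikit-learn, not any real diagnostic model) of how a classifier trained on data dominated by one group can look accurate overall while underperforming on an underrepresented group whose disease signal differs. The group sizes and the feature shift are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a model trained mostly on one group can perform
# noticeably worse on an underrepresented group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'patient features': the disease signal sits in a slightly
    different place for each group (the shift is purely hypothetical)."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + shift * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)
    return X, y

# Training set: 95% group A, 5% group B (the underrepresented group).
Xa, ya = make_group(9500, shift=0.0)
Xb, yb = make_group(500, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets for each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
# Typical output: roughly 0.9 for group A, noticeably lower for group B.
```

The overall accuracy number hides the gap; it only becomes visible when performance is reported per group, which is one reason auditing A.I. systems across populations matters.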
Dhruv Khullar, doctor and assistant professor at Weill Cornell Medicine, further argues that because A.I. is trained on real-world data, it risks “perpetuating the economic and social biases that contribute to health disparities in the first place.” If, for example, poorer patients do worse after receiving chemotherapy for cancer, algorithms may conclude such patients are less likely to benefit from further treatment and recommend against it. And if such algorithms are left unchecked, these biases can become automated and invisible in the A.I.’s black box.
While A.I. could be harmful for underrepresented populations, it has already proved helpful for some underserved patient populations. Nicholson Price, assistant professor of law at the University of Michigan, argues that the replication and democratization of expertise via A.I. could make capabilities currently limited to a relatively small number of specialists available to a broader set of patients.
Google’s A.I. system detecting diabetic blindness works to this end. According to The New York Times, nearly 70 million Indians are diabetic, and all are at risk of blindness. The country does not have enough doctors to properly screen them all; for every one million people in India, there are only 11 eye doctors. The system screens patients in Madurai, one of the largest cities in southern India, and its surrounding villages, where few if any eye doctors are available.
Price also argues that by enhancing precision and personalization, A.I. could help to push the frontiers of medicine. He points to an “artificial pancreas” device powered by A.I. that learns the glucose response of a particular patient and tailors insulin dosage to keep glucose levels within safe limits over the course of a day. By providing a more personalized treatment, this technology offers more than earlier options and could result in better patient care.
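The toy loop below sketches the closed-loop idea behind such a device under heavily simplified assumptions: a controller doses insulin in proportion to how far glucose sits above a target and gradually learns a patient-specific sensitivity from the responses it observes. The numbers, target, and update rule are illustrative, not the actual device’s algorithm.

```python
# Toy closed-loop sketch of the "artificial pancreas" idea; not a real device.
TARGET = 110          # mg/dL glucose target (illustrative)
sensitivity = 30.0    # initial guess: mg/dL drop per unit of insulin

def choose_dose(glucose):
    """Dose in proportion to how far glucose sits above the target."""
    return max(0.0, (glucose - TARGET) / sensitivity)

def update_sensitivity(dose, observed_drop, lr=0.2):
    """Nudge the patient-specific sensitivity toward what was actually observed."""
    global sensitivity
    if dose > 0:
        sensitivity += lr * (observed_drop / dose - sensitivity)

# Simulated day: the patient's true (unknown) sensitivity is 45 mg/dL per unit.
glucose = 180.0
for hour in range(6):
    dose = choose_dose(glucose)
    drop = 45.0 * dose             # hypothetical true response to the dose
    update_sensitivity(dose, drop)
    glucose = glucose - drop + 10  # meals/metabolism push glucose back up a bit
    print(f"hour {hour}: dose={dose:.2f}U glucose={glucose:.0f} est. sensitivity={sensitivity:.1f}")
```

The point of the sketch is the personalization: the same controller adapts its dosing to whatever response it observes in a particular patient, rather than applying one fixed rule to everyone.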
In finding patterns and relationships not easily captured by doctors, A.I. also can improve a doctor’s capability in “identifying illnesses, making prognoses and suggesting treatment,” according to Price. Researchers at M.I.T. created an A.I. system to improve the detection and diagnosis of lesions seen on mammograms. Current tools make it difficult to know whether a lesion is harmful, especially if the patient has dense breast tissue, sometimes leading to false positive results. The system, which is in use at Massachusetts General Hospital, detects similarities between a patient’s mammogram and a database of 70,000 images for which the diagnosis was known.
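Conceptually, this is a similarity search over labeled cases. The sketch below (synthetic feature vectors, scikit-learn’s nearest-neighbor index) shows the shape of that lookup; the real system’s image features and matching method are not described here, so everything beyond the database size is an assumption for illustration.

```python
# Minimal sketch of the similarity idea: represent each known case as a feature
# vector, find the most similar known cases for a new patient, and use their
# confirmed diagnoses as evidence. Real feature extraction from mammograms
# (e.g., with a neural network) is omitted; the vectors here are synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n_cases = 70_000                               # mirrors the database size above
features = rng.normal(size=(n_cases, 64))      # stand-in for learned image features
diagnoses = rng.integers(0, 2, size=n_cases)   # 0 = benign, 1 = malignant (synthetic)

index = NearestNeighbors(n_neighbors=15).fit(features)

new_patient = rng.normal(size=(1, 64))         # feature vector for a new mammogram
_, neighbor_ids = index.kneighbors(new_patient)
malignant_fraction = diagnoses[neighbor_ids[0]].mean()
print(f"share of similar known cases that were malignant: {malignant_fraction:.2f}")
```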
In addition, a study recently published in Nature Medicine reported that a Google A.I. system detected 5% more lung cancers and cut false positives by 11%, and that it performed on par with radiologists when patients’ prior scans were included in the evaluation.
As A.I. pushes the envelope, however, it is becoming even more difficult for regulation to keep pace. Given that an A.I. system’s performance constantly changes based on exposure to new data, regulation would be “trying to hit a moving target,” according to Stat. Attempting to make sense of this new space, the FDA last month released a white paper describing the criteria the agency proposes to use to determine when medical products relying on A.I. will require FDA review before being commercialized. According to NPR, the agency’s new approach experiments with a system called precertification, which puts more emphasis on examining the process companies use to develop their products and less on examining each new tweak. The system also includes continued monitoring.
A.I. also poses new questions for malpractice law. As Shailin Thomas asks, writing for Harvard Law School’s Petrie-Flom Center, who should be responsible when a doctor provides erroneous care at the suggestion of an A.I. diagnostic tool? And how does one trace the tool’s mistake if it operates in a black box? If the A.I. has a higher accuracy rate than the doctor, as Google’s lung cancer detection system suggests may become the norm, it might be improper to blame the doctor; it would be difficult to argue that following the statistically better option was negligent. Like FDA regulation, traditional medical malpractice law might need reform.
A.I. could, however, help prevent malpractice lawsuits by automating drudgery and thereby improving accuracy. According to Price, physicians spend nearly half of their overall time on desk work and electronic health records. A.I. has the potential to automate some of this work, using natural language processing to identify relevant information from documents. These interventions could also return physician time to seeing patients and decrease physician burnout.
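As a toy illustration of that kind of drudgery, the snippet below pulls medication names and doses out of a free-text note with a hand-written pattern. Real clinical natural language processing relies on trained models rather than a regular expression; the note and the pattern here are invented for the example.

```python
# Toy sketch of the kind of documentation work NLP could automate: extracting
# structured facts (medications and doses) from free text. Not a clinical tool.
import re

note = ("Patient reports improved breathing. Continue albuterol 90 mcg inhaler "
        "as needed. Started lisinopril 10 mg daily for hypertension.")

# Hypothetical pattern: a lowercase drug-like word followed by a dose and unit.
pattern = re.compile(r"\b([a-z]+)\s+(\d+(?:\.\d+)?)\s*(mg|mcg|g|ml)\b")

for drug, amount, unit in pattern.findall(note):
    print(f"medication: {drug:<12} dose: {amount} {unit}")
# medication: albuterol    dose: 90 mcg
# medication: lisinopril   dose: 10 mg
```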
Eric Topol, cardiologist and founder and director of the Scripps Research Translational Institute, believes this “keyboard liberation” is one of the areas where A.I. in medicine shows the most promise: “What I’m most excited about is using the future to bring back the past: to restore the care in health care. By giving both the gift of time to clinicians, who are at peak levels ever recorded for burnout and depression, and empowerment to patients, for those who want it, this will ultimately be possible.”
Topol also agrees with Price that A.I. has the capacity to control some of health care’s costs by allocating resources more efficiently. According to Topol, “The No. 1 line item of health care cost in America is human resources, which has recently grown — as of December 2017 towering over retail — to be the leading job source for our economy.” By enhancing human capabilities, A.I. has the potential to improve productivity for doctors.
Price also believes A.I. can direct existing scarce resources to do the most good and save costs. He cites an example of a hospital using a resource-triage A.I. system to allocate inpatient beds when patient demand exceeds availability.
Price, however, warns of hospital administrators using the resource-triage A.I. to maximize revenue rather than effectiveness of care. This speaks to a larger fear of “adversarial attacks,” in which tiny manipulations of digital data can change the behavior of A.I. systems, as raised in a paper published in March in the journal Science. Doctors, hospitals and other organizations could manipulate A.I. in billing or insurance software to maximize revenue, exacerbating the health care industry’s issue of “bilking the system by subtly changing billing codes and other data in computer systems that track health care visits,” as The New York Times reports.
If an insurance company uses A.I. to evaluate medical scans, for example, a hospital could manipulate scans to boost profits. As these manipulations become codified in the patient record, they could inform future decisions and harm patients.
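A stripped-down example shows why such manipulations are plausible: for a simple classifier trained on synthetic data, a small, deliberately chosen nudge to an input is enough to flip the model’s decision. Real attacks on imaging or billing models are more sophisticated, but the principle is the same; the model and data below are assumptions for illustration only.

```python
# Minimal sketch of an adversarial perturbation: a tiny, targeted change to an
# input flips a classifier's decision. Synthetic data; not a real medical model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 20))
y = (X @ rng.normal(size=20) > 0).astype(int)   # synthetic labels
model = LogisticRegression().fit(X, y)

w, b = model.coef_[0], model.intercept_[0]

# Pick the correctly classified example closest to the decision boundary.
margins = (X @ w + b) * np.where(y == 1, 1, -1)
i = int(np.argmin(np.where(margins > 0, margins, np.inf)))
x = X[i]

# Nudge the input just far enough along the weight vector to cross the boundary.
delta = -1.1 * (x @ w + b) / (w @ w) * w
x_adv = x + delta

print("original prediction:", model.predict([x])[0])
print("perturbed prediction:", model.predict([x_adv])[0])
print("size of change (L2 norm):", round(float(np.linalg.norm(delta)), 4))
```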
A.I. does have great potential to do good when placed in the right hands. Google’s work detecting diabetic blindness and Massachusetts General’s implementation of the system diagnosing breast lesions are just a glimpse of what could come. It is, therefore, even more important to interrogate and mitigate its drawbacks.
Keep an eye out for our upcoming piece on responsible A.I. to round out our A.I. blog series.
Sources:
- https://www.nytimes.com/2019/03/10/technology/artificial-intelligence-eye-hospital-india.html?module=inline
- https://www.prnewswire.com/news-releases/arterys-receives-fda-clearance-for-the-first-zero-footprint-medical-imaging-analytics-cloud-software-with-deep-learning-for-cardiac-mri-300387880.html
- https://www.theatlantic.com/health/archive/2016/06/why-are-health-studies-so-white/487046/
- https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=4&lvlid=15
- https://www.nytimes.com/2019/01/31/opinion/ai-bias-healthcare.html?searchResultPosition=265
- https://balkin.blogspot.com/2018/10/four-roles-for-artificial-intelligence.html
- https://www.nytimes.com/2019/02/21/business/medical-technology-ai-tests.html?module=inline
- https://www.statnews.com/2019/05/20/googles-ai-improves-accuracy-of-lung-cancer-diagnosis-study-shows/
- https://www.statnews.com/2019/04/02/fda-new-rules-for-artificial-intelligence-in-medicine/
- https://www.npr.org/sections/health-shots/2019/04/14/711775543/how-can-we-be-sure-artificial-intelligence-is-safe-for-medical-use
- http://blog.petrieflom.law.harvard.edu/2017/01/26/artificial-intelligence-medical-malpractice-and-the-end-of-defensive-medicine/
- https://www.nytimes.com/2019/03/11/well/live/how-artificial-intelligence-could-transform-medicine.html?searchResultPosition=179
- https://www.nytimes.com/2019/03/21/science/health-medicine-artificial-intelligence.html
- https://www.regulations.gov/document?D=FDA-2019-N-1185-0001
The views and opinions expressed by the authors on this blog website and those providing comments are theirs alone, and do not reflect the opinions of Softheon. Please direct any questions or comments to research@softheon.com