Convened by Congress, Expert Panel Considers the Future of Artificial Intelligence in Health Care
June 13, 2021
UH Innovations | Summer 2021
Last spring, as Americans grappled with the early days of the coronavirus pandemic, an expert panel convened at the request of Congress assembled virtually to guide the future implementation and governance of artificial intelligence (AI) in health care.
The National Academy of Sciences, in collaboration with the U.S. Government Accountability Office (GAO), gathered a panel of experts to advise on the burgeoning field of AI and its potential to transform health care delivery and management. Included were leaders from academia, law, medicine, business, technology and governmental agencies. An extensive report outlining the panel's analysis and recommendations was published in November.1
One of the appointees was Maulik P. Purohit, MD, MPH, Associate Chief Medical Information Officer and Clinical Innovations Lead for Transformation at University Hospitals, and Clinical Assistant Professor, Case Western Reserve University School of Medicine. “Recognizing the ubiquitous role of AI in everyday life, how can we leverage its capabilities to create systems of care that add value for both patients and practitioners?” he says.
Simply put, AI comprises a range of computer processing capabilities completing tasks that have historically depended on human intelligence. Examples include:
- Expert system: Software programmed to emulate human decision-making based on databases of expert knowledge
- Machine learning: Systems that extract and interpret information from large data sets and act on it in real time, improving predictive ability as more data is processed
- Natural language processing: Computer applications that analyze speech, find patterns and make predictions to understand and communicate human language
- Neural network: Computer system modeled on the human brain and nervous system that processes complex data on multiple levels
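A minimal sketch can make the first category above concrete. In an expert system, expert knowledge is encoded directly as explicit rules; the function and thresholds below are purely illustrative, not clinical guidance.

```python
# Hypothetical expert-system sketch: hand-written rules stand in for a
# database of expert knowledge. All thresholds are illustrative only.
def triage(temp_f: float, heart_rate: int) -> str:
    """Return a triage category from two vital signs."""
    if temp_f >= 103.0 or heart_rate >= 130:
        return "urgent"
    if temp_f >= 100.4 or heart_rate >= 100:
        return "elevated"
    return "routine"

print(triage(98.6, 72))    # routine
print(triage(101.2, 88))   # elevated
```

In contrast to machine learning, nothing here is learned from data: the system is only as good as the rules its experts wrote down.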
“To be impactful, data need to be accessible, digestible and actionable,” Dr. Purohit says. “AI-enabled solutions automate the laborious and repetitive task of processing massive amounts of information. Raw data on its own is very difficult to use. However, if we can present refined data in a meaningful way to clinicians, then they can focus on how to use the data better.”
Growth in health care AI has exploded in recent years, paving the way for augmented patient care, streamlined workflows and reduced costs. In fact, according to industry estimates, AI has the potential to reduce health care costs by $150 billion annually by the year 2026.2 Paradoxically, as these computer applications are more widely deployed, they also have the potential to foster human connection by freeing practitioners of administrative burdens so that they can spend more time with their patients.
“If you automate time-consuming tasks like documenting notes or capturing vital signs, you empower professionals to work at the top of their license to provide transformational care,” says Dr. Purohit. “Also, with AI synthesizing patient data and flagging risks, physicians have another tool at their disposal to make informed decisions that personalize treatment to each individual.”
Currently, AI is showing promise to positively impact clinical as well as administrative tasks:
- Projecting length of stay, readmissions and mortality
- Guiding surgical procedures
- Monitoring patients and recommending treatments
- Facilitating appointment systems and reducing wait times
- Digitizing clinical notes
- Providing predictive modeling for hospital staffing needs
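The first item above, projecting length of stay, can be sketched as a toy predictive model: an ordinary least-squares line fit of stay length against a single feature. The data, and the choice of age as the feature, are made up for illustration; a real model would use many features and far more data.

```python
# Toy length-of-stay predictor (illustrative data only): fit a
# least-squares line of stay (days) against patient age.
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

ages = [30, 45, 60, 75]          # hypothetical training data
days = [2.0, 3.0, 4.0, 5.0]
slope, intercept = fit_line(ages, days)
predicted = slope * 50 + intercept   # predicted stay for a 50-year-old
```

As the machine-learning definition earlier suggests, refitting as new admissions arrive lets the predictions improve with more data.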
However, there are caveats to consider. In their report, the GAO committee members highlighted several challenges to widespread adoption of AI and developed policy options to ensure interdisciplinary collaboration and oversight to address key concerns, notably:
- Developers require access to tremendous amounts of high-quality data to train and test the algorithms
- Ensuring data quality in health care is difficult because of variability in how data is captured
- Scaling AI for widespread use is challenging because tools need to accept data across disparate healthcare systems
- There is risk of bias in data sampling, which can adversely impact diverse patient populations
- Privacy and security concerns are mounting as copious amounts of patient data are gathered to build and run AI applications
- Uncertainties over the legality of AI may slow implementation
Most importantly, policymakers need to ensure the safety of AI tools. “We are stewards of our patients, and they are trusting us with their care,” Dr. Purohit says. “We have an obligation to ensure that AI methods are rigorously tested, validated and monitored.”
For more information, please contact Dr. Purohit at Maulik.Purohit@UHhospitals.org.
1. Artificial intelligence in healthcare. United States Government Accountability Office, November 2020.
2. Artificial intelligence: healthcare's new nervous system. Accenture, 2017.