Artificial intelligence (AI), including generative AI, could touch virtually every industry, but the technology poses unique challenges for health care and life sciences. The use of AI in diagnostics, clinical decision-making, claims processing, or coverage decisions, for example, could lead to new risks related to patient care, safety, and discrimination.
During an April 30 webcast, four of Deloitte’s AI thought leaders outlined some of the key issues that are shaping the development and use of AI in health care and life sciences (see a replay of the presentation, Emerging artificial intelligence policy).
Instilling sustainable trust in AI could be a significant hurdle. Issues related to trust have historically slowed the adoption of new technologies, from the start of the industrial revolution in the 1700s to the computing revolution of the last century, explained Asif Dhar, M.D., US Life Sciences & Health Care leader, Deloitte Consulting LLP. Without trust, consumers, clinicians, and organizations are unlikely to realize the full value of generative AI solutions (see From code to cure, how generative AI can reshape the health frontier).
Bill Fera, M.D., principal, Deloitte Consulting LLP, agreed, noting that industries often underestimate the role trust plays in the acceptance of new technology (see Overcoming generative AI implementation blind spots in health care). Generative AI, he told attendees, holds the promise of deepening and restoring trust in health care, but it also has the potential to exacerbate mistrust and introduce new skepticism among consumers and other health care stakeholders. For example, if the data used to train AI models is biased or unbalanced, the information the models generate might not be reliable. The technology has also been shown to “hallucinate” and produce false information when it has not been trained on an appropriate data set or tuned for the context in which it is used.
Bill encouraged health care and life sciences organizations to establish a center of excellence with appropriate governance structures and trustworthy AI frameworks. Organizations could combine or augment traditional governance constructs (e.g., policy, accountability) with differential ones such as ethics review, bias testing, and surveillance. However, a recent Deloitte survey of industry executives found that only about 60% said they had developed an overall governance framework, and just 45% said they were prioritizing building trust with consumers so that consumers are willing to share their data and allow it to be used (see From Fax machines to GenAI, are health systems ready?).
Lawmakers, regulators are building AI rules
Establishing guardrails for the safe and appropriate use of AI is a priority for the White House, Congress, states, and multiple federal agencies,1 as well as for governments around the world.2 Policy and regulations can help stimulate the creation and adoption of AI frameworks and encourage trust. The technology, however, is still a couple of steps ahead of policymakers, according to Anne Phelps, the US Health Care Regulatory leader for Deloitte.
Last October, President Biden signed an executive order (EO) aimed at setting parameters for the use of AI across all industries, including a call for Congress to enact national privacy legislation.3 However, Anne stated that health care is somewhat unique given that the industry has been governed by a national privacy law—the Health Insurance Portability and Accountability Act (HIPAA)—for more than two decades. She explained that HIPAA provides a framework for when patient information can be used and when patient consent is needed.4 “As Congress debates a possible national privacy law, it will be interesting to see how well it builds off of the HIPAA framework,” she said. In addition, Anne discussed other critical policy issues, such as creating transparency for consumers about how AI tools are being used, removing bias from data, and defining levels of risk, including when human intervention should be required for issues related to patient safety and care.
The Department of Health and Human Services (HHS) recently finalized a rule that will require more transparency around AI and machine learning.5 The Centers for Medicare & Medicaid Services (CMS) has clarified that Medicare Advantage (MA) organizations can use AI and related technologies to assist in making coverage determinations, but such technologies cannot override standards related to medical necessity and other coverage determinations.6
HHS and CMS tend to be seen as the agencies with the most direct impact on health care and life sciences. But other agencies, including some that have not historically regulated health care, are beginning to exert enforcement authority over health information as it relates to AI. Here are a few examples:
The U.S. Food and Drug Administration (FDA): The agency issued draft guidance on AI and machine learning last year and continues to evaluate the use of AI across drugs, biologics, and medical devices.7 The number of regulatory submissions to the FDA that incorporate AI has increased significantly.8
The Office for Civil Rights (OCR): HHS’s OCR issued a Final Rule on April 26 to strengthen nondiscrimination protections, address biases in health technology, and protect patients when AI is used in health care. The rule clarifies that “nondiscrimination in health programs and activities continues to apply to the use of AI, clinical algorithms, predictive analytics, and other tools,” according to OCR.9
The Office of the National Coordinator for Health IT (ONC): In January, the ONC published a Final Rule that included requirements for AI and other predictive algorithms.10 Health IT developers are required to make information available on the development, evaluation, fairness, effectiveness, and ongoing monitoring of predictive decision-support technologies that interface with electronic health records. (A minimal sketch of what such disclosures could look like appears after this list.)
The Federal Trade Commission (FTC): The FTC has taken an active role in health care privacy and AI. The Commission has not yet issued formal rulemaking on AI but has published blog posts and guidance signaling potential future enforcement actions.11
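Returning to the ONC transparency requirement noted above, here is one way a health IT developer might record the kinds of disclosure attributes the rule describes (development, evaluation, fairness, and ongoing monitoring). This is a minimal sketch: the ModelCard structure, its field names, and the example values are illustrative assumptions, not the rule’s official schema.

```python
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Illustrative record of transparency attributes for a predictive
    decision-support tool. Field names are hypothetical and do not
    reflect the ONC rule's official schema."""
    name: str
    intended_use: str                    # development: purpose and care setting
    training_data_description: str       # development: data sources used
    evaluation_metrics: dict = field(default_factory=dict)  # e.g., {"AUROC": 0.87}
    fairness_assessment: str = ""        # how subgroup performance was checked
    monitoring_plan: str = ""            # ongoing post-deployment surveillance


# Hypothetical example of a completed disclosure record
card = ModelCard(
    name="readmission-risk-v2",
    intended_use="Flag adults at elevated 30-day readmission risk",
    training_data_description="De-identified EHR records, 2018-2022",
    evaluation_metrics={"AUROC": 0.87},
    fairness_assessment="AUROC compared across age, sex, and race subgroups",
    monitoring_plan="Quarterly drift and subgroup-performance review",
)
print(card)
```

Keeping these attributes in a structured, machine-readable form (rather than scattered documentation) would make it easier to publish them alongside the tool wherever it interfaces with an electronic health record.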
Anne noted that many federal agencies are trying to keep pace with the rapidly evolving technology by hiring technologists who can help them understand the deployment of AI at a variety of levels.12 Both the House and Senate have held hearings on the use of AI, and more legislation is likely to be introduced.13 In addition, at least 16 states have enacted AI-related laws.14 The European Union has also been developing legislation and frameworks.15 Asif agreed that laws and policies could help instill trust and help ensure AI continues to advance safely. But he noted that, unlike other regulated products and services, AI undergoes constant change and continuous improvement.
Mitigating biases and addressing health equity
Algorithms built on biased data sets could generate inaccurate predictions or perpetuate health inequities by age, ethnicity, gender, or race. Policymakers say they are focused on identifying and mitigating bias in underlying data sets and on protecting consumers from AI being used to perpetuate discrimination or health care inequities.
AI-enabled technologies are often built on data that is generated by humans and can therefore carry individual and systemic biases, explained Jay Bhatt, D.O., managing director of the Deloitte Health Equity Institute and the Deloitte Center for Health Solutions. While technology has advanced, health care data itself has remained largely the same. Training AI on biased or incomplete data related to age, ethnicity, gender, or race could lead to decisions with unintended consequences. Inequities in the US health system cost approximately $320 billion a year, an amount that could grow to $1 trillion by 2040 if left unaddressed (see US health can’t afford health inequities). While AI has the potential to make health care more equitable, eight out of 10 health equity leaders are not at the table when AI strategy is being developed, according to the results of a recent Deloitte survey (see our 2024 Outlook for Health Equity). That is likely to change if AI becomes a more integrated part of health care delivery, he said.
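As one illustration of the kind of bias testing described above, the sketch below compares a hypothetical model’s error rates across demographic subgroups; a wide gap can flag biased or incomplete training data before a tool reaches patients. The data, column names, and tolerance threshold are assumptions made for this example, not a clinical standard.

```python
import pandas as pd


def subgroup_error_rates(df: pd.DataFrame, group_col: str,
                         label_col: str = "outcome",
                         pred_col: str = "prediction") -> pd.Series:
    """Misclassification rate per subgroup (e.g., by age band or race).
    Column names are hypothetical; adapt them to the model's output schema."""
    errors = df[label_col] != df[pred_col]
    return errors.groupby(df[group_col]).mean()


# Toy data: a model that happens to perform worse for one subgroup
df = pd.DataFrame({
    "age_band":   ["<40", "<40", "40+", "40+", "40+", "<40"],
    "outcome":    [1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 0, 1, 0, 0],
})

rates = subgroup_error_rates(df, "age_band")
print(rates)  # a wide gap between groups warrants investigation

if rates.max() - rates.min() > 0.1:  # illustrative tolerance, not a standard
    print("Potential bias: review training data and model calibration")
```

A routine check like this is cheap to run at each retraining and could feed directly into the surveillance component of a governance framework, rather than waiting for disparities to surface in patient outcomes.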