By Charles Fisher, Sarah Benson-Konforty, and Rafael Rosengarten

Recently, the fervor over artificial intelligence has given way to expressions of fear of the unknown. The excitement over the democratization of technology has also given way to calls for regulation to “control this growing AI beast.” The Federal Trade Commission has even opened an investigation into OpenAI. We are just beginning an important worldwide societal debate about the untapped potential of AI and its risks.

But discussions about how to both integrate AI into society and regulate it are often missing voices from a crucial field: healthcare.  

AI’s potential applications in healthcare — such as helping create new, more effective drugs with fewer side effects; guiding physicians to optimal treatments for their patients; and assisting surgeons with robotic tools — could redefine our access to treatment, our understanding of diseases, and even our ability to create groundbreaking medicines. The technology promises to increase accessibility, improve quality, and reduce costs — all critically needed advancements in a country where healthcare costs are escalating and life expectancies are dropping.

Recently, the CEOs of key AI organizations like Alphabet, Microsoft, and OpenAI were invited to the White House to discuss the technology’s potential implications and the need for regulations, with a focus on generative AI technologies. Other AI experts testified before Congress on the same topic. According to reports, President Biden and Vice President Harris have been organizing meetings with technology industry stakeholders, many of whom are fierce critics (and sometimes veterans) of the technology industry. These meetings are a crucial step toward acknowledging the widespread influence of AI and educating policymakers about the many facets they will need to consider if AI is to come under regulatory scrutiny.

But where are the representatives and stakeholders of the healthcare sector in these conversations with policymakers? To the best of our knowledge, so far the only person with a stake in healthcare innovation in these high-level government meetings has been Professor Jennifer Doudna of the University of California, Berkeley, a Nobel laureate and co-discoverer of CRISPR technology. Doudna, who took part in a meeting with President Biden in San Francisco in June, possesses genuine bona fides in leading public dialogs on health tech ethics, particularly in human gene editing. She has also helped found drug discovery and diagnostic companies, all of which are undoubtedly adopting AI in various parts of their workflows. We applaud her inclusion in these meetings.

But one expert voice on the intersection of AI and healthcare simply isn’t enough. We need more. 

This isn’t the only way discussions about AI are overlooking healthcare. In June, Senate Majority Leader Chuck Schumer announced the SAFE Innovation framework to “support responsible systems in the areas of misinformation, bias, copyright, liability, and intellectual property.” But his introduction to this major policy framework did not mention healthcare, even though the sector is susceptible to every one of these concerns.

But at least there are signs that the House is considering it. Reps. Ted Lieu, a Democrat from California, and Ken Buck, a Republican from Colorado, are now cosponsoring a bill to create a blue-ribbon committee on artificial intelligence. Lieu told the Washington Post that AI “can be disruptive to society, from the arts to medicine to architecture to so many different fields.” (Emphasis ours.)

Both congressional initiatives would do all of us a service by including medicine and healthcare as a major focus area. Applications of AI in life science, healthcare, and medicine are truly life-critical, and they deserve more than to be swept into an overarching, general-purpose framework. By including more varied representatives and stakeholders from the sector, policymakers can better understand the considerations of AI that are most relevant in healthcare. This will help shape effective and responsible regulations that foster innovation while safeguarding patient well-being. One good place to start would be the Alliance for AI in Healthcare. (We are admittedly a little biased here: Two of us, Sarah and Rafael, are on the AAIH board of directors; all three of us work for companies that are members of the alliance.)

Healthcare deserves particular consideration because it presents a much broader spectrum of risks than most other uses of AI. While a hallucination by a consumer-facing chatbot may cause a student to get the wrong answer on their homework, an error by an AI used to diagnose or treat a disease could cause physical harm to a patient, even death. How should such systems be tested? And how should the risks associated with their use be communicated to doctors and patients? Even the training of AI systems presents different risks in healthcare. While some artists are rightfully upset about their artwork being used to train AI systems without their permission, that’s nothing compared with how patients will feel about companies training AIs on their private health information.

Fortunately, sound regulatory frameworks already exist for new medical technologies. In most cases, the FDA oversees these technologies and ensures they are safe and effective for their intended use. AI-based technologies designed to solve medical problems should fall under the same regulatory purview as traditionally discovered medicines, diagnostics, and devices. The principles of do-no-harm, and the goals of expanding access and improving healthcare outcomes, can be applied equally to AI-based healthcare advances. Increasingly, there are calls for the creation of new regulatory bodies to oversee general AI systems. If these bodies are given purview over AI applied to healthcare, the approach could create unnecessary complications, opening gray areas around jurisdiction and definitions. If a large model is primarily trained on healthcare data, should it be considered a general-purpose model? If a general-purpose model is applied to a healthcare problem, is it now a medical device? Who should regulate these systems: the FDA or a new AI regulatory agency?

Determining the regulatory scope of AI models trained on healthcare data and applied to medical problems requires careful consideration. Healthcare does not need a blunt-ax regulatory framework designed for general-purpose AI, but rather a concerted effort to educate stakeholders and to extend existing regulation to address the nuances of AI-based innovation. Instead of spending resources on creating new federal agencies, we should empower existing bodies like the FDA to regulate these new medical technologies effectively, and collaborate with organizations such as the AAIH on data standardization and sound policy. This approach would ensure the safety and efficacy of AI applications in healthcare without stifling innovation, ultimately benefiting patients.

It’s crucial that policymakers incorporate the voices of healthcare stakeholders into AI policy conversations to ensure that any new regulations support responsible innovation. While this certainly includes experts such as nurses and physicians and representatives from biotechnology, digital health, and pharmaceutical industries, it must also include the most important group of stakeholders in healthcare: patients themselves.

Charles Fisher, Ph.D., is CEO of Unlearn.ai.
Sarah Benson-Konforty, MD, is managing partner at 1010VC and advisor to Pepticom.
Rafael Rosengarten, Ph.D., is CEO of Genialis.