Promise and Peril: Artificial Intelligence in Health Care

Artificial Intelligence May Mean Big Changes for Health Care and Big Money for Corporations

The fragmented and profit-driven nature of the U.S. health care system already threatens the health of Americans and limits their access to necessary care. Adding artificial intelligence (AI), whether algorithmic or generative, without sufficient oversight and regulation increases the risks of harm significantly. AI is already in use in some medical systems and has some promising applications, but substantial risks remain, and the lack of sufficient oversight endangers patients and providers.

This report explores emerging AI-enabled technologies that companies are already promising will improve health care for patients, reduce burdens for providers, and bring down health care costs. We focus on three areas of growing AI use: AI-enabled programs intended to assist medical professionals and institutions with largely administrative tasks, AI-enabled technologies that are intended to augment the practice of medicine, and the growing number of AI-enabled mental health support programs. All of these technologies have potential risks, especially given the lax oversight regime that currently exists for their use. Among other things, we found:

  • Allegations of Medicare Advantage plans using AI to deny beneficiaries medically necessary services.
  • Concerns about whether AI-enabled technology will reduce or exacerbate racial and ethnic health disparities, and about how equitably providers and hospitals will be able to adopt AI, particularly in lower-resource and rural health settings.
  • Apprehension about the potential for companies to take advantage of the growing use of AI in health care to generate excessive revenue while putting profit ahead of patients.
  • Efforts to use AI to augment the practice of medicine, in both large and small ways, often with untested promises that could spell danger for patients.
  • Thousands of AI-enabled mental health apps (mobile applications available for personal use)—some targeting teens and marketed to school systems—operating in a legal gray area despite reports that some such applications have contributed to self-harm.
  • Potential uses of AI-enabled technology to assist providers and institutions with discrete administrative tasks.
  • Relevant regulations and recommendations for improving oversight of AI-enabled technologies in the health sector.

The rapid uptake of AI-enabled technology in health care is leading to important questions about what role the private sector, state governments, and federal agencies should play in overseeing the use of AI in health care while fostering innovation.[1] However, given the distressing uses of AI-enabled technology in health care already, including the use of AI by Medicare Advantage plans to deny beneficiaries medically necessary services, and significant worries about AI reinforcing or even worsening racial bias in health and health care, oversight cannot be left solely to the private sector.[2]

Finally, the costs of implementing AI across the health care system are likely to be significant, not to mention the broader energy costs.[3] Companies are already beginning to lobby Congress and the federal government on issues related to AI in health care, including how oversight will be conducted and how federally funded health programs will reimburse hospitals and providers for using AI-enabled technologies. Public Citizen recently released a report that identified how widespread lobbying already is on AI in general, with many of the companies cited having connections to the health care sector.[4]

I. AI in Health Care Has Significant Implications for Equity, Cost, and Corporate Profits

As corporations roll out AI across the health care system, questions are already emerging about how AI will affect racial and ethnic health disparities, how equitably providers and hospitals will be able to adopt AI, particularly in lower-resource or rural health settings, and the extent to which corporations will put profit ahead of patients.[5]

The Potential for AI to Exacerbate Racial and Ethnic Health Disparities Requires More Attention

The evidence so far on the impact of AI-enabled technologies on racial and ethnic health disparities has been mixed, highlighting the need to proceed with caution.[6] Most uses of AI in health care have significant implications for racial and ethnic disparities.[7] Implementing AI-enabled technologies in ways that reduce bias requires particular focus and targeted effort.

Bias can be introduced into AI-enabled technology through a number of avenues. One source is a lack of sufficient diversity in the data on which an AI is trained.[8] The AI may treat racial disparities in the data as part of the baseline or may draw inappropriate inferences from a limited number of cases. For example, training an AI-enabled technology on historical data that reflects biased differences in the care provided to white patients versus patients of color, without correcting for that bias, may lead the AI to reproduce those racial biases in its recommendations for care.[9] In addition, developers' own biases may make their way into AI-enabled technologies through the parameters they code into them.[10] These biases may not be initially evident to the providers or patients using the technology.
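
To make that mechanism concrete, the following is a minimal sketch in Python using entirely synthetic data; the group labels, the size of the access gap, and the flagging threshold are hypothetical illustrations, not figures from the studies cited above. It shows how a tool that ranks patients by a biased proxy (historical cost) rather than actual medical need can systematically under-identify high-need patients in the group that historically received less care.

```python
# Illustrative sketch with synthetic, hypothetical data: a model trained on a
# biased proxy label (historical cost instead of actual medical need) can
# reproduce disparities in its care recommendations.
import random

random.seed(0)

def simulate_patient(group: str) -> dict:
    """Two groups with identical underlying medical need, but historical
    access barriers mean group 'B' generated lower costs for the same need."""
    need = random.gauss(5.0, 1.5)                  # true severity (unobserved by the tool)
    access = 1.0 if group == "A" else 0.6          # assumed access gap (illustrative)
    cost = max(0.0, need * access + random.gauss(0, 0.5))  # biased proxy label
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

# "Train" the simplest possible model: flag patients for extra care when the
# proxy label (cost) is in the top quartile -- mirroring risk tools that rank
# patients by predicted cost rather than predicted need.
costs = sorted(p["cost"] for p in patients)
threshold = costs[int(0.75 * len(costs))]

for group in ("A", "B"):
    members = [p for p in patients if p["group"] == group]
    high_need = [p for p in members if p["need"] >= 6.5]
    flagged = sum(1 for p in members if p["cost"] >= threshold)
    caught = sum(1 for p in high_need if p["cost"] >= threshold)
    print(f"group {group}: {flagged} flagged for extra care; "
          f"{caught}/{len(high_need)} truly high-need patients identified")
# Despite identical need distributions, group B is flagged far less often --
# the model has learned the access gap, not medical need.
```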

There is also the possibility that an AI-enabled medical device, such as a device that diagnoses diseases based on scans or provides recommendations for treatment, particularly one using a large language model, may drift over time, with the potential to exacerbate disparities. Model specification, the process by which an AI is trained to recognize patterns or make decisions, is extremely important but also very complex. Any generative AI-enabled technology will therefore require oversight over the course of its use, including assessing how it may be affecting provider or patient behavior and tracking unintended consequences, to reduce the risk of exacerbating any biases it may have.[11]
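
As one illustration of what such ongoing oversight might look like, here is a minimal, hypothetical monitoring sketch in Python; the group labels, score distributions, and the 0.25 alert threshold are illustrative assumptions rather than features of any cited tool. It compares a model's recent score distribution, for each patient group separately, against a validation-time baseline using a population stability index, a common drift metric.

```python
# Hypothetical drift-monitoring sketch: compare a model's recent score
# distribution, per patient group, against its validation-time baseline.
import math
import random

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population stability index between two score samples
    (0 = identical distributions; ~0.25+ is commonly treated as major drift)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def fraction(sample, b):
        in_bin = sum(1 for s in sample
                     if min(int((s - lo) / width), bins - 1) == b)
        return max(in_bin / len(sample), 1e-6)   # avoid log(0)

    return sum((fraction(actual, b) - fraction(expected, b))
               * math.log(fraction(actual, b) / fraction(expected, b))
               for b in range(bins))

random.seed(1)
# Baseline scores captured at validation; recent scores after deployment,
# where simulated drift affects only group B (e.g., a documentation change).
baseline = {g: [random.gauss(0.50, 0.10) for _ in range(2000)] for g in ("A", "B")}
recent = {"A": [random.gauss(0.50, 0.10) for _ in range(2000)],
          "B": [random.gauss(0.62, 0.12) for _ in range(2000)]}

for group in ("A", "B"):
    score = psi(baseline[group], recent[group])
    status = "ALERT: human review needed" if score > 0.25 else "ok"
    print(f"group {group}: PSI = {score:.3f} -> {status}")
# Aggregate monitoring would dilute this signal; per-group tracking surfaces
# drift that hits one population harder -- the equity risk described above.
```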

Questions Remain About Potential Costs of AI in Health Care and Undue Corporate Profiteering

While many health systems are partnering with companies to implement AI-enabled technologies that are still in testing, the eventual cost of many such technologies is likely to be significant, at least initially.[12] In addition, reimbursement for the implementation of AI-enabled technologies, whether from government programs or private payers, is still being worked out, which will affect the ability of many institutions and providers to afford the technologies and provide adequate training for staff. Costs may also rise for patients if providers act on AI recommendations for a treatment plan that is not fully covered by their health insurance.[13]

Some doctors report optimism about the use of AI in health care, particularly for administrative tasks, even though uptake among practices has been limited so far.[14] However, even clinicians who are seeing some benefits from AI are raising questions about how to scale up its use and how payers will compensate providers for it, given that implementing AI-enabled technology and training staff can be expensive.[15] Early-adopting institutions are often contracting or partnering directly with AI developers to use their proprietary technology, so uptake across the health care industry currently varies widely.[16]

Even the early use of AI in health care underscores just how much profit dominates the U.S. health care system. Some companies have already been accused of including self-serving code in AI-enabled applications to increase revenue. For example, the Justice Department has subpoenaed at least three companies (GSK, AstraZeneca, and Merck) as it investigates whether AI embedded in patient records is being used to prompt doctors to recommend revenue-generating treatments that may not be medically necessary.[17]

Another potential danger with growing use of AI in health care is the role of venture capital and private equity firms.[18] Private equity’s involvement in health care can lead to shocking safety lapses, rising prices, and price gouging, among other negative consequences.[19] Private equity firms have been reported to push treatments and investments to drive up profits while neglecting patient care.[20] Given that AI startups in health care have received over $13 billion in venture capital funding over the last decade, it is understandable that the industry’s growing role has given some stakeholders pause.[21]

II. AI to Assist Medical Professionals and Institutions With Discrete Administrative Tasks Shows Early Promise, but Concerns Remain

Hospital systems and provider networks are using AI in attempts to improve administrative workflows, including scheduling staff, booking clinics, and managing operating rooms, and for some patient outreach, such as scheduling procedures, providing information, or conducting follow-up.[22]

The use of AI-enabled technologies to help with administrative tasks has potential benefits, though not without potential drawbacks.[23] The literature on the outcomes of such technologies is understandably limited, but that hasn't stopped companies and even trade groups from advocating to increase the technology's use, whether by pushing for the uptake of AI in various settings or by lobbying on regulations for overseeing AI in health care.[24]

Some health systems are using AI to reduce administrative burdens for staff. One example is an AI voice assistant that calls patients and advises them of potential preventive screenings. The goal is to see if patients are interested in a particular service; if so, the AI tool will sign them up for procedures, provide advice, and even start the process of sending any necessary testing equipment.[25] WellSpan Health cited the goal of reducing workforce burdens and noted that the AI could reach out to patients in a growing number of languages.[26] The company that developed the software implemented by WellSpan, Hippocratic AI, is linked with other AI-enabled technologies and has received funding from Nvidia to expand its work.[27]

However, it can be difficult to assess the accuracy of claims made about some of these technologies, as four Texas hospitals found out.[28] The Texas Attorney General has alleged that Pieces Technologies made misleading claims about the accuracy of its AI-enabled technology, including about the likelihood of severe hallucinations by its tool, which is marketed as using generative AI to provide providers with summaries of patients' conditions and potential treatments.[29] Generative AI systems are known to sometimes produce incorrect or misleading information, often referred to as AI hallucinations, which can mislead users of such technologies.[30]

Some hospitals and providers are identifying potential uses of AI-enabled technologies, including data management, workflow optimization, and resource allocation; clinical functions, such as diagnostics and treatment planning; and patient engagement.[31] However, each of these potential benefits requires significant investment to implement the necessary AI-enabled technology, sufficient testing to ensure safety and efficacy, and ongoing oversight to ensure the technology works as intended.

III. AI Intended to Augment the Practice of Medicine Is Already Raising Concerns

AI-enabled technologies that would change the way providers practice medicine are already being developed and deployed. Companies are launching AI-enabled programs to take medical notes for providers and summarize them, with the goal of freeing up their time and creating more accurate case notes; programs to analyze medical information, including diagnostic images, to help in the diagnosis of illness and disease; and programs to help develop treatment plans and to increase patient adherence to treatments.

Hospital systems are considering ways to change how they provide care and how health care providers practice medicine, including replacing nurses with AI-enabled technology under certain circumstances. Some of these efforts are being put forth as ways to cut costs and improve the monitoring of a patient, but these technologies threaten to sell patients and nurses short.

Early Pilots of AI to Improve Oversight of Operating Rooms Show Promise but Can Be Controversial

One potential area where AI-enabled technology is already finding growing use is in the operating room.[32] Proponents and companies marketing relevant AI-enabled technologies claim that AI can assist providers ahead of surgery, during surgical procedures, and can help provide feedback, including areas for improvement, after surgeries are complete.[33] However, these potential benefits are not without controversy.[34]

The use of AI-enabled technology to assist surgeons has been presented as a way to improve the safety and accuracy of surgeries.[35] The goal of the technology is to record surgeries, via microphones and cameras, and, through the use of AI, analyze where improvements could be made in future procedures. The AI assesses how well protocols for a given procedure are followed and provides feedback on what went well and what went wrong. Unsurprisingly, some institutions have reported pushback on such technologies, including sabotage by some providers, such as unplugging cameras or turning them to face away from the patient.[36] Presumably, these providers are concerned about enhanced oversight of their surgeries and potential consequences for any mistakes.

AI Threatens to Supplant Nurses in Concerning Ways

When it comes to using AI-enabled applications to replace nursing, there is not a lot of evidence of benefits, but there are significant potential problems. Replacing nurses with AI-enabled devices, such as giving patients an iPad that tracks their vitals instead of regular visits from a nurse, is being considered as a way of potentially lowering the cost of care. Such cuts are likely to come at the expense of patient safety and wellbeing. Appropriate nurse-to-patient ratios actually reduce costs compared with understaffing, one of the dangers of overrelying on AI-enabled tools instead of hiring a sufficient number of nurses.[37] Much of the previous research on AI-based technologies for nursing has focused on AI development rather than implementation.[38] Further, the potential benefits of the technology have largely gone untested, and where testing has been done, such technologies have not worked as intended.[39]

Some companies are already working on AI-enabled technology that appears intended to undercut nurses, and some even claim that their technology can outperform human nurses.[40] Nvidia, which is collaborating with Hippocratic AI, claims that its technology can provide many nursing services at a fraction of the cost of an actual nurse.[41] Such marketing talking points are being countered by nurses themselves, who are expressing apprehension about the use of AI in health care, particularly when it comes to replacing nurses, arguing that—because of the complex nature of nursing and the importance of human experience and insight—AI cannot serve patients in the ways that nurses can.[42]

There are also significant ethical implications of the use of such technologies to replace nurses. One study, which both reviewed recent literature and explored potential approaches for integrating AI into nursing, found that for AI to be used ethically to augment nursing it must be in line with the central tenets of nursing, must not attempt to undertake roles that should be done by humans, and should be used to improve the experience of nurses instead of undermining them.[43]

A recent survey, with responses from around 7,200 nurses practicing in the U.S. collected between May and October 2023, found that nurses see a need for caution regarding the implementation of AI in health care.[44] The top negative sentiments among nurses surveyed included a lack of trust in the accuracy of what AI-enabled technologies were generating, the loss of human interaction, limited understanding of how to use the technologies, concerns about patient safety and data privacy, the elimination of nursing jobs, and fears that AI-enabled technologies would exacerbate existing biases in the health care system.[45]

AI Provider Aids Are Already Raising Red Flags

Some hospitals and providers are experimenting with different AI technologies, including generative AI that creates notes for providers based on patient visits.[46] Given the potential for AI hallucinations in outputs, a drawback of generative AI with potentially significant implications for patient care, it is understandable that many providers and institutions are proceeding cautiously.[47]

There is also the potential for overreliance on AI, which may leave providers less well prepared for clinical decision making. There is already some evidence that providers are offloading some of their work to AI in ways that may hinder their attention to patient care.[48] Application designers report having to add friction to slow down provider approval of AI-generated products (e.g., requiring providers to at least scroll to the bottom of an AI-generated form before approving it).
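
As a sketch of that friction pattern, the following hypothetical Python example (the class and section names are invented for illustration; it does not describe any specific vendor's implementation) shows an approval gate that refuses a clinician's sign-off until every section of an AI-generated draft has at least been opened.

```python
# Hypothetical sketch of an approval gate for AI-generated drafts: sign-off is
# blocked until every section of the draft has been marked as viewed.
from dataclasses import dataclass, field

@dataclass
class DraftReview:
    sections: list          # section names of the AI-generated draft
    viewed: set = field(default_factory=set)

    def view(self, index: int) -> str:
        """Record that the clinician opened a section, and return its name."""
        self.viewed.add(index)
        return self.sections[index]

    def approve(self) -> bool:
        """Allow approval only once every section has been opened."""
        unseen = [i for i in range(len(self.sections)) if i not in self.viewed]
        if unseen:
            raise PermissionError(
                f"Cannot approve: sections {unseen} of the AI draft were never opened.")
        return True

review = DraftReview(sections=["history", "assessment", "plan"])
review.view(0)
review.view(1)
try:
    review.approve()          # blocked: the 'plan' section was never read
except PermissionError as err:
    print(err)
review.view(2)
print("approved:", review.approve())
```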

Some patient interactions with AI-enabled tools are generating incorrect information. One system, MyChart, reported to be in use by more than 150 health systems and around 15,000 doctors, has already been found to provide incorrect information or make dangerous errors in answering patients' questions.[49] For example, one physician reported that the AI generated a draft incorrectly indicating that a patient was up to date on a particular vaccination—even including made-up vaccination dates—when the AI had no access to the patient's vaccination records.[50]

IV. Growing Number of AI Mental Health Support Programs Present Risks for Patients Seeking Mental Health Care

Many patients are accessing mental health care via AI-enabled chat therapy apps, often at a lower price (many are free, at least initially) and with fewer barriers to entry than an actual mental health provider. However, even though it is challenging for many Americans to get the mental health care they need, it is hard to claim that the currently available AI-enabled chat therapy is a sufficient substitute. While there may be a role for some AI-enabled mental health support, such technologies would require significant testing and regulation to help allay legitimate fears about their use.

The companies developing and marketing these AI-enabled chatbots focus on how cheap, accessible, and easy to use these applications are. They emphasize that while they may not be the same thing as communicating with an actual therapist, they can provide some relief to people experiencing a variety of mental health issues.

Most of these AI-enabled applications exist in a legal gray area where, as long as they don't explicitly claim to diagnose or treat patients' medical conditions, they are likely not regulated by the Food and Drug Administration (FDA).[51]

There are currently thousands of AI mental health applications, and many are targeted at specific populations, including young adults or seniors.[52] Companies are approaching states and school districts to offer the services of their chatbots, in some cases asking states to provide immunity in the case of adverse events associated with the use of the chatbot.[53]

Quickly instituting guardrails is important because the use of such applications is likely to grow significantly. While mental health chatbots are relatively new, a 2021 study found that more than 1 in 5 adults reported having used a mental health chatbot and that nearly half of adults surveyed said they would be interested in using one.[54]

One potential problem is that patients, particularly young adults or patients with limited means, may use a mental health chatbot to replace mental health services provided by a human. Social stigma around sharing mental health issues with a professional may reinforce this tendency, and users may become even less comfortable sharing their mental health concerns with an actual human provider.[55] In addition, a poor experience with an AI-enabled therapy chatbot may sour a patient on other potential mental health interventions.[56]

The ubiquitous nature of chatbots (they are generally available at any time) may lead to overattachment to the detriment of real-life relationships.[57] Some research has already found that some users, particularly those experiencing emotional problems, can become emotionally reliant on chatbots.[58]

AI-enabled mental health applications may simply be unable to effectively take on a task as complex as providing mental health support, given limitations of AI beyond performing well-delineated and clearly constrained tasks.[59] There are also concerns about AI companies experimenting on people in vulnerable positions without offering them sufficient warnings of potential adverse experiences.[60]

In an example of a particularly acute problem, many chat therapy bots have been unable to identify when a patient is in crisis.[61] This is compounded by one study's finding that potential users of AI-enabled mental health programs are more susceptible to crises than the general population.[62]

In just their brief existence, some AI-enabled chatbot applications have been blamed for creating dangerous situations, including reports of contributing to multiple instances of self-harm and even to someone taking their own life.[63] While not all AI-enabled chatbots are marketed as mental health applications, they may experience hallucinations similar to those of other generative AI-enabled technologies.[64]

V. AI in Health Care Needs Significant Additional Oversight and Regulation

The regulatory environment for AI in health care remains nascent, and much work is needed to bring it up to speed. While some companies are banding together in hopes of providing guardrails through private sector collaboration, such standards cannot be the primary way that AI in health care is regulated.[65] Generative AI-enabled medical devices require particular focus: even though these systems evolve over time, companies currently do not have to re-apply for FDA clearance as the underlying large language models (or similar technologies) change.[66] Special scrutiny for AI-enabled devices that use large language models, including consumer health-related tools and applications, should take one of two forms: either presumptively designating these devices as Class III (high-risk) devices requiring pre-market FDA approval for safety and efficacy, including compliance with the Department of Health and Human Services (HHS) standards for trustworthy artificial intelligence, or establishing a new and more stringent pre-market approval system for such AI-enabled devices that are not designated as Class III.

In addition, significant privacy protections are necessary, as patients may divulge highly sensitive or protected information to AI-enabled technologies, which may be subject to hacking or improper disclosure, a danger highlighted by recent cyberattacks on health care institutions.[67] Many of the relevant government regulations focus on technologies of the past and do not sufficiently address the booming AI field.[68]

One of the key steps undertaken by the Biden administration in response to the growth of AI in general was the issuance of the Executive Order on Artificial Intelligence, which included health-related directives for relevant federal agencies. HHS was charged with much of the coordination role across agencies through the Office of the National Coordinator (ONC). In addition, HHS has begun to reorganize internally to add bandwidth for oversight of AI and health data more generally.[69] Public Citizen previously provided a series of recommendations for HHS to consider as it implements the executive order's requirements.[70] With much of the implementation now falling to the incoming Trump administration, we can anticipate an emphasis on spurring AI innovation over regulation, potentially at the expense of patients and providers.[71]

Further, we published a list of policy recommendations for Congress and relevant agencies for regulating and overseeing AI-enabled health technologies:[72]

  • Require disclosure and transparency when AI is used: If AI is being used in a health care process, it should be disclosed to patients and providers in a clear and understandable way.
  • Enact guardrails to protect patients: Whenever an AI system is used to make health decisions that may have an impact on a patient, the patient and their physician should have the right to an understandable explanation of the decision, the right to request human review, and the right to have the decision appealed to a human.
  • Guarantee privacy protections: Companies and regulators must maintain patient privacy requirements around the development, testing, and ongoing evaluation of AI in health care.
  • Prevent discrimination and reduce bias: AI databases used for training generative AI systems must be reflective of the patient populations they are intended to serve. AI algorithms should be focused on improving equity instead of just reproducing current patterns and biases in our health care system. Because AI systems are susceptible to bias from the data they are trained on, it is important that federal agencies exercise care in their use of training data and continuously monitor AI systems, including seeking as much information as possible about inputs into generative AI systems.
  • Ensure consumer protections: Patients should not have to sign away—including via forced arbitration clauses in contracts—their private right of action, individually or on a class basis, to seek compensation for harms caused or compounded by AI in health care.
  • Implement data minimization: Regulators should require AI tools to collect only task-necessary data and delete it promptly after use.
  • Improve consistency of AI use: HHS and CMS should develop meaningful use standards for AI in Medicare, Medicaid, and other health care programs to protect patients, help providers and institutions better use AI, and improve the opportunities for oversight and accountability. This must also include technical assistance, particularly for low-resource settings, to ensure that uptake of AI in health care is as equitable as possible.
  • Ensure accountability for bad actors: There must be clear procedures for the suspension and debarment of companies found violating an agency’s rules and requirements on AI. Companies found to have knowingly concealed harms or significant potential harms should face felony criminal prosecution for the company as well as responsible top-level corporate officers.
  • No immunity provisions for use of AI: Health care providers must be liable for harms caused by their use of and reliance on AI tools, with no special “AI immunity.” Similarly, companies that provide AI health care tools must also be held liable. The allocation of liability between providers and AI companies should be worked out on a case-by-case basis, but never at the expense of injured patients.
  • Require special scrutiny for health-related generative AI tools: Either all consumer health-related AI tools and apps should be designated presumptively as Class III devices requiring pre-market FDA approval for safety and efficacy, including compliance with the Department of Health and Human Services standards for trustworthy AI, or a new and more stringent pre-market approval system should be created. Generative AI consumer health tools should be required to be tested and approved before being deployed.
  • Special attention for AI therapeutic tools: Chatbots and generative AI tools that claim or imply therapeutic benefit require special attention. Users must always understand that they are engaging with AI, not a person. General purpose AI tools must state clearly that they do not provide therapy. Privacy protections and FDA approval standards should be especially stringent for therapeutic AI tools.

Congressional action on AI in general, and on health care specifically, has been tentative, given the developing landscape for such products. Members of Congress are already considering potential approaches, but consensus has yet to emerge.[73] Lobbying on AI generally is already growing rapidly, as our recent report found.[74] Key issues in health care include how the use of AI in health care settings will be reimbursed by Medicare, Medicaid, and other federal programs.[75] Some of these technologies can be very expensive, and both health systems and AI technology companies are already lobbying for significant reimbursement.

Until there is a better understanding of the potential benefits and risks of AI in health care, it is important to proceed cautiously and to ensure adequate oversight of all relevant technologies. As with any new technology, it behooves us to put those most likely to experience harms, in this case patients and providers, first as we decide how rapidly to pursue innovation. And given the dangerous role of greed in our health care system, we must ensure that companies are not cutting corners and putting patients at risk of harm.

Sources

[1]Melissa Suran and Yulin Hswen, How Do Policymakers Regulate AI and Accommodate Innovation in Research and Medicine?, 331 JAMA 185, 185-187 (2024).

[2]Lyla Saxena, Center for Medicare Advocacy, The Role of AI-Powered Decision-Making Technology in Medicare Coverage Determinations 1 (January 2022), http://bit.ly/3YwzhhQ.

Casey Ross and Bob Herman, Stop Allowing MA Plans to Use AI to Deny Care Without Review, Lawmakers Urge CMS, Stat (June 25, 2024), https://bit.ly/4hBZnsy.

Ziad Obermeyer et al., Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations, 366 Science 447, 447-453 (2019).

[3]James Vincent, How Much Electricity Does AI Consume?, The Verge (February 16, 2024), https://bit.ly/4eHoZ4u.

[4]Luyi Cheng and Mike Tanglis, Public Citizen, Artificial Intelligence Lobbyists Descend on Washington DC 12 (May 2024), https://bit.ly/3ChqDw9.

[5]Miles Meline, Health Care Algorithms Can Improve or Worsen Racial and Ethnic Disparities, Penn LDI (April 24, 2024), https://bit.ly/48I4zH5.

Laura Dyrda, Is AI Helping or Hurting Rural Healthcare?, Becker’s Hospital Review (September 25, 2024), https://bit.ly/3NXsR6w.

Ritu Agarwal and Guodong Gao, Toward an “Equitable” Assimilation of Artificial Intelligence and Machine Learning into Our Health Care System, 85 North Carolina Medical Journal 246, 246-249 (2024).

[6]Miles Meline, Health Care Algorithms Can Improve or Worsen Racial and Ethnic Disparities, Penn LDI (April 24, 2024), https://bit.ly/48I4zH5.

[7]Irene Dankwa-Mullan, Health Equity and Ethical Considerations in Using Artificial Intelligence in Public Health and Medicine, 21 Preventing Chronic Disease 1, 1-6 (2024).

[8]Caleb J. Colón-Rodríguez, Shedding Light on Healthcare Algorithmic and Artificial Intelligence Bias, HHS OMH News (July 12, 2023), https://bit.ly/48JFXxD.

[9]Ryan Levi, AI in Medicine Needs to Be Carefully Deployed to Counter Bias – and Not Entrench It, National Public Radio (June 6, 2023), https://bit.ly/3Cz0oRW.

[10]Caleb J. Colón-Rodríguez, Shedding Light on Healthcare Algorithmic and Artificial Intelligence Bias, HHS OMH News (July 12, 2023), https://bit.ly/48JFXxD.

[11]Food and Drug Administration, Total Product Lifecycle Considerations for Generative AI-Enabled Devices 7 (2024), https://bit.ly/3Co0jAM.

[12]Andrei Kasyanau, Balancing The Cost Of AI In Healthcare: Future Savings Vs. Current Spending, Forbes (April 17, 2024), https://bit.ly/3NZQknz.

[13]Sneha S. Jain et al., Avoiding Financial Toxicity for Patients from Clinicians’ Use of AI, 391 NEJM 1171, 1171-1173 (2024).

[14]Andis Robeznieks, Big Majority of Doctors See Upsides to Using Health Care AI, AMA News (January 12, 2024), https://bit.ly/3CiJp6B.

[15]Id.

[16]Naomi Diaz, Why a Health System is Partnering With OpenAI, Becker’s Hospital Review (September 23, 2024), https://bit.ly/3UESrRG.

Hospital System Embraces Artificial Intelligence, MCG Health (viewed on November 8, 2024), https://bit.ly/3UIK2fW.

[17]Ben Penn, DOJ’s Healthcare Probes of AI Tools Rooted in Purdue Pharma Case, Bloomberg Law (January 29, 2024), https://bit.ly/4ekJZhh.

[18]Healthcare IT Spending: Innovation, Integration, and AI, Bain & Company (viewed on November 8, 2024), https://bit.ly/3CinPyW.

Heather Landi, 2024 Shaping Up to Be a Big Year for Healthcare AI Companies. But Some Investors Remain Cautious, Fierce Healthcare (June 12, 2024), https://bit.ly/4fiex4v.

[19]Eagan Kemp, Public Citizen, Private Equity’s Path of Destruction in Health Care Continues to Spread 4 (March 2023), https://bit.ly/3vrMe26.

[20]Id.

[21]Gabriel Perna, Where AI in Healthcare is Receiving Venture Capital Investment, Modern Healthcare (September 10, 2024), https://bit.ly/4emEJd6.

Press Release, Public Citizen, Action on Predatory Private Equity in Health Care ‘Needed, Stat’ Says Public Citizen (March 22, 2023), https://bit.ly/3NXQnAr.

Press Release, Rep. Pramila Jayapal, Jayapal Introduces Legislation to Protect Seniors in Nursing Homes from Corporate Greed (March 1, 2022), https://bit.ly/40zjzoQ.

[22]Shiva Maleki Varnosfaderani and Mohamad Forouzanfar, The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century, 11 Bioengineering 337, 337-341 (2024).

Giles Bruce, The ‘long-term vision’ of AI at Mass General Brigham, Becker’s Hospital Review (September 19, 2024), https://bit.ly/3AzSHdK.

[23]Shiva Maleki Varnosfaderani and Mohamad Forouzanfar, The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century, 11 Bioengineering 337, 337-341 (2024).

[24]Ben Leonard and Chelsea Cirruzzo, Health Care Groups: Regulate AI, but Not Too Much, Politico (May 15, 2024), https://bit.ly/3ACaOjb.

Artificial Intelligence (AI), American Hospital Association (viewed on November 8, 2024), https://bit.ly/48H8ynj.

About Us, Alliance for Artificial Intelligence in Healthcare (viewed on November 8, 2024), https://bit.ly/40DXFko.

Our Purpose, Coalition for Health AI (viewed on November 8, 2024), https://bit.ly/3Z8R8Nz.

[25]Giles Bruce, Why a Health System is Launching a Generative AI ‘Agent,’ Becker’s Hospital Review (September 26, 2024), https://bit.ly/3Aoq5UT.

[26]Id.

[27]Id.

Heather Landi, Nvidia’s Venture Arm Backs $17M Investment in Hippocratic AI to Build Out Generative AI Healthcare Agents, Fierce Healthcare (September 19, 2024), https://bit.ly/3YHCriR.

[28]Naomi Diaz, Company resolves AI ad dispute with Texas AG, Becker’s Hospital Review (September 19, 2024), https://bit.ly/3YZvxqo.

[29]Id.

[30]Lisa Lacy, Hallucinations: Why AI Makes Stuff Up, and What’s Being Done About It, CNET (April 1, 2024), https://bit.ly/48GwWFG.

[31]Shefali Bhagat and Deepika Kanyal, Navigating the Future: The Transformative Impact of Artificial Intelligence on Hospital Management – A Comprehensive Review, 16 Cureus 1, 1-8 (2024).

[32]Jim McCartney, AI is Poised to “Revolutionize” Surgery, American College of Surgeons Bulletin (June 7, 2023), https://bit.ly/3YFsp1B.

[33]Simar Bajaj, This AI-Powered “Black Box” Could Make Surgery Safer, MIT Technology Review (June 7, 2024), https://bit.ly/3YVDdKm.

[34]Daniel Hashimoto et al., Artificial Intelligence in Surgery: Promises and Perils, 268 Annals of Surgery 70, 70-74 (2018).

Simar Bajaj, This AI-Powered “Black Box” Could Make Surgery Safer, MIT Technology Review (June 7, 2024), https://bit.ly/3YVDdKm.

[35]Daniel Hashimoto et al., Artificial Intelligence in Surgery: Promises and Perils, 268 Annals of Surgery 70, 70-74 (2018).

[36]Simar Bajaj, This AI-Powered “Black Box” Could Make Surgery Safer, MIT Technology Review (June 7, 2024), https://bit.ly/3YVDdKm.

[37]Karen Lasater et al., Patient Outcomes and Cost Savings Associated With Hospital Safe Nurse Legislation: An Observational Study, 11 BMJ Open 1, 1-6 (2021).

[38]Hanna von Gerich et al., Artificial Intelligence-based Technologies in Nursing: A Scoping Literature Review of the Evidence, 127 International Journal of Nursing Studies 1, 1-17 (2022).

[39]Id.

Andrew Wong et al., External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients, 181 JAMA Internal Medicine 1065, 1065-1068 (2021).

Qianqian Luo et al., External Validation of a Prediction Tool to Estimate the Risk of Human Immunodeficiency Virus Infection Amongst Men Who Have Sex With Men, 98 Medicine 1, 1-6 (2019).

[40]Maxwell Zeff, Nvidia Wants to Replace Nurses With AI for $9 an Hour, Gizmodo (March 19, 2024), https://bit.ly/3YZ82Oz.

[41]Id.

[42]Emma Beavins, National Nurses United Pushes Back Against Deployment of ‘Unproven’ AI in Healthcare, Fierce Healthcare (June 3, 2024), https://bit.ly/3O0zzbV.

[43]Felicia Stokes and Amitabha Palmer, Artificial Intelligence and Robotics in Nursing: Ethics of Caring as a Guide to Dividing Tasks Between AI and Humans, 21 Nursing Philosophy 1, 1-4 (2020).

[44]The Pulse of Nurses’ Perspectives on AI in Healthcare Delivery, McKinsey & Company (viewed on November 8, 2024), https://bit.ly/3ClZLLw.

[45]Id.

[46]Giles Bruce, 10% of Mass General Brigham Physicians Use Generative AI, Becker’s Hospital Review (May 9, 2024), https://bit.ly/3Z4KBlN.

[47]Steve Lohr, A.I. May Someday Work Medical Miracles. For Now, It Helps Do Paperwork., New York Times (June 26, 2023), https://bit.ly/4fIDbLC.

[48]Brett Hryciw et al., Doctor-patient Interactions in the Age of AI: Navigating Innovation and Expertise, 10 Frontiers in Medicine 1, 1-3 (2023).

[49]Teddy Rosenbluth, That Message From Your Doctor? It May Have Been Drafted by A.I., New York Times (September 24, 2024), https://bit.ly/3Z1y0AQ.

[50]Teddy Rosenbluth, That Message From Your Doctor? It May Have Been Drafted by A.I., New York Times (September 24, 2024), https://bit.ly/3Z1y0AQ.

[51]Matthew Perrone, Ready or Not, AI Chatbots are Here to Help With Gen Z’s Mental Health Struggles, Associated Press (March 23, 2024), https://bit.ly/40IBJVo.

[52]Thousands of Mental Health Apps Available: Supporting Evidence Not So Plentiful, American Psychiatric Association (viewed on November 8, 2024), https://bit.ly/48HZBu9.

Matthew Perrone, Ready or Not, AI Chatbots are Here to Help With Gen Z’s Mental Health Struggles, Associated Press (March 23, 2024), https://bit.ly/40IBJVo.

Jessica Schreifels, Young Utahns Struggle with Their Mental Health. Is a New A.I. Chatbot the Answer?, The Salt Lake Tribune (September 23, 2024), https://bit.ly/3CpraML.

Terry Spencer, Chatty Robot Helps Seniors Fight Loneliness Through AI Companionship, Associated Press (December 22, 2023), https://bit.ly/4hK1fiT.

[53]Jessica Schreifels, Young Utahns Struggle with Their Mental Health. Is a New A.I. Chatbot the Answer?, The Salt Lake Tribune (September 23, 2024), https://bit.ly/3CpraML.

[54]M.D. Romael Haque and Sabirat Rubya, An Overview of Chatbot-Based Mobile Mental Health Apps: Insights From App Description and User Reviews, 11 JMIR mHealth and uHealth 1, 1-13 (2023).

[55]Id.

[56]Yuki Noguchi, Therapy by Chatbot? The Promise and Challenges in Using AI for Mental Health, National Public Radio (January 19, 2023), https://bit.ly/3CkJl69.

[57]M.D. Romael Haque and Sabirat Rubya, An Overview of Chatbot-Based Mobile Mental Health Apps: Insights From App Description and User Reviews, 11 JMIR mHealth and uHealth 1, 1-13 (2023).

[58]Shunsen Huang et al., AI Technology Panic—is AI Dependence Bad for Mental Health? A Cross-Lagged Panel Model and the Mediating Roles of Motivations for AI Use Among Adolescents, 17 Psychology Research and Behavior Management 1087, 1087-1100 (2024).

[59]J.P. Grodniewicz and Mateusz Hohol, Waiting for a Digital Therapist: Three Challenges on the Path to Psychotherapy Delivered by Artificial Intelligence, 14 Frontiers in Psychiatry 1, 1-8 (2023).

[60]Thomas Germain, A Mental Health App Tested ChatGPT on Its Users. The Founder Said Backlash Was Just a Misunderstanding, Gizmodo (January 9, 2023), https://bit.ly/4envs4p.

[61]M.D. Romael Haque and Sabirat Rubya, An Overview of Chatbot-Based Mobile Mental Health Apps: Insights From App Description and User Reviews, 11 JMIR mHealth and uHealth 1, 1-13 (2023).

[62]M.D. Romael Haque and Sabirat Rubya, “For an App Supposed to Make Its Users Feel Better, It Sure is a Joke” – An Analysis of User Reviews of Mobile Mental Health Applications, 421 Proceedings of the ACM on Human-Computer Interaction 1, 1-27 (2022).

[63]Chloe Xiang, ‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says, Vice (March 30, 2023), https://bit.ly/4fCDxmI.

[64]Steve Lohr, A.I. May Someday Work Medical Miracles. For Now, It Helps Do Paperwork., New York Times (June 26, 2023), https://bit.ly/4fIDbLC.

[65]Nigam Shah et al., A Nationwide Network of Health AI Assurance Laboratories, 331 JAMA 245, 245-247 (2023).

[66]Ben Leonard and Chelsea Cirruzzo, Beyond Science Fiction: AI Meets Health Care, Politico (February 5, 2024), https://bit.ly/3Z2jkBq.

[67]Daniel Gilbert, Health System to Pay $65 Million After Hackers Leaked Nude Patient Photos, Washington Post (September 22, 2024), https://bit.ly/40Gj2kS.

[68]Sandeep Reddy, Navigating the AI Revolution: The Case for Precise Regulation in Health Care, 25 Journal of Medical Internet Research 1, 1-4 (2023).

[69]Press Release, U.S. Department of Health and Human Services, HHS Reorganizes Technology, Cybersecurity, Data, and Artificial Intelligence Strategy and Policy Functions (July 25, 2024), https://bit.ly/3Z0aiEZ.

[70]Press Release, Public Citizen, Public Citizen Urges HHS to Create Guardrails for Use of AI in Health Care (April 18, 2024), https://bit.ly/3O3omax.

[71]Emil Sayegh, Decoding Trump’s Tech and AI Agenda: Innovation And Policy Impacts, Forbes (November 18, 2024), https://bit.ly/40Q0wXl.

[72]Public Citizen, Regulating AI in Health Care 1 (April 2024), https://bit.ly/3O4nEK5.

[73]Heather Landi, As AI Adoption in Healthcare Grows, Senate Lawmakers Weigh Regulation, Payment Approaches, Fierce Healthcare (February 12, 2024), https://bit.ly/3UMyMit.

[74]Luyi Cheng and Mike Tanglis, Public Citizen, Artificial Intelligence Lobbyists Descend on Washington DC 12 (May 2024), https://bit.ly/3ChqDw9.

[75]Heather Landi, As AI Adoption in Healthcare Grows, Senate Lawmakers Weigh Regulation, Payment Approaches, Fierce Healthcare (February 12, 2024), https://bit.ly/3UMyMit.