Testimony to FDA’s Digital Health Advisory Committee regarding generative artificial intelligence (AI)-enabled digital mental health medical devices
By Michael T. Abrams, M.P.H., Ph.D.
I am Michael Abrams, a senior health researcher with the nonprofit consumer advocacy organization Public Citizen. I have no financial conflicts of interest related to today’s topic.
Mental health devices using generative artificial intelligence (AI)-enabled technology portend a distinctive spectrum of outcomes, ranging from agony to recovery. When they work, they may save time and reduce physical and emotional “pain” for both patients and mental health professionals. Otherwise, such devices may drain the pockets, autonomy, and spirit of people who, because of complex brain-based illnesses, are in especially vulnerable states. The behavioral health domain of medicine strives to maintain or restore physiological function that is essential to our humanity. Before generative AI, we had already accumulated a frustrating, though often successful, record of addressing major psychiatric illnesses, including substance use disorders. To assume we can advance that record with generative AI is both logical and risky; that is why we are here today.
So far, however, FDA’s Center for Devices and Radiological Health (CDRH) has been limited in regulating mental health devices. Many unregulated companion or wellness applications are currently available. Moreover, the FDA recently has cleared other digital devices with limited review or despite emerging concerns. For example, the FDA briefing materials for this meeting cite a computer game for attention-deficit/hyperactivity disorder (ADHD) as an “authorized” mental health device.[1] Not noted in the briefing materials is that this device demonstrated very limited effectiveness, if any, in treating most of the symptoms that characterize ADHD.[2]
Another digital device example, albeit one not pertaining directly to mental health or generative AI, is the performance standard for widely used pulse oximeters. That standard has yet to be fully addressed even though data from the COVID-19 pandemic revealed that these mostly class II devices often fail to accurately report blood oxygen levels in persons with dark skin.[3] This example highlights the importance of robust and diverse clinical trials to evaluate the effectiveness and safety of novel medical technologies, including generative AI-enabled mental health devices, which may be especially prone to racial disparities in performance.
For today’s discussion questions, I offer these brief comments.
Regarding the first two questions, the scenario of a “standalone” (perhaps even over-the-counter) chatbot to treat major depressive disorder (MDD) in adult patients who refuse traditional psychotherapy raises the following question for our health care system:
Why are people with such serious diagnoses refusing the standard of care?
It seems the scenario presented above answers this question in part by encouraging the filling of poorly defined treatment “gaps” with highly concerning therapy substitutes. Moreover, it suggests that many of us need to be reminded that FDA’s role is principally to judge the safety and effectiveness of new devices, rather than to “facilitate innovation” (as the FDA often likes to say), which is the purview and responsibility of other entities, such as the National Institutes of Health (NIH), research universities, and device manufacturers.
Accordingly, it is essential that FDA evaluation of any such device require randomized, well-powered studies with robust comparator arms. The evaluation should further involve standardized reviews of the AI software, human-in-the-loop protocols, and input-output time-drift data. De novo Class III (highest-risk) designation should be the default, and FDA staff should include expertise in disciplines germane to randomized designs for evaluating psychotherapy and to the computational technology used in these devices. For these and related FDA staff positions, Public Citizen urges the use of direct congressional funding, because reliance on user fees introduces concerns about “industry capture” by large device firms.
For question 3, Public Citizen urges that children and adolescents not be considered users of autonomous software therapy programs (bots) for a serious mental illness until such bots have been fully evaluated and shown to be safe and effective in adults. Even then, generative AI therapy bots may never be appropriate for children. A prestigious consortium of developmental scientists recently wrote: “As scientists, we are sounding the alarm now because we believe that AI has the potential to derail the foundations of human relationships.”[4]
Finally, here are a few resources with which this committee and the FDA’s AI staff should be familiar, all freely available on Public Citizen’s website:
“Promise and peril: artificial intelligence in health care” by Eagan Kemp[5]
“Chatbots are not people” by Rick Claypool[6]
A letter to the OMB Director (Russell Vought) urging the federal government to block its procurement and deployment of the AI software Grok, because that large language model (LLM) has demonstrated “recurring patterns of ideological bias, erratic behavior, and tolerance for hate speech…”[7]
Thank you.
—
[1] U.S. Food and Drug Administration. Executive summary for the Digital Health Advisory Committee meeting: generative artificial intelligence-enabled digital mental health medical devices. November 5, 2025. https://www.fda.gov/media/189391/download. Accessed November 11, 2025.
[2] Abrams MT. Devices to treat ADHD: do they work? Health Letter January 1, 2023. https://www.citizen.org/article/devices-to-treat-adhd-do-they-work/. Accessed November 5, 2023.
[3] Abrams MT. Bias in measuring blood oxygen in patients with dark skin: comment on the FDA’s latest draft guidance for pulse oximeter makers. March 10, 2025. https://www.citizen.org/article/bias-in-measuring-blood-oxygen-in-patients-with-dark-skin-comment-on-the-fdas-latest-draft-guidance-for-pulse-oximeter-makers/. Accessed November 5, 2025.
[4] Roche EC, Hirsh-Pasek K, Romeo, et al. Statement on the risks of AI to babies and toddlers around the world. September 9, 2025. https://docs.google.com/document/d/1sz0lCkeeEdug5GltKQNODYE8jLV8STNF/edit. Accessed November 5, 2025.
[5] Kemp E. Promise and peril: artificial intelligence in health care. November 21, 2024. https://www.citizen.org/article/promise-and-peril-artificial-intelligence-in-health-care/. Accessed November 5, 2025.
[6] Claypool R. Chatbots are not people: designed-in dangers of human-like AI systems. September 26, 2023. https://www.citizen.org/article/chatbots-are-not-people-dangerous-human-like-anthropomorphic-ai-report/. Accessed November 5, 2025.
[7] Public Citizen. New letter calls on OMB to block Grok implementation. August 28, 2025. https://www.citizen.org/news/new-letter-calls-on-omb-to-block-grok-implementation/. Accessed November 5, 2025.