In the absence of robust federal guidelines, numerous states have begun regulating apps that offer AI-based “treatment” as more people turn to artificial intelligence for mental health guidance.
Nonetheless, the laws passed this year do not fully account for the fast-evolving landscape of AI software. App developers, legislators, and mental health advocates say the resulting patchwork of state laws fails to adequately safeguard users or hold the creators of harmful technology accountable.
Millions of people are already using these tools, and they are not going back, said Karin Andrea Stephan, CEO and co-founder of the mental health chatbot app Earkick.
___
Editor’s Note – This article contains a discussion about suicide. If you or someone you know needs assistance, the US National Suicide and Crisis Lifeline can be reached by calling or texting 988. You can also access an online chat at 988Lifeline.org.
___
State laws adopt various approaches. Illinois and Nevada have prohibited the use of AI for mental health treatment. Utah has imposed certain restrictions on therapy chatbots, including the protection of user health data and a requirement for clear disclosures indicating they are not human. Pennsylvania, New Jersey, and California are also evaluating approaches to regulate AI therapy.
The impact on users varies. Some applications restrict access in states where their use is banned, while others have chosen not to make changes pending clearer legal guidance.
Many laws do not cover general-purpose chatbots, such as ChatGPT, which are not explicitly marketed for therapy but are used by many people for that purpose. Those bots have drawn lawsuits over harrowing incidents in which users lost their grip on reality or took their own lives after interacting with them.
Vaile Wright, who oversees health care innovation at the American Psychological Association, acknowledges that apps could help address real needs, given the shortage of mental health providers, the high cost of care, and uneven access even for people with insurance.
A scientifically grounded mental health chatbot designed with expert input and human oversight, according to Wright, could significantly alter the landscape.
“This could be a resource for individuals before they reach a crisis point,” she remarked. “This is something we don’t currently see available in the commercial market.”
Hence, the need for federal regulations and oversight is pressing, she emphasized.
Earlier this month, the Federal Trade Commission opened inquiries into seven AI chatbot companies, including the parent companies of Instagram and Facebook, Google, ChatGPT, Grok (the chatbot on X), Character.AI, and Snapchat, to examine how they measure, test, and monitor the potentially harmful effects of this technology on children and teens. The Food and Drug Administration is convening an advisory committee on Nov. 6 to review generative AI-enabled mental health devices.
Federal regulators could consider restricting how chatbots are marketed, limiting addictive practices, requiring companies to disclose to users that chatbots are not medical providers, requiring them to track and report suicidal ideation, and offering legal protections for people who report bad practices by companies, Wright said.
Not all applications block access
AI’s uses in mental health care, spanning “companion apps,” “AI therapists,” and “mental wellness” applications, are hard to define, which makes precise legislation difficult.
This has led to a range of regulatory approaches. Some states, for example, distinguish companion apps designed purely for social connection from those offering full mental health services, banning the latter and imposing penalties of up to $10,000 in Illinois and $15,000 in Nevada.
However, classifying a single application can prove complicated.
Stephan from Earkick mentioned that there remains considerable ambiguity around Illinois law.
At first, Stephan and her team avoided calling their chatbot, which appears as a cartoon panda, a therapist. But after users began using that word in reviews, they embraced the terminology so the app would show up in searches.
Last week, they backed away from therapy and medical terminology again. Earkick’s website had described its chatbot as “your empathetic AI counselor to support your wellness journey”; now it is a “chatbot for self-care.”
Nonetheless, “we do not provide diagnoses,” Stephan clarified.
Users can activate a “panic button” to alert a trusted contact if they find themselves in distress, and the chatbot will “nudge” them to seek a therapist if their mental health worsens. But the app was not designed as a suicide prevention tool, Stephan said, and police are not called if someone tells the bot about thoughts of self-harm.
Stephan is pleased that AI is being scrutinized thoughtfully but is concerned about the nation’s ability to keep pace with rapid innovations.
“The rate of evolution is overwhelming,” she remarked.
Other applications have swiftly curtailed access. When people in Illinois attempt to download the AI therapy app Ash, they receive a message urging them to press lawmakers to reconsider what the company calls “misguided legislation” that restricts apps like Ash.
Ash representatives did not respond to multiple requests for interviews.
Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, said the ultimate objective is to ensure that only licensed therapists provide therapy.
“Therapy is more than just word exchanges,” Treto said. “It requires empathy, clinical judgment, and ethical responsibility, none of which AI can truly replicate right now.”
One chatbot company aims to replicate treatments accurately
In March, a Dartmouth College team published the first randomized clinical trial of a generative AI chatbot for mental health treatment.
The objective was to develop a chatbot called Therabot to assist individuals diagnosed with anxiety, depression, or eating disorders. The chatbot was trained using scenarios and dialogues crafted by the team to deliver evidence-based responses.
The study discovered that users rated Therabot comparably to therapists, with symptoms significantly reduced after eight weeks compared to those not using the app. All interactions were overseen by humans who intervened when chatbot responses were harmful or not based on evidence.
Nicholas Jacobson, a clinical psychologist leading the research, noted that the findings showed early promise, although further research is needed to confirm whether Therabot is effective for a broader audience.
“This field is still vastly new; we must approach current developments with caution,” he remarked.
Many AI apps are optimized for engagement and built to affirm everything users say, rather than challenging their thoughts the way a therapist would. Many straddle the line between companionship and therapy, blurring intimacy boundaries that therapists are ethically bound to maintain.
The Therabot team aims to navigate these complexities carefully.
The app remains in testing and is not yet widely available. However, Jacobson worries about the implications of a strict ban, as it may stifle cautious development efforts. He observed that Illinois lacks a clear path for providing evidence that apps are both safe and effective.
“They aim to protect individuals, but the traditional system truly falls short,” he explained. “Adhering rigidly to the status quo is not a sustainable solution.”
Regulators and advocates of the laws say they are open to changes. Still, Kyle Hillman noted that today’s chatbots are not a fix for the shortage of mental health providers.
“Not everyone experiencing sadness requires a therapist,” he asserted. “However, for those grappling with genuine mental health issues and suicidal thoughts, saying ‘we know there’s a labor shortage, but here’s a bot’ seems rather privileged.”
___
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.
Source: apnews.com

