Health

Nation Faces Challenges with AI Therapy App Regulations Amid Growing Mental Health Demands

September 29, 2025 · 6 Mins Read

In the absence of robust federal guidelines, a growing number of states have begun regulating apps that offer AI-based “treatment” as more people turn to artificial intelligence for mental health guidance.

Nonetheless, the laws passed this year do not fully keep pace with the fast-evolving field of AI software. App developers, legislators, and mental health advocates say the resulting patchwork of state laws fails to adequately safeguard users or hold the creators of harmful technology accountable.

“We’re excited to see you in the future,” stated Karin Andrea Stephan, CEO and co-founder of the mental health chatbot app Earkick.

___

Editor’s Note – This article contains a discussion about suicide. If you or someone you know needs assistance, the US National Suicide and Crisis Lifeline can be reached by calling or texting 988. You can also access an online chat at 988Lifeline.org.

___

State laws adopt various approaches. Illinois and Nevada have prohibited the use of AI for mental health treatment. Utah has imposed certain restrictions on therapy chatbots, including the protection of user health data and a requirement for clear disclosures indicating they are not human. Pennsylvania, New Jersey, and California are also evaluating approaches to regulate AI therapy.

The impact on users varies. Some applications restrict access in states where their use is banned, while others have chosen not to make changes pending clearer legal guidance.

Many laws do not cover general-purpose chatbots such as ChatGPT, which are not explicitly marketed for treatment but are used by many people for that purpose. Those bots have faced lawsuits over cases in which users lost touch with reality or took their own lives after interacting with them.

Vaile Wright, who oversees health care innovation at the American Psychological Association, acknowledged that such apps could help meet a real need, given the shortage of mental health providers, the high cost of care, and uneven access even for insured patients.

Mental health chatbots that are grounded in science, created with expert input, and monitored by humans could change the landscape, Wright said.

“This could be a resource for individuals before they reach a crisis point,” she remarked. “This is something we don’t currently see available in the commercial market.”

Hence, the need for federal regulations and oversight is pressing, she emphasized.

Earlier this month, the Federal Trade Commission opened inquiries into seven AI chatbot companies, including the makers of ChatGPT, Grok (the chatbot on X), Character.AI, and Snapchat, as well as Google and the parent company of Instagram and Facebook, to learn how they measure, test, and monitor potentially harmful impacts of the technology on children and teens. The Food and Drug Administration has scheduled an advisory committee meeting on November 6 to discuss AI-enabled mental health devices.

Federal bodies could restrict how chatbots are marketed, limit addictive features, require companies to disclose to users that they are not medical providers, mandate that companies track and report suicidal ideation, and offer legal protections to people who report unethical practices, Wright said.

Not all applications block access

Apps that use AI in mental health care span a spectrum, from “companion apps” to “AI therapists” to “mental wellness” applications, which makes them hard to define and has complicated efforts to write precise legislation.

This has led to a range of regulatory approaches. Some states exempt companion apps designed purely for social interaction while prohibiting those that claim to provide mental health treatment, imposing penalties of up to $10,000 in Illinois and $15,000 in Nevada.

However, classifying a single application can prove complicated.

Stephan from Earkick mentioned that there remains considerable ambiguity around Illinois law.

Initially, she and her team did not call their chatbot, which resembles a cartoon panda, a therapist. But when users began using that word in reviews, they embraced the terminology so the app would show up in searches.

Last week, they backed away from therapy and medical terminology. Earkick’s website had described its chatbot as “your empathetic AI counselor to support your wellness journey,” but it is now labeled a “chatbot for self-care.”

Nonetheless, “we do not provide diagnoses,” Stephan clarified.

Users can activate a “panic button” to alert trusted individuals if they find themselves in distress. Chatbots also “nudge” users to seek therapy should their mental health decline. However, Stephan noted that they are not designed as a suicide prevention tool, and if someone mentions self-harm to the bot, law enforcement is not alerted.

Stephan is pleased that AI is being scrutinized thoughtfully but is concerned about the nation’s ability to keep pace with rapid innovations.

“The rate of evolution is overwhelming,” she remarked.

Other applications have swiftly curtailed access. When users in Illinois attempt to download the AI therapy app Ash, they receive a message urging them to press lawmakers to reconsider what the company calls “misguided legislation” that restricts applications like Ash.

Ash representatives did not respond to multiple requests for an interview.

Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, said the ultimate objective is to ensure that only licensed therapists provide therapy.

“Therapy encompasses more than just verbal exchanges,” Treto said. “It demands empathy, clinical judgment, and ethical responsibility, which AI currently cannot emulate.”

One chatbot company aims to get treatment right

In March, a Dartmouth College team published the first randomized clinical trial of a generative AI chatbot for mental health treatment.

The objective was to develop a chatbot called Therabot to assist individuals diagnosed with anxiety, depression, or eating disorders. The chatbot was trained using scenarios and dialogues crafted by the team to deliver evidence-based responses.

The study found that users rated Therabot similarly to a therapist and had meaningfully lower symptoms after eight weeks compared with people who did not use the app. Every interaction was monitored by a human who intervened if the chatbot’s responses were harmful or not evidence-based.

Nicholas Jacobson, a clinical psychologist leading the research, noted that the findings showed early promise, although further research is needed to confirm whether Therabot is effective for a broader audience.

“This field is still vastly new; we must approach current developments with caution,” he remarked.

Many AI applications are optimized for engagement and built to affirm everything the user says, rather than challenging their thoughts as a therapist would. Many walk the line between companionship and therapy, blurring intimacy boundaries that therapists are ethically bound to maintain.

The Therabot team aims to navigate these complexities carefully.

The app remains in testing and is not yet widely available. However, Jacobson worries about the implications of a strict ban, as it may stifle cautious development efforts. He observed that Illinois lacks a clear path for providing evidence that apps are both safe and effective.

“They aim to protect individuals, but the traditional system truly falls short,” he explained. “Adhering rigidly to the status quo is not a sustainable solution.”

Regulators and advocates of the laws say they are open to changes. Nevertheless, today’s chatbots are no remedy for the shortage of mental health providers, said Kyle Hillman, who lobbied for the bills through the National Association of Social Workers.

“Not everyone experiencing sadness requires a therapist,” he asserted. “However, for those grappling with genuine mental health issues and suicidal thoughts, saying ‘we know there’s a labor shortage, but here’s a bot’ seems rather privileged.”

___

The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.

Source: apnews.com
