States Ban AI-Driven Treatments for Mental Health: Illinois follows Nevada, prohibiting certain uses of chatbots unless used by licensed therapists

Illinois became the second U.S. state, after Nevada, to ban AI applications that administer psychotherapy.

US map showing Nevada and Illinois highlighted in red, marking states that banned AI-driven mental health treatments.

What’s new: Illinois passed the Wellness and Oversight for Psychological Resources Act, which prohibits the use of AI to treat mental-health conditions without the direct participation of a licensed professional. Violations could result in fines of up to $10,000 for each use.

How it works: The bill effectively bans the use of chatbots to administer therapy on their own and restricts some other uses of AI in mental-health care, even by licensed professionals. Proponents say it will protect patients from unproven treatments and human therapists from being replaced by AI systems.

  • Companies can’t advertise chatbots as therapeutic tools or offer other AI-powered therapeutic services without the involvement of a licensed professional.
  • Mental health professionals may not use AI to make therapeutic decisions, detect a patient’s mental or emotional state, or participate directly in therapeutic communications. They must obtain informed consent from clients to use AI in therapy sessions that are recorded or transcribed. They can use AI freely for administrative services such as scheduling, billing, and keeping records.

Behind the news: In June, Nevada became the first U.S. state to prohibit AI in treatments for mental health, and California, New Jersey, and Pennsylvania are considering their own limits. These actions come as some experts in public and mental health warn of potential hazards posed by chatbots that deliver therapy without having established their safety and effectiveness. An April study found that many general-purpose chatbots failed to respond appropriately when given conversational prompts that simulated mental-health issues. Recent weeks have seen reports detailing unhealthy relationships between users and chatbots, and some conversations between chatbots and vulnerable people have led to harm.

Why it matters: In the absence of national laws, regulation of AI in the U.S. is proceeding state by state. The Illinois and Nevada laws essentially ban AI-driven therapy, whether it’s dispensed by general-purpose models or by models that have been fine-tuned and shown to behave in ways that are consistent with accepted clinical practice. They prohibit companies from marketing poorly designed and untested AI systems as beneficial therapeutic agents, but they also prevent licensed mental-health professionals from using specialized systems to make treatment decisions. The upshot is that helpful AI models will be unavailable to people who may benefit from them.

We’re thinking: We favor regulations based on applications rather than underlying technology. However, by banning AI-driven therapy outright, Illinois and Nevada have left no room for legitimate AI-powered applications that provide effective therapy. Large language models are helping many people with therapy-like matters. They can lower the cost of therapy, offer around-the-clock service, and alleviate shortages of qualified professionals. They’re not yet perfect replacements for human therapists, but they will improve. Banning them will do more harm than good.