Can AI therapy save a hurting mind? In some ways, it can help. A chatbot answers at any hour, speaks in calm language, and can make a lonely person feel heard, at least for a moment. Many people are already trying it. NAMI reports that 12% of U.S. adults say they are likely to use AI chatbots for mental health care in the next six months, and 1% say they already do. JAMA likewise reported in January 2026 that chatbots have become one of the most common sources of talk-therapy support in the United States, and that “therapy or companionship” ranked as the top use of generative AI in 2025. (nami.org)
But governments are starting to draw a clear line. In Illinois, Governor JB Pritzker signed the Wellness and Oversight for Psychological Resources Act in August 2025. The law requires that therapy services in the state be delivered by a licensed professional. It permits AI in support roles, such as administrative or supplementary work, but bars it from making independent therapeutic decisions, communicating therapeutically with clients directly, or generating treatment plans without human review and approval. Each violation carries a civil penalty of up to $10,000. (idfpr.illinois.gov)
Illinois is not alone. Utah took a different path in 2025, regulating “mental health chatbots” rather than banning them outright. Under Utah law, these chatbots must clearly disclose to users that they are AI, not human. The law also restricts how companies may use a user’s personal mental health information and bars them from using chat messages to target advertising. Nevada passed a 2025 law of its own, prohibiting AI providers from claiming their systems can deliver professional mental or behavioral health care and from offering systems specifically programmed to do that work. (le.utah.gov)
So what is the real answer? AI may be useful as a tool, but it is not a therapist. NAMI says AI is not a replacement for clinical care and should stay within safe informational boundaries rather than acting as therapy. Utah officials have likewise said that responsible use requires informed consent, privacy protection, careful monitoring, and attention to patient welfare. In short, AI can support human care, but trust, judgment, and responsibility still belong to real people. (nami.org)