AI and the Future of Therapy: Opportunities, Limits, and Staying Human
Artificial intelligence is rapidly entering the mental health landscape. From therapy chatbots and generative tools like ChatGPT to AI features embedded in electronic health record systems, new technologies are beginning to shape how therapists document and communicate, and even how clients seek support.
But what does this shift actually mean for clinicians?
In our newest Clearly Clinical podcast CE course, Kelly Higdon, LMFT, and Miranda Palmer, LMFT, explore how artificial intelligence is already showing up in mental health care and the ethical questions therapists need to be thinking about now.
How AI Is Already Appearing in Mental Health Care
While the idea of AI therapists may feel futuristic, many people are already interacting with AI for mental health support. Some individuals turn to therapy chatbots for emotional guidance or self-help exercises. At the same time, clinicians are increasingly encountering generative AI tools that promise to streamline tasks like documentation, treatment planning, and administrative work.
The episode explores three major categories of artificial intelligence currently influencing the field:
AI therapy chatbots that people use for emotional support and mental health guidance
Generative AI tools, such as ChatGPT and AI-assisted documentation features in electronic health records
Predictive algorithms used by health systems and insurers to guide care decisions
Each of these technologies introduces both potential opportunities and important ethical considerations for therapists.
Potential Benefits of AI for Therapists
Artificial intelligence tools are often marketed as a way to reduce administrative burden and increase efficiency in clinical practice. For example, some AI tools can help draft progress notes, summarize sessions, or assist with documentation.
For busy clinicians, these tools may offer benefits such as:
Reduced time spent on documentation
More time available for client care
Greater accessibility to mental health support tools for the public
However, efficiency gains do not eliminate the ethical responsibilities therapists carry when technology enters the therapeutic process.
Ethical Concerns About AI in Psychotherapy
As AI becomes more visible in mental health care, many clinicians are asking important questions about privacy, bias, and professional responsibility.
Research has shown that while some users report a sense of therapeutic alliance with mental health chatbots, other studies raise serious concerns about how these systems perform in complex clinical situations.
For example:
Studies have found that AI systems may produce biased or stigmatizing responses toward individuals with mental illness.
Some AI tools have demonstrated limitations when responding to high-risk mental health scenarios, including suicide-related prompts.
Training data used by AI systems may reflect broader social biases that influence responses.
These findings highlight why licensed clinicians remain responsible for ensuring that the tools used in practice support ethical and effective care.
Why Human Therapists Still Matter
Despite rapid technological advances, psychotherapy remains fundamentally relational. The ability to recognize subtle emotional shifts, respond to complex clinical presentations, and build trust over time is central to therapeutic work.
In this episode, Higdon and Palmer emphasize that while AI tools may support certain administrative or informational tasks, the human elements of psychotherapy — attunement, empathy, and ethical judgment — remain irreplaceable.
For therapists, the challenge is not simply deciding whether to adopt or reject AI technologies. Instead, it involves learning how to critically evaluate new tools while protecting the integrity of the therapeutic relationship.
What Therapists Will Learn in This CE Course
In this Clearly Clinical podcast CE course, listeners will:
Hear about the major categories of AI currently influencing mental health care
Learn potential benefits and limitations of AI tools in psychotherapy
Consider ethical issues related to privacy, bias, and clinical responsibility
Learn how therapists can thoughtfully integrate technology while maintaining human-centered care
The conversation offers a grounded and thoughtful look at a rapidly evolving topic that many clinicians are encountering in practice.
Earn Continuing Education Credit
This episode is available as an on-demand podcast CE course through Clearly Clinical, allowing mental health professionals to earn continuing education credit while exploring one of the most important emerging topics in the field.
Clearly Clinical offers unlimited podcast CE courses through our low-cost annual membership, with some of the strongest CE approvals available for mental health professionals, including APA, NBCC, ASWB, and more.
If you’re curious about how artificial intelligence may influence the future of therapy, this episode offers a balanced and thoughtful place to begin.
Learn More
AI and the Future of Therapy: Opportunities, Limits, and Staying Human (Ep. 265) is now available as an on-demand CE podcast course.
Listen to this episode for free on YouTube (CE credit is available only to listeners with an active paid membership): AI and the Future of Therapy: Opportunities, Limits, and Staying Human, Ep. 265
Join our annual membership ($130 per year) for unlimited podcast CE credit.
REFERENCES
Beatty, L., Fitzpatrick, K. K., Darcy, A., et al. (2022). Evaluating the therapeutic alliance with a free-text CBT conversational agent (Wysa). Frontiers in Digital Health.
https://www.frontiersin.org/articles/10.3389/fdgth.2022.847991/full
Xu, L., Sanders, L., Li, K., & Chow, P. I. (2025). The digital therapeutic alliance with mental health chatbots. JMIR Mental Health.
https://pmc.ncbi.nlm.nih.gov/articles/PMC12552820/
Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy using a conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health.
https://mental.jmir.org/2017/2/e19/
Abd-Alrazaq, A., Rababeh, A., Alajlani, M., et al. (2020). Effectiveness and safety of using chatbots to improve mental health: Systematic review and meta-analysis. Journal of Medical Internet Research.
https://www.jmir.org/2020/7/e16021/
Abramson, A. (2025). Exploring the dangers of AI in mental health care. Stanford Human-Centered AI Institute.
https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
