January 20, 2026

Where can I learn about AI & Mental Health in 2026?

The learning path has changed. Free and low-cost resources to get started, or to keep learning, in 2026. In English or en français.

Originally published on Therapist in the Loop on Substack on January 20, 2026. Republished here with links preserved.

Hi, we're Kate (clinical psychologist) and Hassan (data scientist), the team behind Therapist in the Loop: a free newsletter in which we deliver AI literacy for psychologists. We keep you in the loop on mental health and AI so you can safely advance your practice and improve care.

It takes time to create work that's clear, genuinely useful, and not just something spun up by ChatGPT! We've just launched and would like to make sure the word travels to those interested. If you've learned something useful from this newsletter, consider sharing it with a like-minded colleague.

Over the past couple of years, learning about AI in mental health has forced many of us to step outside our usual professional development channels. That extra effort is paying off: with AI, we can do more than we've ever been able to. Yet high-quality, practice-relevant training and AI literacy can be hard to find.

That's why Therapist in the Loop exists, and we're working hard to prepare in-depth content for you this year. But we also would like to share the best resources we've found so far so you can stay in the loop (pun definitely intended).

For this issue, we've compiled a shortlist of AI-literacy resources, from workshops to attend to researchers to keep tabs on. We've combined Hassan's picks for AI fundamentals with Kate's picks for AI in psychology, ethics and best practice. We've also included a dedicated section for Francophone readers (skip ahead).

Workshops & Webinars in Psychology

Stanford Medicine (Department of Psychiatry and Behavioral Sciences): AI for Mental Health (AI4MH) — Events & recorded talks

Format: Seminar/event series page with a running archive of recorded talks.

Organizer: Stanford Medicine, Department of Psychiatry and Behavioral Sciences, AI for Mental Health initiative.

What you'll find: A curated set of research-facing talks at the intersection of AI and mental health, with specific, concrete topics (for example: LLMs for behavioral health, AI + wearables/smartphones for depression, ethics of AI in clinical practice).

College of Alberta Psychologists (CAP) — 2025 Professional Development Event (AI, privacy, clinical practice)

Format: Free virtual event, full recording available

Why it's worth your time: In case you missed it! A rare "all-in-one" session that puts ethics, governance, and legal risk on the same table as clinical realities, hosted by a regulator. Worth your time even if you're not registered in the host province, as it's relevant to North American psychologists.

Hexagon Training — Ethical Use of AI in Psychology

Format: 3-hour webinar (CPA CE eligible)

Why it's worth your time: A practical ethics-first overview that helps clinicians map common AI use-cases to concrete risks (privacy, bias, scope, documentation, accountability).

Northwestern (CBITs) — Future of Digital Mental Health webinar series (open, mostly recorded)

Format: Webinar series, broad range of speakers/topics

Why it's worth your time: A reliable pipeline of "what's next" in digital mental health, often closer to practice realities than traditional conferences. On the list last year? A presentation of the first RCT of a chatbot designed to treat clinical conditions.

Staying oriented on AI

Bookmark OpenAI's mental health + safety updates

These posts are not neutral (they're partly technical reporting, partly product governance), but they do reveal what major model providers are prioritizing, measuring, and changing. Relevant posts so far include:

OpenAI — AI & mental health research grants (up to $2M total program)

Why it's useful: Tracks what questions are becoming serious research problems (well-being, distress detection, safety, outcomes).

OpenAI — Strengthening ChatGPT's responses in sensitive conversations

Why it's useful: A window into how users experience ChatGPT in sensitive conversations, and how the major vendor is responding.

OpenAI — Parental controls + safety alerts (teen accounts)

Why it's useful: A concrete example of product-level harm-reduction features (limits, quiet hours, alerts) that will shape how families and adolescents use AI.

OpenAI — Sycophancy in GPT-4o

Why it's useful: A clear case study showing that model personality shifts can be a safety issue, especially in emotionally vulnerable contexts.

AI research and AI products are constantly evolving. These are two outlets we follow to stay up to date. If we miss a week, we feel out of the loop!

AI Explained (YouTube)

Format: YouTube channel (news + model capability analysis)

Why it's useful: For the assessment psychologists: I know you've been wondering how smart ChatGPT 5.0 is compared to the Cattell-Horn-Carroll theory of intelligence! ;) This is the place to find the answer! A fairly technical synthesis of fast-moving AI developments, often citing AI research on changes in model behaviour and capabilities.

It's not specific to psychology at all, but this knowledge can help you make decisions about how best to use AI (e.g., comparing the best current models for image generation).

Hard Fork (New York Times podcast; covers technology in general, not just AI)

Format: Weekly podcast (Apple Podcasts, Spotify)

Why it's useful: Fast, intelligible coverage of major AI shifts (products, policy, failures) from two tech journalists, without requiring a technical background. They've covered the intersection of tech and mental health several times, providing situational awareness for emerging controversies and litigation (for example, lawsuits involving OpenAI and youth suicide) and a glimpse into public perceptions of these.

Researchers

Geoffrey Hinton + Yoshua Bengio

Format: Talks, interviews, essays (non-CE)

Why it's useful: World-class AI researchers who frame AI as a broader social and technological shift shaping people's lives. Good for understanding how AI works (good enough for Jon Stewart!) and AI safety: Bengio's recent writing and public work on AI safety (including LawZero) is a good "current signal" of where safety debates are heading.

John Torous (Harvard/BIDMC)

Format: Peer-reviewed papers, talks/podcasts, clinical-facing commentary

Why it's useful: Torous consistently sits at the intersection of evidence, implementation, and real-world risk. He writes directly about generative AI's role in mental health care and has been active in the field for a long time. Best place to follow him? LinkedIn. Or catch his talk at the Stanford University seminar series on January 28th.

Francophone resources

Université Laval — IA + Santé : techniques, applications, cadres éthiques et juridiques

Format: Online self-paced course ("continuous" enrollment); duration ~10 h; cost $100.

Why it's worth your time: Designed for non-specialists, it covers (1) AI basics, (2) the use of health data, (3) technical concepts (supervised/unsupervised learning, deep learning, evaluation), and (4) ethical, legal, and regulatory frameworks.

You can also test your AI literacy here!

Ordre des psychologues du Québec (OPQ) — Intelligence artificielle : promesses, écueils et réalités d'une révolution toute sauf tranquille

Format: Online self-paced course

Why it's worth your time: The first official training offered to Québec psychologists. A good entry point for understanding the promises and pitfalls of AI, and for situating the Ordre's position.

That's a wrap! It's a non-exhaustive list, but we hope it's enough to make you feel like a thoughtful Therapist in the Loop.

Thanks for reading! If you learned something useful from this post, help us grow by sharing it with a like-minded colleague.