
AI Tech for Mental Healthcare

A Practicing Clinician’s View


According to NIMH, more than 20% of American adults live with a diagnosable mental health disorder. While it’s clear that mental health treatment is absolutely needed, the backend workings of mental healthcare in America are grim, and they are a big reason so many go without proper treatment and care. This isn’t surprising when you consider the barriers to mental healthcare access in America, including a shortage of providers, inadequate insurance coverage, and the lingering generational stigma associated with mental illness. But is the answer AI? Not hardly, from my standpoint. Can it be helpful AT ALL? Maybe. Let’s consider HOW AI can be used in support of better mental health, while also considering its significant limitations and issues.


The Good

In general healthcare, AI is currently being used to facilitate early disease detection, enable better understanding of disease progression, optimize medication or treatment dosages, and uncover novel treatments. “A major strength of AI is rapid pattern analysis of large datasets. Areas of medicine most successful in leveraging pattern recognition include ophthalmology, cancer detection, and radiology, where AI algorithms can perform as well or better than experienced clinicians in evaluating images for abnormalities or subtleties undetectable to the human eye.” Cool. AI has better ‘vision’ for scanning images and a greater ability to ‘see’ patterns. That’s useful. These intelligent systems are increasingly being used to support clinical decision-making. AI can also triage patients, automate appointment scheduling, and surface treatment suggestions based on patient history. Also cool. Scanning symptoms for effective triage of care and offering treatment suggestions to providers based on gathered history data are, in fact, good uses of AI in healthcare.


In mental healthcare, an individual’s unique bio-psycho-social-spiritual profile is best suited to fully explain their holistic mental health. I say it like this: Whole person in Whole context. However, we humans are believed to have a relatively narrow understanding of the interactions across these biological, psychological, and social areas. There is considerable heterogeneity in the pathophysiology of mental illness—meaning the abnormal brain and body functioning present when someone has a diagnosable mental health disorder varies widely from person to person—and identification of certain biomarkers may allow for more objective, refined definitions of these disorders. Leveraging AI techniques offers the ability to develop better prediagnosis screening tools and to formulate risk models of an individual’s predisposition for, or risk of developing, a mental health disorder. Fantastic. I like this. We know that machine learning models can analyze vast amounts of data—text messages, voice patterns, wearable data, and more—to spot early signs of anxiety, depression, or even PTSD. That’s all fine and good, as long as the patient agrees to their data (text messages, voice, and data from wearables) being analyzed by a machine (and I feel I should make a distinction here: the patient agrees to their data being analyzed by a machine, then reviewed, interpreted, and explained by their healthcare provider). Awesome sauce. As long as we all agree, your doctor can access data analyzed by a machine to inform effective treatment for you, the patient, using AI tools. These are some good developments and uses of AI in healthcare and mental healthcare, for sure.
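
For the more technically curious, here is a rough sketch (in Python) of the kind of “risk model” I’m describing. Everything in it (the feature names, the numbers, the synthetic data, the thresholds) is made up purely for illustration; it is not any real screening tool, and a real one would be trained on clinically validated data and interpreted by a human provider.

```python
# Illustrative only: a toy "prediagnosis risk" model over made-up wearable-style features.
# Feature names, data, and labels are hypothetical; no real screening tool is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic features per person: [avg nightly sleep (hours), resting heart rate (bpm), daily steps (thousands)]
n = 200
X = np.column_stack([
    rng.normal(7.0, 1.2, n),
    rng.normal(68, 9, n),
    rng.normal(7.5, 2.5, n),
])

# Synthetic "elevated risk" labels, loosely tied to shorter sleep and lower activity
risk_score = -0.8 * (X[:, 0] - 7.0) + 0.05 * (X[:, 1] - 68) - 0.3 * (X[:, 2] - 7.5)
y = (risk_score + rng.normal(0, 0.5, n) > 0.5).astype(int)

# A simple classifier standing in for the "risk model" described above
model = LogisticRegression().fit(X, y)

# A hypothetical new patient: short sleep, elevated heart rate, low activity
new_patient = np.array([[5.5, 80, 3.0]])
print(f"Flag for provider review: {model.predict_proba(new_patient)[0, 1]:.0%} estimated risk")
```

The point of the sketch is the workflow, not the math: the model only produces a probability to flag for review, and it is the human provider who reviews, interprets, and explains it to the patient, as described above.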


The Bad

One of the biggest predictors of success in mental health treatment is the therapeutic alliance between therapist and client. Mental health practitioners are more hands-on and patient-centered in their clinical practice than most non-psychiatric practitioners, relying much more on “soft” skills, such as developing rapport and forming relationships with patients (active listening, social awareness, etc.) and directly observing patient behaviors and emotions (assessment, psychoeducation, feedback, etc.). Mental health is not one-size-fits-all, and neither should the treatments be. That’s why individual psychotherapy with a human therapist is a tailor-made healthcare service. Therapists are trained to adapt theory, modality, and treatment options based on the presenting information from the patient, not just on diagnostic criteria.


Some of the argument for AI mental health chatbots is that AI can assess mood changes through voice inflection, language patterns, and even facial expressions during video sessions. Hate to break it to you, but humans can too! And human care providers are exceptionally good at this. This isn’t a unique benefit of AI in mental healthcare. One mental health provider said AI use in mental healthcare would be “more supplemental, if anything, because of the limited scope of the open-ended dialogue—and developing rapport with the client and getting a good clinical history is really important.” Yes, getting a good clinical history is called assessment. A full mental health assessment is conducted between provider and patient, with the provider able to ask more, or more nuanced, questions to get accurate information and the patient able to change, correct, or restate information. This back-and-forth is less effective and less accurate with AI. Some folks who have used AI as if it were a therapist have had less-than-stellar reports. “It seems like it’s trying to be empathetic, but I don’t really think it can get there,” one tester shared. “It would give me the same responses repeatedly like ‘You have handled difficult situations before and come out stronger.’ But it doesn’t know anything about me or anything that I’ve been through, so it felt very out of place…I could tell it wanted to understand my emotions, but it didn’t seem capable of fully grasping my feelings.” Others were hesitant to share honestly with an AI chatbot, citing “I don’t trust or know if there is ‘doctor-patient’ confidentiality so I am not going to be 100% honest and forthright.” Some noted that they were hesitant to share details because they could not physically see and hear the response from the “provider,” while others felt wary of their responses potentially being recorded for the benefit of machine learning. You know what’s missing here? Trust! And developing trust is a HUGE part of developing rapport and establishing a therapeutic alliance. The strength of bonds and trust with therapists is one of the overall predictors of efficacy in talk and cognitive behavioral therapy (both of which are main components of my practice). However, placing that same amount of trust in a machine and developing a deep relationship with it is not healthy in the long run, as plenty of reports and studies on AI chatbot use have already shown.


Part of the problem with AI as a stand-in therapist is its limited ability to ask for elaboration and understand more of the context the patient provides, as well as its inability to give more specific feedback. Some users felt a human could react more specifically and directly to their exact situation and give more targeted feedback. One psychiatrist, heavily involved in the development of an AI chatbot for mental health, said, “Even though AI will keep getting better at psychoeducation and skills training, I personally don’t think it should ever replace human relationships, being with a therapist, friend, or loved one.” I absolutely agree. Nuanced empathy, shared human history, and the ability to diagnose and plan treatment are human strengths. We know from current evidence-based research in psychology and psychotherapy that human-to-human interaction and connection can go a long way toward improving a person’s mental health. Genuine, authentic interactions are a really beneficial part of the healing process, maybe the most important part, depending on which disorder we’re treating.


Most, if not all, AI chatbots are large language models (LLMs)—a type of machine learning model that can comprehend and generate human language text after being trained on large amounts of data. A limitation I’ve always seen with this is the data input. Who’s teaching the machine which data are important and which are not? Who’s involved in the development, financial support, and execution of these LLMs? It’s often not transparent, which will always give me pause with any AI use. We know that the everyday biases we exhibit in our interactions with other people show up in AI algorithms, especially when it comes to things like sexism and racism, and this has been documented by many AI researchers. People who do not have access to mental healthcare or who come from lower socioeconomic backgrounds will be using these AI apps, but if the apps themselves are biased against the very groups they’re intended to serve, then we have an even bigger problem on our hands. Discrimination is a human problem, and with humans supplying the data, that problem will also exist in AI, but unfettered and less controlled (especially without proper regulations to safeguard the public against this kind of thing). There is also evidence that these AI chatbots are designed—and therefore perform as such—to offer affirmation almost unconditionally (the way social media is designed, to keep you coming back to the app). There is so much danger in this for mental health, and it is NOT something human therapists do. We’re trained to gently challenge unhealthy thoughts and behaviors and guide patients toward healthier choices and better outcomes.


And finally, much of the current research on AI chatbots for mental health support has been non-randomized, non-controlled, and small in sample size. That’s far from comprehensive evidence, which means it would be a huge stretch to assume the benefits before they’ve been properly tested. AND! Many studies are being conducted by the very companies trying to sell their product, which is a research-integrity and conflict-of-interest issue. Huge red flag. A researcher at the APA has said, “There’s really barely any evidence that shows that AI therapy chatbots are in fact effective. We do not have longitudinal studies that look at people with diagnosis of mental disorders who did not have any other kind of help, who used these chatbots, and can say that they benefited from these kinds of interventions.” Needless to say, more research needs to be done before we can say ‘yea’ or ‘nay’ on the benefits of AI chatbots for mental health improvement.


The Blend

Now, how do we coexist? Because AI is here, and I guess we had better get used to it (even though I don’t want to). AI’s role in mental healthcare should be about empowerment and support, not the replacement of human therapists, who have incredible people skills and, usually, extensive experience with people at their worst and most vulnerable. It’s a tool. A hammer doesn’t work by itself. The most appropriate approach would be hybrid digital and human therapy platforms that integrate in-session therapy with clinicians and interactive AI-guided tools that support and enhance the therapy being conducted by the human provider. That’s how I see AI having the most benefit, and the least harm, for individuals struggling with their mental health.


How do we go about ensuring that design, investment, testing, and public use are not causing harm? Clinicians need to be involved, at every level, at every stage. Clinical experts need to be part of the design and need to periodically audit the prompts; the AI can then tailor wording on the fly while staying within those human-approved boundaries. There need to be safeguards in place to help prevent it from giving potentially inappropriate or harmful responses and to ensure that it provides users with helpful resources in a crisis. Those safeguards then need to be studied, reviewed, and refined for better accuracy and to keep pace with current treatment research. Research also needs to be conducted by an objective third party with no stake in the AI tech. Funding needs to be scrutinized very closely. Where investors and clinicians differ, clinical judgment needs to be the top priority. Ultimately, AI tech in mental health needs to be about mental health, not about how cool the tech is or how much money could be made by rolling it out and marketing it to millions of vulnerable users. That’s unethical. And anything unethical shouldn’t even be a whisper in mental healthcare.
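
To make “human-approved boundaries” a little more concrete, here is a minimal sketch in Python of what a clinician-audited guardrail layer could look like. The keyword list, the templates, and the refusal wording are placeholders I invented for illustration; they are not a vetted clinical protocol, and any real safeguard would be designed, audited, and refined by clinicians exactly as argued above.

```python
# Illustrative only: a clinician-audited guardrail layer around a chatbot's draft reply.
# Keywords, templates, and resource text are placeholders, not a vetted clinical protocol.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}  # clinician-maintained list

CRISIS_RESOURCE = (
    "It sounds like you may be in crisis. Please contact your provider, or call or text "
    "the 988 Suicide & Crisis Lifeline (in the U.S.) right now."
)

# Clinician-approved response templates; the model may only tailor wording within these.
APPROVED_TEMPLATES = {
    "reflect": "It sounds like you're feeling {feeling}. Can you tell me more about that?",
    "skill": "One skill your therapist suggested is {skill}. Would you like to try it together?",
}

def guarded_reply(user_message: str, draft_template: str, **slots) -> str:
    """Return a reply only if it stays inside the human-approved boundaries."""
    # 1. Crisis safeguard: always escalate to human-written crisis resources first.
    if any(kw in user_message.lower() for kw in CRISIS_KEYWORDS):
        return CRISIS_RESOURCE

    # 2. Boundary check: refuse anything outside the clinician-approved template set.
    if draft_template not in APPROVED_TEMPLATES:
        return "I'm not able to help with that. Let's bring it to your therapist at your next session."

    # 3. Otherwise, fill the approved template with tailored wording.
    return APPROVED_TEMPLATES[draft_template].format(**slots)

print(guarded_reply("I feel so overwhelmed lately", "reflect", feeling="overwhelmed"))
print(guarded_reply("I want to end my life", "reflect", feeling="hopeless"))
```

The design choice worth noticing is that the crisis check and the approved-template set sit outside the model: clinicians own them, audit them, and refine them over time, and the AI only works within them.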


Some other ways we can blend AI tech with the current mental healthcare world include alleviating the administrative burden on providers. From appointment reminders to billing assistance, AI tools can be used to streamline the behind-the-scenes work of mental health providers. However, I have yet to see a version of AI that’s any better than what my EHR can do with just a little bit of my input or direction. Leveraging AI for research purposes could also be a great benefit. AI is like a magnifying glass for researchers, allowing them to sift through massive datasets and uncover trends that weren’t visible before. From studying population-wide mental health patterns to evaluating treatment outcomes, AI could accelerate the pace of discovery, which could lead to better treatment options. Real-time monitoring, done with permission and protected by HIPAA and a patient’s right to confidentiality about their health, could provide a patient’s doctor or provider with useful and valuable information about their habits, patterns, and physiological responses. However, there have GOT TO BE better regulations, and enforcement of those regulations, around sharing, selling, and using data to extract either more information or more money from users. That has to be an absolute NO from any AI tech developer or investor before I can see any benefit in wearables and smartphone apps equipped with AI to provide continuous monitoring of data like heart rate, sleep cycles, and daily activity. If this information isn’t going JUST to the patient’s doctor, then it absolutely should not be used in ANY healthcare AI tech. I am firm on that.
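
Here is one small sketch, again in Python, of what the “goes only to the patient’s doctor” rule could look like as a consent gate. The field names, the single registered treating provider, and the data shown are assumptions made up for this example; a real implementation would sit behind properly regulated, HIPAA-compliant infrastructure.

```python
# Illustrative only: a consent gate so monitoring summaries go solely to the treating provider.
# Field names and the single-recipient rule are assumptions for this sketch, not a real system.
from dataclasses import dataclass

@dataclass
class MonitoringSummary:
    patient_id: str
    avg_sleep_hours: float
    resting_heart_rate: int
    daily_steps: int

@dataclass
class Consent:
    patient_id: str
    share_with_provider: bool   # explicit, revocable patient consent
    treating_provider_id: str   # the ONLY permitted recipient

def release_summary(summary: MonitoringSummary, consent: Consent, recipient_id: str) -> MonitoringSummary:
    """Release wearable data only with consent, and only to the patient's own provider."""
    if not consent.share_with_provider:
        raise PermissionError("Patient has not consented to sharing monitoring data.")
    if recipient_id != consent.treating_provider_id:
        raise PermissionError("Monitoring data may only be sent to the treating provider.")
    return summary  # any other recipient (advertisers, model training, resale) never gets here

summary = MonitoringSummary("pt-001", avg_sleep_hours=6.2, resting_heart_rate=74, daily_steps=4800)
consent = Consent("pt-001", share_with_provider=True, treating_provider_id="dr-smith")
print(release_summary(summary, consent, recipient_id="dr-smith"))
```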


Final thoughts (for today anyway)

As a mental health provider who trained and practiced in real-world settings with real, live people, the shift to tech has been gradual for me. My first mental health position was still using paper charting (in 2011), before the requirement for every healthcare facility to have and use an EHR came on the scene. (By the way, that law was passed to encourage cross-discipline coordination of care for patients with multiple healthcare providers—has that happened to the degree they believed it would by requiring providers to have and use EHR systems? Not hardly, I can assure you.) When I first started my private practice (in 2016), all of my intake forms were paper, and after a couple of years, I moved everything to a client portal within my EHR system. For 4+ years, I saw patients strictly in-office, in-person, for psychotherapy sessions. In March 2020, when the global pandemic hit, I moved my entirely in-person practice to an entirely virtual practice literally overnight. More than five years later, I’ve never gone back to in-person sessions.


Something interesting I’ve witnessed firsthand is the concern about and opposition to telehealth, especially for therapy, when everything went virtual in 2020. Insurance companies didn’t want to cover healthcare services provided via real-time video conferencing with a real human provider because it was assumed to be less effective. There was also concern about disparities in service provision due to location or socioeconomic status and access to the reliable devices and internet connection needed to engage in telehealth. Now here we are, a mere five years later, and there is growing favor for the use of AI chatbots (oftentimes in place of real human therapists) in mental healthcare, for greater access to mental health support, while alluding to there being no disparity or access concerns with AI chatbots. C’mon now! That’s crazy-making at its best! Five years ago, therapy wasn’t as “good” or “effective” if done in real time with a real therapist through synchronous video technology…but NOW there’s incredible endorsement for AI chatbots to “fill the gaps in mental healthcare,” and these chatbots are considered just as effective as human therapists. That’s an incredible leap in five years, wouldn’t you say? I’m all for greater access; I’m all for more support; I’m all for adopting things that actually improve the landscape—whether for providers or for patients—in mental healthcare; but I am NOT all for innovation, change, and the new if it isn’t actually helpful, and especially not if it’s harmful.

 

Where do you stand? If it were your mental healthcare, or the mental healthcare of someone you love, would you want it to be safe, confidential, and effective?

Follow for more related content. Check out my Medium page for more articles and information.
