Bad therapy, AI and mental health, & Jung’s shadow
It is tough to come by any bad press for therapy. Which is very, very odd. After all, therapists are human, and science is a work in progress; which means that it cannot always be rosy cheeks and dimpled chins. To set the context, my need to look into this is purely research for fiction. Satori, when in the throes of depression, could potentially visit, or be visited by, a therapist. And to construct a scenario where things could go wrong for him, in the style of Voltaire's Candide, I was looking for situations where the proverbial shit could hit the fan.
But it is not so simple. There is abundant praise for the system as an unfaltering, good-for-all tool. But is it really?
And I found it: 'Bad Therapy'. It is not exactly what I was looking for, but it has some examples.
It opens with, “In 2000, Jeane Newmaker took her adopted 10-year-old daughter Candace to an ‘attachment therapy’ retreat designed to increase their emotional bond. While there, Candace underwent an intervention that’s supposed to replicate the birthing process. Therapists wrapped her in a flannel sheet and covered her with pillows to simulate a womb or birth canal. Then they instructed her to fight her way out while four adults (weighing nearly 700 lbs in total) tried to stop her. Candace complained and screamed for help and air, unable to escape from the sheet. After 70 minutes of struggling, pleading that she was dying, and vomiting and excreting inside the sheet, Candace died of suffocation. This tragic case highlights an important but often overlooked aspect of psychological interventions designed to help people – sometimes they can be harmful, or even fatal.”
That is tragic. A human kind of tragic. Now imagine a scenario where an AI therapist predicts the state of your mental health based on your medical records. It was a hypothetical until I found this very disturbing abstract in Nature Medicine: 'Machine learning model to predict mental health crises from electronic health records'.
It suggests that “The timely identification of patients who are at risk of a mental health crisis can lead to improved outcomes and to the mitigation of burdens and costs. However, the high prevalence of mental health problems means that the manual review of complex patient records to make proactive care decisions is not feasible in practice. Therefore, we developed a machine learning model that uses electronic health records to continuously monitor patients for risk of a mental health crisis over a period of 28 days. The model achieves an area under the receiver operating characteristic curve of 0.797 and an area under the precision-recall curve of 0.159, predicting crises with a sensitivity of 58% at a specificity of 85%. A follow-up 6-month prospective study evaluated our algorithm’s use in clinical practice and observed predictions to be clinically valuable in terms of either managing caseloads or mitigating the risk of crisis in 64% of cases. To our knowledge, this study is the first to continuously predict the risk of a wide range of mental health crises and to explore the added value of such predictions in clinical practice.”
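If, like me, you do not speak fluent ROC curve, here is a minimal sketch of what those numbers measure. The data, the risk score, and the 3% prevalence below are all made up for illustration; this is not the study's code or data, just the standard scikit-learn way of computing the same kind of metrics.

# Illustration only: synthetic labels and risk scores, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

rng = np.random.default_rng(0)
n = 10_000
y_true = rng.random(n) < 0.03                         # pretend ~3% of cases end in a crisis
y_score = rng.normal(y_true.astype(float) * 0.8, 1)   # a hypothetical, weakly informative risk score

auroc = roc_auc_score(y_true, y_score)                # "area under the receiver operating characteristic curve"
auprc = average_precision_score(y_true, y_score)      # "area under the precision-recall curve"

# "Sensitivity of 58% at a specificity of 85%": pick the threshold where
# specificity (1 - false positive rate) is closest to 0.85 and read off the
# true positive rate at that point.
fpr, tpr, _ = roc_curve(y_true, y_score)
idx = np.argmin(np.abs((1 - fpr) - 0.85))
print(f"AUROC={auroc:.3f}  AUPRC={auprc:.3f}  sensitivity at ~85% specificity={tpr[idx]:.2f}")

The gap between the two areas is the telling bit: when crises are rare, even a decent-looking ROC curve can hide a lot of false alarms, which is roughly what that 0.159 precision-recall figure is saying.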
Do not misunderstand me, I am not against AI. Yes, it is useful. And though the zeitgeist suggests that machine learning will simply solve all of our problems, in my humble opinion it is all a bunch of balls and ketchup. Nothing will save us from ourselves. But the hubris in suggesting that reading complex records is not humanly feasible, so an AI should do it instead, is so cyberpunk!
The pendulum has swung too far. We started with the best of intentions. With Jung, with the notion that we had to embrace our inner demons, our shadow, to bring peace to ourselves. And from there we somehow became obsessed with diagnosis. Everyone has some disorder now. Yes, there is trauma in this world. There always has been and there always will be. But this war on trauma by means of medication feels so wrong.
So here is a gentle reminder from 'Own Your Own Shadow'!