Madeleine Finlay


The opportunities and dangers of predicting mental health using social media: should one status reveal the other?


It’s time to have a conversation about diagnosing emotions using the parts of ourselves we put online

As a science journalist, I try as much as possible to keep up with what’s happening in the world of mental health, whether it’s a change in our understanding of treatments, or what’s going on in our bodies when we’re depressed. Alongside this, ever since I wrote my master’s dissertation on the narratives surrounding “big data”, I’ve had a personal obsession with the wild west that is big-data tech. So when a friend got in touch a few months ago about a strange mental-health experience on Instagram, their story really piqued my interest.

They had messaged a group chat on WhatsApp to say they’d seen a lot of adverts and posts on their Instagram about ADHD, and it seemed as if some of the symptoms applied. My friend had been feeling easily distracted, bored, and unable to complete work tasks at their normal capacity. The group reassured them that, in the midst of a pandemic, these were pretty typical experiences. Being trapped inside with reduced social contact, increased workloads and anxiety-inducing news appearing on our phones throughout the day is arguably the opposite of an environment conducive to efficiency and deep concentration.

It was probably something to monitor, we said. And definitely speak to a doctor about it if you’re concerned. But, they replied, why is Instagram showing me this stuff? Has an algorithm decided that my behaviours or habits are associated with ADHD?

Does Instagram know something I don’t?



The answer is, in all probability, no. My friend is highly engaged with mental health content, and with a recent rise in awareness of ADHD, the algorithms were most likely just doing what they always do: predicting what my friend might be interested in.


But it got me thinking. The suggestion that a social media algorithm might have detected a mental health disorder and then shown content about it didn’t seem outside the realms of possibility. And that’s because it’s not. Predicting mental health status from our social media posts, based on what we say and how we say it, is a rapidly growing area of research. Studies have analysed data from platforms like Facebook, Twitter and Reddit to try to predict a range of mental health disorders, including depression, anxiety, eating disorders, suicidal ideation, PTSD, and schizophrenia. Some of them report being able to do it with surprising accuracy.
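To give a flavour of what that kind of research involves (and only a flavour: this is a toy sketch with made-up posts and invented labels, not how any real study or platform works), the basic move is to turn the text of posts into numerical features and train a classifier on them:

```python
# Toy illustration only: a bag-of-words classifier over invented posts.
# Nothing here reflects an actual study, dataset, or platform system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "can't sleep again, everything feels pointless",
    "had a great run this morning, feeling good",
    "so tired of pretending I'm fine",
    "excited for the weekend trip with friends",
]
labels = [1, 0, 1, 0]  # 1 = 'at risk' label, 0 = not (labels invented for this example)

# Turn each post into word-frequency features, then fit a simple classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Ask the model about a new post it has never seen
print(model.predict(["nothing feels worth doing lately"]))  # likely [1]
```

Real research uses far richer signals and clinically grounded labels, but the basic move is the same: patterns in what we write, mapped onto judgements about our minds.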


And why wouldn’t researchers want to develop tools that could take social media posts and make diagnoses, or even predict who is at risk of developing a mental health disorder? Digital epidemiology, as it’s come to be known, opens up opportunities for monitoring and addressing conditions that are under-diagnosed and under-treated. It presents a chance to reach the many people who, for a plethora of reasons, might not want to go to the doctor.


Huge datasets with public access are therefore very appealing to clinicians and computer scientists alike. If it worked well, predicting when someone is feeling mentally unwell and intervening could save lives, increase access to treatment (as well as allow researchers to monitor how well treatments work), or catch people before they become seriously ill. In the early days of developing a mental health disorder, a nudge in a different direction or towards help might be all someone needs.


But despite all the potential positives, there are significant ethical questions and downsides to consider too. Below are the ones I think are the biggest, and perhaps hardest, to answer.

1. Should there be limits on what our social media posts are used for without our knowledge or consent? Or, just because the data can be analysed, does that mean it should be?

Think about it like this. When you go to a park, you’re perfectly aware that people will be able to see you, hear you, and watch you. You’re in a public place, so fair enough. But what if there were someone with a clipboard hiding behind a tree, writing down everything you were doing? If you spotted the clipboarder you might not want to be in the park anymore, or you might feel self-conscious about what you were doing. You wanted to be out in public, sure, but you didn’t go there to be monitored. And I’m sure I don’t need to tell you this, but in the real world, it would be extremely unethical for researchers to hide behind trees and watch people without their consent.

2. Who and what is the data being modelled on?

We all use social media differently. I expect there is a huge range in how and why people use social media and what they say, depending on where they’re from, their class or their culture. I’ve got no doubt women express themselves differently to men (lest we be trolled). I’m sure the same applies to people who aren’t white. Any analysis needs to take into account whose data the algorithms are being built from; otherwise we end up recreating an old medical problem, where the default patient is the white, western, middle-class male.

3. How will this data be used?

This is a big question, with innumerable sub-questions leading off from it. I’ll run through a few. Imagine the algorithms are being used, you’ve given your consent, and the AI discovers you’ve got depression. What’s done with that information? Does your feed change? Do you get links to charities? Should you be told directly? Who else should be informed? If you’re told, how? In a pop-up? By email? What should that notification say? Should there be a follow up? If your GP was notified, how would they be expected to respond? What if you were hacked and that information was made public or used against you?



4. Who will be regulating the technology?


Depending on who owns and runs the technology, there are big questions as to who should be monitoring the algorithms, and where responsibility lies if things go wrong. Doctors and nurses go through years of medical training and are then registered with regulators. There are standards and checks. But what happens when a piece of software is playing clinician? Who checks that the algorithm meets proper standards, and who even sets those standards? Who gets blamed if something goes awry?


5. Where is the technology headed?


Predicting emotions doesn’t just apply to mental health, and isn’t only being attempted through what we say on social media. The tech researcher Kate Crawford recently wrote a fascinating article in Nature about how the pandemic is being used as a reason to push visual emotion-recognition software, which is both unproven and unregulated, into workplaces and schools. I have a resting-frown face in meetings (despite my parents’ warnings of early-onset wrinkles). I imagine an AI would think I was angry or upset, when (usually) I’m not. And that’s probably the least-bad outcome. Unproven and unverified AI software that claims to read our emotions, coming to a job interview, advertising company, or airport security gate near you? It’s a good premise for a dystopian novel.


6. How accurate is accurate enough?


This takes me back to my friend. Let’s say an algorithm could predict your mental health status with 99% accuracy. Deployed across a platform like Facebook, with a couple of billion users, that still leaves an awful lot of people getting an incorrect diagnosis or being directed to organisations they don’t need help from; a 1% error rate on two billion people means tens of millions of wrong calls. Our relationships with technology mean we often think our devices are infallible. I reckon even if I felt mentally well, if my Twitter feed started showing lots of posts about getting help for depression, I might start to second-guess myself, or worry.
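To put rough numbers on that, here is a back-of-the-envelope calculation. Every figure in it is an assumption chosen purely for illustration (the number of users, the prevalence, the sensitivity and specificity), not a real statistic about any platform or condition:

```python
# Back-of-the-envelope only: what "99% accurate" could mean at Facebook scale.
# Every number below is an illustrative assumption, not a real statistic.

users = 2_000_000_000      # "a couple billion" users (assumption)
prevalence = 0.05          # assume 5% of users actually have the condition
sensitivity = 0.99         # assume the model flags 99% of true cases
specificity = 0.99         # assume it clears 99% of people without the condition

have_condition = users * prevalence
do_not_have = users - have_condition

true_positives = have_condition * sensitivity       # people correctly flagged
false_positives = do_not_have * (1 - specificity)   # people wrongly flagged

print(f"Correctly flagged:   {true_positives:,.0f}")    # ~99,000,000
print(f"Incorrectly flagged: {false_positives:,.0f}")   # ~19,000,000
```

Even with a charitably accurate model, the people wrongly flagged number in the tens of millions simply because the platform is so big, and if you drop the assumed prevalence below about 1%, those wrongly flagged start to outnumber the people flagged correctly.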


What next?


Bearing in mind the tech is in development, all these questions, and many, many more, need to be thoroughly discussed by mental health professionals, computer scientists, clinicians, organisations representing vulnerable or minority groups, ethicists, and the public. And pretty soon too.
