
Researchers explore social media to monitor mental illness

Researchers at the University of Ottawa are tapping into social media data to detect and monitor mental illness warning signs.

The French and Canadian research team led by engineering professor Diana Inkpen is mining social media data to identify at-risk individuals by analyzing their mental states through novel algorithms.

Backed by a three-year, $464,100 grant from the Natural Sciences and Engineering Research Council of Canada (NSERC), the project will combine “natural language processing, data mining, social media processing and medical informatics, in both English and French,” according to the university’s announcement.

“We will investigate one application scenario for our predictive model, which will be used to identify at-risk individuals in online communities,” said Inkpen, adding, “the model will also be used by psychologists and psychiatrists to identify variables related to major mental illness.”
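
The announcement doesn’t spell out how the predictive model works, but a common baseline for this kind of text mining is a supervised classifier trained on labeled posts. The sketch below is purely illustrative, not the Ottawa team’s method: it uses scikit-learn to learn a bag-of-words signal for at-risk language from a handful of invented examples.

```python
# Illustrative baseline for flagging at-risk posts; NOT the uOttawa team's
# actual model. Training data here is invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 1 = at-risk language, 0 = neutral.
posts = [
    "I can't sleep and nothing feels worth doing anymore",
    "had a great hike with friends this weekend",
    "I feel hopeless and alone every single day",
    "excited to start my new job on monday",
]
labels = [1, 0, 1, 0]

# TF-IDF word and bigram features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post; a real system would route high scores to a human
# reviewer, never to an automatic diagnosis.
new_post = ["lately everything feels pointless"]
print(model.predict_proba(new_post)[0][1])  # probability of the at-risk label
```

A real deployment would train on far more data and, as Inkpen’s scenario suggests, surface flagged posts to psychologists and psychiatrists rather than issue diagnoses on its own.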

With more than 2.2 billion active social media users in the world, roughly 30% of the global population, screening for mental illness across platforms is a massive undertaking; however, online users have shown that they are more than willing to post personal information about their moods and activities, and that data is just waiting to be tapped.

Facebook’s launch of emotional reactions alongside the “Like” button is one example: users’ reactions can be analyzed directly for overall trends in mood and emotion.
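
Even without reading the text of a post, reaction tallies allow a crude mood signal. The snippet below is a hypothetical illustration, with invented counts and field names rather than Facebook’s actual API, showing how the share of “Sad” and “Angry” reactions could be aggregated per post:

```python
# Hypothetical reaction tallies for three posts; the field names are
# invented for illustration, not taken from Facebook's API.
posts = [
    {"like": 120, "love": 40, "sad": 5, "angry": 2},
    {"like": 30, "love": 3, "sad": 25, "angry": 18},
    {"like": 80, "love": 10, "sad": 1, "angry": 0},
]

NEGATIVE = ("sad", "angry")

for i, reactions in enumerate(posts):
    total = sum(reactions.values())
    # Share of reactions expressing negative emotion.
    negative_share = sum(reactions[r] for r in NEGATIVE) / total
    print(f"post {i}: {negative_share:.0%} negative reactions")
```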

Although it seems innocent enough, that reaction data is sold for advertising purposes: when people react strongly with options like “Love,” “Sad,” or “Angry,” outside parties who purchase the data from Facebook use it to identify target audiences and potential customers.

With regard to mental health, the US government is already proposing to screen all adults, including pregnant women, for depression.

According to the US Preventive Services Task Force (USPSTF), “All positive screening results should lead to additional assessment that considers severity of depression and comorbid psychological problems (eg, anxiety, panic attacks, or substance abuse), alternate diagnoses, and medical conditions.”

In this sweeping description, anxiety is considered a mental illness, and the task force’s recommended treatments include “antidepressants or specific psychotherapy approaches (eg, CBT or brief psychosocial counseling), alone or in combination.”

What is concerning is that nothing is said about whether people actually have a choice about being put on antidepressants or sent to counseling, and that raises serious issues of consent.

A person may be suffering from anxiety because of an isolated event, but if they are diagnosed as depressed, they could be sent for treatment automatically, without any regard for consent.

“The task force says one key is that appropriate follow-up be available to accurately diagnose those flagged by screening — and then to choose treatments that best address each person’s symptoms with the fewest possible side effects.”

Notice that the USPSTF makes no mention of reviewing options for the “patient” being screened. The only proposal is “treatment” through medication and/or counseling.

And what does the US government propose as a means of prevention?

The government recommends “collaborative care for the management of depressive disorders as part of a multicomponent, health care system–level intervention that uses case managers to link primary care providers, patients, and mental health specialists.”

This means more trips to mental health experts and more taxpayer money, all under the umbrella of a massive, population-wide screening of American adults that leaves little room for in-depth, case-by-case analysis.

While screening social media for mental health may have benefits, it may also inadvertently send someone to be screened for depression, and then medicated, all because they were having a tough time and decided to write about it on social media.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
