Joe Wright
The Feds have made their intentions known about conducting surveillance on a range of social media platforms, covering everything from political speech and activist assembly to health. In tandem with corporate data harvesters, open public forums like Twitter and Facebook can provide a wealth of knowledge about general behavior, as well as real-time monitoring of activities.
Troubling new research from the University of Rochester (full press release below) aims to go a step further: turning “any computer or smartphone with a camera into a personal mental health monitoring device.”
That’s right, mental health.
As you’ll see, researchers hope that we will sign on willingly to have our interaction with social media scrutinized by an artificial intelligence algorithm that will evaluate our emotional stability.
What gives this new research an even higher creep factor than simply letting a computer make judgments about our mental well-being is how it could fit into pre-crime policing, which has already begun to tie together social media posts, health databases, and gun purchase records to identify those who are possibly “near the breaking point.” Combine this with the language embedded in ObamaCare that specifically cites “predictive modeling” for a range of mental disorders (ever expanding in definition, of course), and it is not difficult to heed the obligatory warning from these researchers:
…using this system means “effectively giving this app permission to observe you constantly,” but adds that the program is designed for the use of the user only and does not share data with anyone else unless otherwise designated by the user. (emphasis added)
Gee, where have we heard that before? Time and again, we have seen how the choice to opt-in for new technology is slowly eroded to the point where it becomes either pervasive through widespread adoption, or it simply becomes mandatory at a later date.
Press Release (my emphasis added)
Researchers at the University of Rochester have developed an innovative approach to turn any computer or smartphone with a camera into a personal mental health monitoring device.
In a paper to be presented this week at the American Association for Artificial Intelligence conference in Austin, Texas, Professor of Computer Science Jiebo Luo and his colleagues describe a computer program that can analyze “selfie” videos recorded by a webcam as the person engages with social media.
Apps to monitor people’s health are widely used, from monitoring the spread of the flu to providing guidance on nutrition and managing mental health issues. Luo explains that his team’s approach is to “quietly observe your behavior” while you use the computer or phone as usual. He adds that their program is “unobtrusive; it does not require the user to explicitly state what he or she is feeling, input any extra information, or wear any special gear.” For example, the team was able to measure a user’s heart rate simply by monitoring very small, subtle changes in the user’s forehead color. The system does not grab other data that might be available through the phone – such as the user’s location.
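(For readers curious how a webcam can read a pulse: the general technique is known as remote photoplethysmography, where tiny frame-to-frame color changes in the skin track blood flow. The press release does not publish the researchers’ pipeline, so the following is only a rough sketch of the idea, assuming a fixed forehead region and a standard 30 fps webcam.)

```python
# Rough sketch of heart-rate estimation from subtle forehead color changes
# (remote photoplethysmography). The region of interest and frame rate are
# assumptions; this is NOT the researchers' published method.
import numpy as np
import cv2

FPS = 30                                   # assumed webcam frame rate
ROI = (slice(50, 90), slice(120, 200))     # hypothetical forehead region (rows, cols)

def mean_green(frame):
    """Average green-channel intensity inside the forehead region."""
    return frame[ROI][:, :, 1].mean()      # OpenCV frames are BGR; index 1 = green

def estimate_bpm(green_signal, fps=FPS):
    """Pick the dominant frequency in the 0.7-4 Hz band (42-240 bpm)."""
    signal = np.asarray(green_signal, dtype=float)
    signal -= signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][spectrum[band].argmax()] * 60.0

cap = cv2.VideoCapture(0)
samples = []
while len(samples) < FPS * 20:             # roughly 20 seconds of video
    ok, frame = cap.read()
    if not ok:
        break
    samples.append(mean_green(frame))
cap.release()
print("Estimated heart rate: %.0f bpm" % estimate_bpm(samples))
```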
The researchers were able to analyze the video data to extract a number of “clues,” such as heart rate, blinking rate, eye pupil radius, and head movement rate. At the same time, the program also analyzed both what the users posted on Twitter, what they read, how fast they scrolled, their keystroke rate and their mouse click rate. Not every input is treated equally though: what a user tweets, for example, is given more weight than what the user reads because it is a direct expression of what that user is thinking and feeling.
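The press release does not say exactly how these clues are combined, but conceptually each observation window gets reduced to a single feature vector in which what the user writes counts for more than what the user reads. A hypothetical sketch of that weighting (the field names and weights below are assumptions, not taken from the paper):

```python
# Illustrative only: one way to fold the different "clues" into a feature
# vector, weighting authored tweets more heavily than tweets merely read.
from dataclasses import dataclass

@dataclass
class Clues:
    heart_rate: float          # beats per minute
    blink_rate: float          # blinks per minute
    pupil_radius: float        # pixels
    head_movement: float       # movements per minute
    scroll_speed: float        # pixels per second
    keystroke_rate: float      # keys per minute
    click_rate: float          # clicks per minute
    authored_sentiment: float  # sentiment of what the user tweeted, -1..1
    read_sentiment: float      # sentiment of what the user read, -1..1

AUTHORED_WEIGHT = 2.0   # hypothetical: a direct expression counts double
READ_WEIGHT = 1.0

def feature_vector(c: Clues) -> list[float]:
    """Flatten one observation window into the inputs a classifier would see."""
    return [
        c.heart_rate, c.blink_rate, c.pupil_radius, c.head_movement,
        c.scroll_speed, c.keystroke_rate, c.click_rate,
        AUTHORED_WEIGHT * c.authored_sentiment,
        READ_WEIGHT * c.read_sentiment,
    ]
```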
To calibrate the system and generate a reaction they can measure, Luo explained, he and his colleagues enrolled 27 participants in a test group and “sent them messages, real tweets, with sentiment to induce their emotion.” This allowed them to gauge how subjects reacted after seeing or reading material considered to be positive or negative.
They compared the outcome from all their combined monitoring with the users’ self-reports about their feelings to find out how well the program actually performs, and whether it can indeed tell how the user feels. The combination of the data gathered by the program with the users’ self-reported state of mind (called the ground truth) allows the researchers to train the system. The program then begins to understand, from just the data gathered, whether the user is feeling positive, neutral or negative.
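In machine-learning terms, this is ordinary supervised training: the sensed features are paired with the self-reported labels and handed to a classifier. The release does not name the model the researchers used, so the sketch below simply plugs in an off-the-shelf classifier on synthetic data to show the shape of the step:

```python
# Generic sketch of the training step: feature vectors gathered by the
# program, paired with each user's self-reported mood (the "ground truth").
# Logistic regression and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(27 * 10, 9))   # e.g. 27 participants x 10 windows, 9 features each
y = rng.choice(["negative", "neutral", "positive"], size=len(X))  # self-reports (synthetic here)

model = LogisticRegression(max_iter=1000).fit(X, y)

# After training, the program infers mood from the sensed signals alone.
new_window = rng.normal(size=(1, 9))
print(model.predict(new_window))
```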
Their program currently only considers emotions as positive, neutral or negative. Luo says that he hopes to add extra sensitivity to the program by teaching it to further define a negative emotion as, for example, sadness or anger. Right now, this is a demo program they have created and no “app” exists, but they have plans to create an app that would let users be more aware of their emotional fluctuations and make adjustments themselves.
Luo understands that this program and others that aim to monitor an individual’s mental health or well-being raise ethical concerns that need to be considered. He adds that using this system means “effectively giving this app permission to observe you constantly,” but adds that the program is designed for the use of the user only and does not share data with anyone else unless otherwise designated by the user.