Digital Safety Nets: When Our Devices Know More About Us Than We Know About Ourselves

At the end of January, Facebook announced it would partner with the suicide prevention group Save.org. The goal of the partnership is to research the online behaviors of suicide victims in the days and months leading up to their deaths in order to identify patterns that could signal suicide risk.

If successful, the research will identify the common behavioral signals of those most at risk of suicide before they act. That, in turn, could lay the groundwork for the implementation of what I’ll call “digital safety nets” and aid in the prevention of suicides. These digital safety nets are really just triggers that key a series of digital (and ultimately physical) responses to a given risk. At present, a vast amount of research is going into the creation of digital safety nets for a swath of risks.
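To make the idea a bit more concrete, here’s a minimal sketch of my own of how such a trigger-and-response chain might look. The signals, weights, and thresholds are entirely hypothetical and not anything Facebook or Save.org has described.

```python
from dataclasses import dataclass

# Hypothetical behavioral signals a platform might already collect.
@dataclass
class DailySignals:
    late_night_activity: float   # share of activity between 1am and 5am
    outbound_messages: int       # messages sent to friends that day
    flagged_post_score: float    # 0..1 score from a text classifier

def risk_score(day: DailySignals) -> float:
    """Toy risk score: the weights and inputs are placeholders, not a validated model."""
    return (0.4 * day.late_night_activity
            + 0.3 * day.flagged_post_score
            + 0.3 * (1.0 if day.outbound_messages == 0 else 0.0))

def safety_net(day: DailySignals) -> str:
    """Escalating responses keyed by the trigger: digital first, physical last."""
    score = risk_score(day)
    if score > 0.8:
        return "notify a trained responder for direct outreach"  # physical response
    if score > 0.5:
        return "surface crisis-line resources in the app"        # digital response
    return "no action"

print(safety_net(DailySignals(late_night_activity=0.7, outbound_messages=0, flagged_post_score=0.6)))
```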

I’ve long been intrigued by the work being carried out by researchers at Northwestern, who have been working with a mobile app platform they created called “Mobilyze.”  The platform is designed to help those suffering from depression by prompting users to make changes in their surroundings and/or behavior to reduce or eliminate depressive symptoms. It is also designed to identify the patient’s state and deliver intervention prompts like text messages or phone calls.
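As a rough illustration of the kind of rule such a platform could apply (this is my own toy sketch, not Mobilyze’s actual logic, and the sensor features and cutoffs are invented):

```python
from typing import Optional

def suggest_prompt(hours_at_home: float, places_visited: int, calls_made: int) -> Optional[str]:
    """Toy rule: infer a withdrawn day from phone sensors and suggest a small behavior change."""
    withdrawn = hours_at_home > 20 and places_visited <= 1 and calls_made == 0
    if withdrawn:
        return "You've been home most of the day. A short walk or a call to a friend might help."
    return None  # no intervention needed

print(suggest_prompt(hours_at_home=22, places_visited=1, calls_made=0))
```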

Many of the personal connected devices we see today take what we might already know and provide the information back to us in more exact terms.  For example, we know we exercised or walked or biked, but a fitness device with an embedded GPS can tell us exactly how far we went and how much we did.  What we are seeing with the creation of digital safety nets is slightly different. Here, connected devices might take what we ourselves don’t know and turn it into insight: they could develop the capability of identifying our own tells for us.
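The “exactly how far” part is just arithmetic over the GPS trace. Here’s a quick sketch of how a device might sum the distance between successive fixes using the haversine formula; the coordinates below are made up.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def track_distance_km(fixes):
    """Sum the leg distances along an ordered list of (lat, lon) points."""
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

# A short, made-up morning run recorded as GPS fixes.
run = [(42.056, -87.675), (42.058, -87.673), (42.061, -87.671)]
print(f"{track_distance_km(run):.2f} km")
```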

Yes, there are of course risks of false positives.  But the alternatives are much worse.  And the algorithms should get better and more refined (perhaps I’ll write a bit more about why this might not actually be the case in a future post). And while we are currently focused on behaviors with large negative outcomes, we could theoretically alter any behavior: digitize it, pool the signals through inverted crowdsourcing or perhaps quasi probe data, and then apply digital prompts that exert influence on the behavior we are seeking to alter.
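To put a rough number on the false-positive worry, here’s a toy base-rate calculation with entirely invented figures: when the behavior being flagged is rare, even a fairly accurate detector produces far more false alarms than true catches.

```python
# Toy base-rate illustration; every number here is assumed, not measured.
population = 1_000_000
prevalence = 0.001          # 0.1% of users are genuinely at risk (assumed)
sensitivity = 0.90          # detector catches 90% of true cases (assumed)
false_positive_rate = 0.02  # flags 2% of users who are not at risk (assumed)

at_risk = population * prevalence
true_positives = at_risk * sensitivity
false_positives = (population - at_risk) * false_positive_rate

print(f"true positives:  {true_positives:,.0f}")
print(f"false positives: {false_positives:,.0f}")
print(f"share of flags that are real: {true_positives / (true_positives + false_positives):.1%}")
```

Under those made-up assumptions, only about one flag in twenty points to a real case, which is why what happens after the trigger fires matters as much as the detector itself.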