Mental health app privacy language opens up holes for user data


In the world of mental health apps, privacy scandals have become almost routine. Every few months, reporting or research uncovers questionable data sharing practices at apps like the Crisis Text Line, Talkspace, BetterHelp, and others: people gave information to those apps in hopes of feeling better, only to learn that their data was used in ways that helped the companies make money (and didn't help the users).

It seems to me like a twisted game of whack-a-mole. When under scrutiny, the apps often change or adjust their policies — and then new apps or problems pop up. It isn’t just me: Mozilla researchers said this week that mental health apps have some of the worst privacy protections of any app category.

Watching the cycle over the past few years got me interested in how, exactly, that keeps happening. The terms of service and privacy policies on the apps are supposed to govern what companies are allowed to do with user data. But most people barely read them before signing up (hitting accept), and even if they do read them, they're often so complex that it's hard to grasp their implications on a quick read.

“That makes it completely unknown to the consumer about what it means to even say yes,” says David Grande, an associate professor of medicine at the University of Pennsylvania School of Medicine who studies digital health privacy.

So what does it mean to say yes? I took a look at the fine print on a few apps to get an idea of what's happening under the hood. “Mental health app” is a broad category, and it can cover anything from peer-to-peer counseling hotlines to AI chatbots to one-on-one connections with actual therapists. The policies, protections, and regulations vary between all of those categories. But I found two features common to many privacy policies that made me wonder what the point even was of having a policy in the first place.

We can change this policy at any time

Even if you do a close, careful read of a privacy policy before signing up for a digital mental health program, and even if you feel really comfortable with that policy — surprise: the company can go back and change that policy whenever it wants. It might tell you, or it might not.

Jessica Roberts, director of the Health Law and Policy Institute at the University of Houston, and Jim Hawkins, law professor at the University of Houston, pointed out the problems with this type of language in a 2020 op-ed in the journal Science. Someone might sign up with the expectation that a mental health app will protect their data in a certain way and then have the policy rearranged to leave their data open to a broader use than they’re comfortable with. Unless they go back to check the policy, they wouldn’t know.

One app I looked at, Happify, specifically says in its policy that users will be able to choose if they want the new uses of the data in any new privacy policy to apply to their information. They’re able to opt out if they don’t want to be pulled into the new policy. BetterHelp, on the other hand, says that the only recourse if someone doesn’t like the new policy is to stop using the platform entirely.

Having this type of…


