The Facebook data privacy story continues to be in the headlines this week. For many of us in IT, this event is not really a surprise. The sharing of data from social media is not a data breach; it’s a business model. Social media developers make apps (often quizzes and games) that harvest data in alignment with social networks’ terms of service. By default, these apps can get profile information about the app users and their friends/contacts, and there are no granular consent options for users. What gives this story its outrage factor is the onward sharing of Facebook user data from one organization to another, and the political purposes for which the data was used. Facebook now admits that the data of up to 87 million users was used by Cambridge Analytica. If you are a US-based Facebook user and are curious about how Facebook has categorized your politics, go to Settings | Ads | Your Information | Your Categories | US Politics.
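To make the mechanism concrete (without claiming to reproduce Facebook’s exact API surface, which has changed repeatedly over the years), the sketch below shows roughly how a quiz-style app could pull a consenting user’s profile fields and friend list through a graph-style API once it holds an access token. The API version, field names, and permission behavior here are illustrative assumptions, not a description of the current platform.

```python
import requests

# Base URL and API version are assumptions for illustration only.
GRAPH = "https://graph.facebook.com/v2.0"

def harvest_profile(user_token: str) -> dict:
    """Fetch profile fields the app user granted at login (field names are illustrative)."""
    resp = requests.get(
        f"{GRAPH}/me",
        params={"fields": "id,name,likes,location", "access_token": user_token},
    )
    resp.raise_for_status()
    return resp.json()

def harvest_friends(user_token: str) -> list:
    """Fetch the user's friend list. Under earlier platform rules, friend-level
    permissions could expose data about people who never installed the app themselves."""
    resp = requests.get(f"{GRAPH}/me/friends", params={"access_token": user_token})
    resp.raise_for_status()
    return resp.json().get("data", [])
```

The point is not the specific calls: once a user clicks through the permission dialog, everything within the granted scope flows to the developer’s servers, beyond the platform’s control.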
But data made available through unsecured APIs, usually exported in unprotected file formats without fine-grained access controls or DRM, cannot be assumed to be secure in any way. Moreover, the Facebook – Cambridge Analytica incident is probably just the first of many that are as yet unreported. There are thousands of apps and hundreds of thousands of app developers that have had similar access to Facebook and other social media platforms for years.
CNBC reports that Facebook was attempting to acquire health record data from hospitals, but that those plans are on “hiatus” for the moment. Though the story says the data would be anonymized, there is no doubt that unmasked health care records plus social media profile information would be incredibly lucrative for Facebook, health care service providers, pharmaceutical companies, and insurance companies. But again, according to this report, user consent was not considered.
It is clear that Facebook users across the globe are dissatisfied with the paucity of privacy controls. In many cases, users are opting out by deleting their accounts, since that seems to be the only way at present to limit data sharing. However, the problem of data sharing without user consent is endemic to most social networks, telecommunications networks, ISPs, smartphone OSes and app developers, free email providers, online retailers, and consumer-facing identity providers. They collect information on users and sell it. This is how these “free” services pay for themselves and make a profit. The details of such arrangements are hidden in plain sight in the incomprehensible click-through terms of service and privacy policies that everyone must agree to in order to use the services.
This is certainly not meant to blame the victim. At present, users of most of these services have few if any controls over how their data is used. Even deleting one’s account doesn’t entirely solve the problem: a Belgian court ruled against Facebook for collecting information on Belgian citizens who were not even Facebook users.
The rapidly approaching May 25th GDPR effective date will certainly necessitate changes in the data sharing models of social media and all organizations hosting and processing consumer data for EU persons. Many have wondered if GDPR will be aggressively enforced. As a result of this Facebook – Cambridge Analytica incident, EU Justice Commissioner Vera Jourova said “I will take all possible legal measures including the stricter #dataProtection rules and stronger enforcement granted by #GDPR. I expect the companies to take more responsibility when handling our personal data.” We now have the answer to the “Will the EU enforce GDPR?” question.
It is important to note that GDPR does not aim to put a damper on commerce. It only aims to empower consumers by giving them control over what data they share and how it can be used. GDPR requires explicit consent per purpose (with some exceptions for other legitimate processing of personal data). This consent-per-purpose stipulation will require processors of personal data to clearly ask for and obtain permission from users for each intended use.
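As a rough illustration of what consent per purpose means operationally, a data processor has to record which specific purposes each user has opted into and check that record before any use of the data; consent to one purpose says nothing about another. The sketch below is a minimal, hypothetical consent ledger in Python; the purpose names and structure are assumptions for illustration, not anything mandated by the regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Tracks a user's explicit opt-ins, one entry per processing purpose."""
    user_id: str
    # purpose -> timestamp of explicit opt-in (absence means no consent)
    purposes: dict = field(default_factory=dict)

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = datetime.now(timezone.utc)

    def revoke(self, purpose: str) -> None:
        self.purposes.pop(purpose, None)

    def allows(self, purpose: str) -> bool:
        return purpose in self.purposes

# Consent to ad personalization does not imply consent to sharing with partners.
record = ConsentRecord(user_id="user-123")
record.grant("ad_personalization")
assert record.allows("ad_personalization")
assert not record.allows("third_party_sharing")
```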
Other countries are looking to the GDPR model for revamping their own consumer privacy regulations. We predict that in many jurisdictions, similar laws will come into effect, forcing social networks and consumer-facing companies to change how they do business in more locations.
Even before the Cambridge Analytica story broke, Facebook, Google, and Twitter were under fire for allowing their networks to spread “fake news” during the last US election cycle. Disengagement was growing, with some outlets reporting 18-24% less time spent on site per user. Users are quickly losing trust in social media platforms for multiple reasons. This impacts commerce as well, since many businesses such as online retailers rely on “social logins” from providers such as Facebook, Twitter, and Google.
To counter their growing trust problems, social network providers must build in better privacy notifications and consent mechanisms. They must increase the integrity of content without compromising free speech.
Facebook and other social media outlets must also communicate these intended improvements to privacy controls and content integrity monitoring to their users. For Facebook, this is absolutely paramount to winning back trust. CEO Mark Zuckerberg announced that Facebook is working on GDPR compliance but provided no details. He has agreed to testify before the US Congress, but his unwillingness to appear in person in the UK strengthens the perception that complying with EU data protection regulations is not a top priority for Facebook.
If social network operators cannot adapt in time, they will almost certainly face large fines under GDPR. It is quite possible that the social media industry may be disrupted by new privacy-protecting alternatives, funded by paid subscriptions rather than advertising. The current business model of collecting and selling user data without explicit consent will not last. Time is running out for Facebook and other social network providers to make needed changes.