
Facebook has been in the news several times for how it has handled its data and other business practices. This will be an ongoing series that looks at how these incidents relate to cyber security and user privacy.
Cambridge Analytica is a data mining and analysis company that has found itself in the news lately for how it used Facebook’s data while developing an app. Facebook has landed in even hotter water over the crisis, which has exposed just how poorly the company handled the situation. Facebook has struggled as of late with negative press from various scandals: its auto-complete search bar directing people toward porn, issues with Russian bots, an algorithm change that keeps advertisers from reaching all of their customers, and now Cambridge Analytica.
Cambridge Analytica worked with Facebook as a third-party developer, using Facebook’s data to build an application, something Facebook encourages. It benefits Facebook immensely when companies develop useful things from the data and services it possesses; doing so invites more investment and raises Facebook’s value. What went wrong with Analytica is that, while it was permitted to view and collect some data on a limited pool of Facebook users, it extended its reach far deeper into Facebook’s user base than it was supposed to. Cambridge Analytica ended up collecting data on 50 million Facebook users instead of the fewer than 5 million it was authorized to access. Cambridge collected its data by administering personality tests; when you gave permission to be tested, it also collected the data Facebook had on you. That’s not too abnormal, but Cambridge also included a small checkbox that granted it access to your friends’ data as well. Using this tactic, it was able to quickly expand its pool far beyond the original scope.
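To make that fan-out mechanism concrete, here is a minimal sketch in Python of how a third-party app could expand from one consenting user to that user’s entire friend list under the permission model Facebook offered at the time. The API version, endpoints, and fields below are illustrative assumptions about that old model, not Cambridge Analytica’s actual code.

```python
import requests

# Pre-2015 Graph API version that still exposed friend-level permissions.
# (Illustrative; not a live endpoint today.)
GRAPH = "https://graph.facebook.com/v1.0"

def harvest(user_token):
    """Collect the consenting user's profile, then walk their friend list.

    Under the old permission model, an app granted extended permissions
    such as friends_likes could read limited data about each friend,
    i.e., people who never installed the app themselves.
    """
    me = requests.get(f"{GRAPH}/me",
                      params={"access_token": user_token}).json()
    profiles = [me]

    # One consenting quiz-taker exposes their whole friend list, which is
    # how a small pool of test-takers fans out to tens of millions.
    friends = requests.get(f"{GRAPH}/me/friends",
                           params={"access_token": user_token}).json()
    for friend in friends.get("data", []):
        detail = requests.get(f"{GRAPH}/{friend['id']}",
                              params={"access_token": user_token}).json()
        profiles.append(detail)
    return profiles
```

The key point the sketch illustrates is that only the quiz-taker ever clicked “allow”; everyone reachable through the friends call was swept up without being asked.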
When third-party developers work with Facebook they sign several agreements, one of which stipulates that the developer won’t sell the data they collect or allow outside parties access to it. That’s how Facebook protects the privacy of its users, at least in theory. What the Cambridge Analytica scandal has brought to light is how poorly Facebook has enforced that policy, and it raises doubts about how well Facebook has ensured that other developers have kept their word about the correct usage of its data. Cambridge Analytica sold the data (and the profiles it built from that data) to several political groups, including Ted Cruz’s and Donald Trump’s campaigns. On top of collecting data from more people than it was supposed to, this constitutes a massive breach of its contract, because it violates the confidentiality agreement it had with Facebook.

It’s becoming increasingly clear that Facebook knew about the issue but failed to follow up on it in an appropriate manner. Facebook could have audited Cambridge Analytica, something it has the right to do at any time under their agreement. Instead, Facebook ordered Cambridge Analytica to certify that the data hadn’t been used illegally and had been deleted from all Cambridge Analytica devices, and it accepted the signed documents even as it became apparent that Cambridge Analytica had failed to comply. Now Facebook is preparing its legal team to take Cambridge Analytica to court over the issue, but the damage to its reputation has already been done. Facebook’s stock has fallen 10% and #DeleteFacebook has been trending because of this crisis.