PC Magazine | May 2020
What’s a society to do? Ours has begun clamoring for boycotts, for regulation, and even for breaking up the biggest tech giants. For a decade (or two), the tech industry, led by its largest, most successful companies, has painted attempts to regulate it as stifling innovation, an impediment to the utopian “technology will solve everything” future these benevolent founders seek to build. Maybe that’s true, but considering the aforementioned abuses, the “Don’t be evil” edict seems to hold less water, and #deletefacebook might finally be having its moment.
Presidential candidates have made trust-busting a part of their platforms. Europe and California have instituted legislation designed to allow citizens greater control over their personal data and how it’s used. Other states are following suit, buoyed by bipartisan support. It feels like major tech regulation is coming, but whether it’s a culmination of decades of regulatory decisions or just a step on the path is unclear.
‘FREE’ ISN’T FREE
You probably know some of the basics of how internet advertising targets its viewers. Sometimes, ads might seem a little too relevant, leading you to wonder whether your phone is listening to your conversations. You feel uneasy about this, even as you admit that you’d rather see ads for stuff you like than for something completely uninteresting to you. From the advertisers’ perspective, it’s much more efficient to target just a few people and make sure those people see their ads rather than waste time and money putting ads in front of people who don’t need or care about what’s being sold. The companies that do this can even track whether a user who has seen a particular ad then visits the store that’s being promoted.
We’ve settled into a “freemium” model: In exchange for our data, we get to use free services, including email and social media. This is how companies such as Facebook make money and still provide us with the services we enjoy (although research has shown that spending more time on Facebook makes you less happy, rather than more).
But there’s more than one reason to be concerned about letting our personal data be sucked up by tech companies. There are many ways the wholesale gathering of data is being abused or could be abused, from blackmail to targeted harassment to political lies and election meddling. It reinforces monopolies and has led to discrimination and exclusion, according to a 2020 report from the Norwegian Consumer Council. At its worst, it disrupts the integrity of the democratic process (more on this later).
Increasingly, private data collection is described in terms of human rights—your thoughts and opinions and ideas are your own, and so is any data that describes them. Therefore, the collection of it without your consent is theft. There’s also the security of all this data and the risk to consumers (and the general public) when a company slips up and some entity—hackers, Russia, China—gets access to it.
“You’ve certainly had a lot of political chaos in the US and elsewhere, coinciding with the tech industry finally falling back to Earth and no longer getting a pass from our general skepticism of big companies,” says Mitch Stoltz, a senior staff attorney at the Electronic Frontier Foundation. “If so many people weren’t getting the majority of their information about the world from Facebook, then Facebook’s policies about political advertising (or most anything else) wouldn’t feel like life and death.”
Policy suggestions include the Honest Ads Act, first introduced in 2017 by Senators Amy Klobuchar, Mark Warner, and John McCain, which would require online political ads to carry information about who paid for them and whom they targeted, similar to how political advertising works on TV and radio. It was in part a response to revelations that Russian operatives had purchased political ads on social media during the 2016 election.
CAMBRIDGE ANALYTICA BLOWS UP
It’s easy to beat up on Facebook. It’s not the only social network with questionable data-collection policies, but it is the biggest. Facebook lets you build a personal profile, connect that profile to others, and communicate via messages, posts, and responses to others’ posts, photos, and videos. It’s free to use, and the company makes its money by selling ads, which you see as you browse your pages. What could go wrong?
In 2013, a researcher named Aleksandr Kogan developed an app version of a personality quiz called “thisisyourdigitallife” and started sharing it on Facebook. He paid users to take the test, ostensibly for the purposes of psychological research. This was acceptable under Facebook policy at the time. What wasn’t acceptable (according to Facebook, although whistleblowers in the documentary The Great Hack suggest the company gave its tacit approval) was that the quiz didn’t just record your answers—it also scraped your data, including your likes, posts, and even private messages. Worse, it collected data from all your Facebook friends, whether or not they took the quiz. Facebook later estimated that the profiles of up to 87 million people were harvested.
Kogan was a researcher at Cambridge University, as well as St. Petersburg State University, but he shared that data with Cambridge Analytica. The company used the data to create robust psychological profiles of people and target some of them with the kinds of political ads that were most likely to influence them. Steve Bannon, who was Cambridge Analytica’s vice president, brought this technique and data to the Trump 2016 campaign, which leveraged it to sway swing voters, often on the back of dubious or inflammatory information. A similar tactic was employed by the company in the 2016 “Brexit” referendum.
In 2018, data consultant and former Cambridge Analytica employee Christopher Wylie blew the whistle on the company. This set off a chain of events that would land Facebook in the hot seat and Mark Zuckerberg in front of the Senate Commerce and Judiciary Committees.
Giving this the best possible spin, it’s a newer, better version of what President Obama’s campaign did: leveraging clever social-media techniques and new technology to build a smoother, more effective political-advertising industry—occasionally underhanded but not outright illegal or immoral—and one everyone would soon be using.
A darker interpretation: It’s “weaponized data,” as the whistleblowers have called it—psyops that borrow information-warfare techniques from institutions like the Department of Defense to turn our own information against us. It corrupts our democratic process to the point that we can’t tell whether we voted for (or against) something because we believed in it or because a data-fueled AI knew just which psychological lever to push. Even applied to advertisements, this is scary. Did I buy a particular product because its manufacturer knew just how and when to make me want it? Which of our decisions are really our own?
“You might say ‘Well, what happened before the last election—that was pretty darn malicious,’” says Vasant Dhar, a professor of data science at the NYU Stern School of Business. “Some people might say, ‘I don’t know—that wasn’t that malicious, there’s nothing wrong with using social media for influence; and besides, there’s no smoking gun, there’s no proof that it actually did anything.’ And that’s a reasonable position too.”
The irony is that Facebook was sold to its early users as a privacy-forward service. You might remember MySpace and how it faded to oblivion after Facebook became available. That wasn’t an accident; Facebook intentionally painted itself as an alternative to the wide-open world of MySpace.
HOW THINGS WENT WONKY