If you’re still considering whether to get off Facebook, here’s one good reason you should

Hint: it’s not because of fake news or Zuckerberg’s inhuman tendencies

Corduroy Bologna
The Startup


Image from this article by the Guardian

I’m writing this for two reasons: because popular opinion is colored by unfounded claims and easy targets, leading to a widespread misunderstanding of what is actually wrong with the current social media ecosystem; and because it’s easier to write this down once than to explain it every time someone asks why I’m no longer using Facebook (the long story isn’t nearly as powerful in short form).

While there are several problems with Facebook, here I will go into only the one I believe is most important, because it is rooted deep in the “DNA” of the platform and hints at its potential to gain even more influence over our lives than it already has. That DNA (not to be confused with the leadership- and personality-based DNA described in this well-written piece) is the ad-based business model, without which Facebook would be nowhere near its current size and dominance.

The recent controversies are only a preview of the power Facebook could exercise if it chose to. In an ideal (non-profit-driven) world, it wouldn’t. But if manipulating users’ opinions and emotions means more revenue, I don’t think any tech company has the empathy to back down. Such companies are profit-driven machines, sustained by their ability to serve users relevant content and thereby win ever more engagement. In Facebook’s case, each like or reaction we make is a data point Facebook uses to understand us, allowing it to serve even more relevant content in a continuous cycle of engagement and targeted advertising. This in itself is not bad. It’s the potential of such data-driven content serving that reveals the true negative implications of Facebook’s business model.

Here’s the cycle broken down (a toy sketch in code follows the list):

  1. Users react to and like posts
  2. Facebook aggregates that activity data to pinpoint each user’s interests
  3. Based on those interests, Facebook feeds the user content that is more likely to garner a reaction
  4. Facebook builds a broad picture of the user’s characteristics (interests, hobbies, political affiliation, intelligence level, sexual orientation, career field, etc.)
  5. Facebook uses this deep knowledge of its users to serve ads that blend in with the content they are accustomed to seeing, so the ads, too, have a very high likelihood of being clicked

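To make the loop concrete, here is a minimal sketch, in Python, of how reaction data can drive both content ranking and ad targeting. Every name and data structure below is invented for illustration; Facebook’s actual systems are vastly more sophisticated and, as discussed below, opaque.

```python
from collections import Counter

# Toy content catalog; every field here is invented for illustration and has
# no relation to Facebook's real systems.
POSTS = [
    {"id": "p1", "topics": ["politics", "news"]},
    {"id": "p2", "topics": ["cooking"]},
    {"id": "p3", "topics": ["politics", "memes"]},
]
ADS = [
    {"id": "ad1", "topics": ["politics"]},
    {"id": "ad2", "topics": ["cooking", "kitchenware"]},
]

def record_reaction(profile: Counter, post: dict) -> None:
    """Steps 1-2: every like or reaction nudges the inferred interest profile."""
    for topic in post["topics"]:
        profile[topic] += 1

def rank_by_relevance(items: list[dict], profile: Counter) -> list[dict]:
    """Steps 3-5: score items by overlap with the profile, so the most
    'relevant' (most clickable) posts and ads surface first."""
    return sorted(
        items,
        key=lambda item: sum(profile[t] for t in item["topics"]),
        reverse=True,
    )

profile: Counter = Counter()
record_reaction(profile, POSTS[0])  # user likes a politics post...
record_reaction(profile, POSTS[2])  # ...and a political meme

print([p["id"] for p in rank_by_relevance(POSTS, profile)])  # politics posts rank first
print([a["id"] for a in rank_by_relevance(ADS, profile)])    # and so does the politics ad
```

Even this toy version shows the feedback dynamic: the more a user reacts, the narrower, and the more clickable, the mix of content and ads becomes.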
A high likelihood of being clicked means a high likelihood that the advertiser sees a return on the ad. From this it’s obvious why Facebook has become one of the world’s largest advertising companies: it can target ads to relevant users so accurately that companies with a product to sell have little reason to advertise anywhere else.

As the stockpiles of data keep growing, Facebook’s knowledge of its users gets better, ad click-through rates get higher, and Facebook makes more money from advertisers.

But surely, Facebook is using our data responsibly, right?

This is the scariest part. For the most part, Facebook assumes an innocent countenance when it comes to the question of data usage. Controversies like that of Cambridge Analytica are passed off as exploitation by a third party, without the knowledge or intentional involvement of Facebook itself. But Facebook’s management of user data is far from innocent.

Given the amount of time the average person spends on the platform each day, there is no doubt that the content we see influences the way we think. So what happens when Facebook chooses to use our data for the wrong purposes, showing us content meant to make us feel a certain way or believe a certain narrative or ideology? Facebook can, in theory, manipulate people into falling in line with whatever agenda it finds most beneficial, and that agenda, most likely, is one in which Facebook itself is not seen as harmful to society. How do we know it isn’t doing this already? Its algorithm is completely opaque: nobody but the engineers who built it knows what data determines what shows up in your News Feed.
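To see how simple, and how invisible, such manipulation could be, consider a toy re-ranking function. This is invented Python with no relation to Facebook’s actual code; the point is only that a single hidden multiplier on emotionally charged posts is enough to change what a user sees.

```python
def reweight_feed(posts: list[dict], emotion_boost: float = 1.5) -> list[dict]:
    """Toy illustration of opaque feed manipulation.

    'base_score' and 'is_emotional' are invented fields, not real Facebook
    signals. Boosting emotional posts by one hidden factor reorders the feed.
    """
    return sorted(
        posts,
        key=lambda p: p["base_score"] * (emotion_boost if p["is_emotional"] else 1.0),
        reverse=True,
    )

feed = [
    {"id": "calm-update", "base_score": 1.0, "is_emotional": False},
    {"id": "outrage-bait", "base_score": 0.8, "is_emotional": True},
]

# With the boost, the emotionally charged post jumps to the top of the feed.
print([p["id"] for p in reweight_feed(feed)])  # ['outrage-bait', 'calm-update']
```

Nothing in the user-facing product would reveal that such a boost exists, which is exactly the problem with an opaque ranking algorithm.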

It’s happened before. In June 2014, an article published by Forbes described a “massive psychological experiment” Facebook performed on its users to test how they react to emotional content in their feeds. The results showed that when users were exposed to more emotional content, they were more likely to create posts reflecting those same emotions (a phenomenon called emotional contagion). Case in point. Whether or not Facebook has done this since, or plans to in the future, it has the power to manipulate the way its users think and feel, and this, I contend, is the most verifiable reason that Facebook is a malicious actor in today’s society and that everyone should reconsider using it.
