Why Netflix’s ‘The Great Hack’ should make all of us in marketing uncomfortable

'The Great Hack' on Netflix is a grim reminder of how marketers are not at a remove from Cambridge Analytica and its exploitative use of data, but part of the same problem, says Timber Wolf's Miguel Bernas

Netflix’s The Great Hack is easily the most discussed movie among my industry peers at the moment, at least based on the lively conversations on my LinkedIn feed.

The film turns the spotlight on the now infamous ‘data research’ company Cambridge Analytica and how it allowed the weaponisation of social media – Facebook in particular – to spread misinformation. It is told mainly from the perspective of insider whistleblowers.

The reaction from viewers is usually a mix of alarm and outrage.

As shocking as the revelations about Cambridge Analytica and the role it played in the Brexit referendum and the 2016 US Presidential elections are, they could be just the tip of the iceberg.

According to an article in Business Insider: “Facebook dealt with a slew of major breaches and incidents that affected more than 100 million users” in 2018 alone.

The uncomfortable truth behind ‘The Great Hack’ is that if you are part of the media and marketing industry (and if you’re a Mumbrella reader, I assume that you are) you are part of the problem. 

We are all contributors to the digital media ecosystem that enables bad actors like Cambridge Analytica and their clients to weaponise social media. That is because the digital infrastructure used to warp political opinions and the one we use every day to sell sneakers, hotel nights and consumer electronics are one and the same.

The amorality of algorithms

I first woke up to this realisation when I watched a 2017 TED talk by Zeynep Tufekci, the Turkish writer, academic and techno-sociologist renowned for her research on the social implications of emerging technologies. She continues to write on this subject for ‘The New York Times’.

Today, many media placement decisions – matching creative with the right audience in the right channel at the right time – are made not by humans but by machine learning algorithms.

While they have certainly contributed to the cost efficiency and effectiveness of digital advertising, the truth is that little is known about how exactly these algorithms make decisions.

The problem is that they have no moral compass. They do not apply ethical parameters of right and wrong or human decency. They simply place ads in a manner that ensures the maximum number of clicks. 

To quote one of Tufekci’s examples, what if the algorithms discover that the best way to sell air tickets to Las Vegas is to target people who are bipolar and about to enter the manic phase? 

“Such people tend to become over-spenders and compulsive gamblers,” she describes in her presentation. “They could do this, and you’d have no clue that’s what they were picking up on. 

“I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled. He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked very well. 

“And he had no idea how it worked.”

The weaponisation continues

There is every indication that ad networks and personal data continue to be exploited. 

Consider that the Trump reelection campaign bought over 2,000 Facebook ads making frequent reference to the term ‘invasion’, according to a recent article in The New York Times.

Critics link his language on immigration – particularly his use of the word ‘invasion’ – to inspiring violence, especially after the mass shooting in El Paso, Texas on August 3. 

For me, this social media weaponisation hits even closer to home. 

Before these tactics were deployed in the US Presidential elections and the Brexit referendum in the UK, they were present in developing markets with high social media usage like the Philippines during its own Presidential elections in 2016. 

Those tactics continue today, with many pointing the finger at the administration currently in power.

I attended a talk at Oxford University last year where Maria Ressa – the Filipina journalist, publisher of independent news website ‘Rappler’ and Time 2018 Person of the Year – frequently referred to the Philippines as the “petri dish” for this brand of media and audience data manipulation.

According to an interview in the Washington Post, in mid-2016 Ressa “identified 26 accounts that reached more than three million Facebook users.”

That October, she asked Facebook to remove them, she said, arguing it would be too dangerous for her news outlet to publish the findings first. She described meeting with more than 50 employees at Facebook headquarters, including chief executive Mark Zuckerberg himself, to urge them to stop the systematic abuse taking place on Facebook’s pages. 

The social network’s inaction prompted Rappler to publish a series of articles exposing the clandestine network of fake accounts designed to warp public opinion.  

The Post’s article reports: “Ressa’s discoveries showed Facebook’s failure to enforce its own policies against fake accounts and calls for violence.

“Rappler’s series described how ‘sock puppets,’ fake accounts controlled by a network of Duterte supporters, engaged real people online and spread lies, misleading photos and false incidents of rampant crime to drum up support for Duterte’s hard-line anti-drug policies.

“The accounts called for violence against legislators, civic activists and journalists who spoke up against Duterte’s tactics. Ressa was among them.” 

There are very real consequences. A report by Human Rights Watch reveals that in 2018 there were 22,983 deaths from extrajudicial killings in the Philippines.

With great power comes great responsibility 

So far, attempts by regulators to pressure Facebook to introduce more effective safeguards seem to have had little effect. Last July, the Federal Trade Commission fined Facebook US$5 billion for multiple data privacy violations. This decision actually drove Facebook’s stock price up.  

So what can we in the media and advertising industry do? 

First, we should recognise technology companies like Facebook and Google for what they are: media companies. It defies all logic to claim otherwise when your main source of revenue is advertising.

And as media companies, should there not be greater accountability for what is published on your platform?

If a newspaper published an ad that promoted racial or religious intolerance, would other advertisers not have something to say about it? 

Would this not prompt brands to withdraw future ads in protest, vowing never to advertise again until the publication denounced the offensive message?

Why do digital media companies somehow always get a pass? 

We need a greater consciousness of how data sets are used, where they come from and whether they were truly obtained with the audience’s consent. 

Ad campaigns are often purchased with little scrutiny. How much of the political misinformation was enabled by “lookalike” targeting? And do advertisers not contribute to these large data sets each time they run lookalike targeting campaigns of their own?

Finally, I would like to see more advertisers start building their own media audiences rather than continuously rely on renting audiences from third parties.

After all, consumers can’t complain about a breach of privacy if they voluntarily subscribed to your newsletter and willingly continue to consume the content you produce.

We as an industry can do much more than simply wring our hands and be shocked each time a documentary like ‘The Great Hack’ comes out.

Miguel Bernas is the digital media and marketing consultant of Timber Wolf Media and is based in Singapore.

