Deepfake videos will soon have the potential to destroy brands
The technology behind deepfake videos may well have some marketing applications over time, but the threat to brands is much greater than the opportunity – argues Bryant Chan, a content marketing specialist at Singapore Press Holdings
“Don’t believe everything you read on the Internet,” said Abraham Lincoln.
The above statement is demonstrably false (and if you’re not sure why, read it again, but slower). But not everything can be quite so easily verified in the internet age, particularly when it comes to deepfake videos.
Simply put, deepfakes (a portmanteau of ‘deep learning’ and ‘fakes’) are videos where artificial intelligence is used to superimpose elements of one video onto another, most often used to put a celebrity’s or politician’s face on another person’s body.
Predictably, the technology was first used to create pornography involving celebrities – leading to blanket bans on sites such as Reddit, Gfycat and even Pornhub – but it's hard not to think of the possible marketing applications.
For one, as the technology progresses, it could open up a wealth of opportunities. You could walk past a storefront and find a projection of your face on a holographic body. It turns out that coat really does look great on you. Who would have thought it? What about giving normal people the ability to appear on a talk show in spite of never being in the studio?
As is to be expected of human nature though, deepfakes have found another niche in a more nefarious purpose. That is to put words in the mouths of key opinion leaders.
While the terms ‘fake news’ and ‘alternative facts’ may have come to prominence in recent years, manipulating the truth by doctoring media is nothing new. Joseph Stalin famously used photo manipulation to boost his public standing, portraying himself as Lenin’s close confidant.
But the existence of deepfakes sends us plummeting deeper into the post-truth world than ever before. Videos – previously the ultimate form of proof, an ironclad defence against slander – can now no longer be trusted.
After all, a single malicious rumour has the potential to run an entire brand into the ground. A video could do even more to erode public confidence.
In 2008, an anonymous netizen reported on CNN’s iReport page that Steve Jobs had suffered a major heart attack. Within an hour of the stock market opening, Apple’s stock had dropped 10 points – doing $4.8 billion worth of damage.
And 2017 saw the proliferation of the infamous ‘Dreamer Day’ tweets, in which users of 4chan (an imageboard website) spread rumours that Starbucks would give free frappuccinos to undocumented migrants across the United States. Again, these tweets were just 140 throwaway characters. How much more damage could a video – evidence of a talking human – do?
And these were Apple and Starbucks: multi-billion-dollar corporations, to which a million dollars is little more than a drop in the ocean. But what happens when a smaller company faces a targeted attack – a weaponised deepfake depicting a key director or stakeholder in a compromising situation?
In an article in The Verge, visual effects artist Benjamin Van Den Broeck estimated that a single person with a sufficiently powerful computer and enough footage could churn out a convincing deepfake in less than 24 hours. A dedicated team with a render farm (a network of computers simultaneously processing visual effects) could potentially do far more in far less time.
It would be a simple matter for a large enough player to eliminate competition in an industry, purely through the threat of deepfake blackmail. Corporations could entrench themselves within their own monopolies, effectively shutting out all possibilities for smaller disruptors to enter the market.
Deepfakes could also sound the death knell for celebrity endorsement. As deepfake hoaxes become increasingly common – and increasingly difficult to separate from reality – the value of key opinion leaders is likely to take a nosedive, maybe even spelling the end of the influencer era. The media will quickly have to adapt.
And what of the morality of deepfake use in the first place? Do we have the right to use people’s images without their explicit consent, even just to show them how they look in the latest Karl Lagerfeld collection? Without the existence of a regulatory framework to govern the use of deepfakes in the media, any commitments we make toward ethical marketing practices will be woefully ineffective.
Ultimately, no matter how strong our resolve to adhere to any code of conduct regarding deepfakes, history has shown that people will believe what they want to believe – whether it is that the earth is flat, that vaccines cause autism or that crystals have healing properties.
Perhaps the best solution is not just to pay lip service to ethical advertising, but also to educate ourselves and others to be more discerning about the content we consume. I mean, if President Lincoln said it, it’s got to be true, right? What a guy.
Bryant Chan is specialist editor for consumer tech and motoring at Sweet, the content marketing arm of Singapore Press Holdings
That explains everything. Surely Gillette didn’t really create a video attacking their entire customer base.
“But also to educate ourselves and others to be more discerning about the content we consume”
Pretty one-sided take on this, IMO. If you’re talking about deep learning being used to create indiscernible content, what basis would you then use to educate yourself? Nice take on the threat of this technology, but I feel your recommendation should also cover the technological solutions being worked on.
In future, the burden of validation should sit with creators instead of receivers. Dartmouth researchers are already working on technology and approaches that would allow a special recording app (instead of the stock camera) to record content whose verified signature is then registered on a blockchain, so that every piece of content generated already has a verification attached to it.
If the threat comes from advanced technology, the solution also lies in even more advanced technology – not in “educating ourselves better”. Educate ourselves with what? Deepfake content? Just my 2 cents.
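[Editor’s note: the capture-time verification idea the commenter describes can be sketched in a few lines of Python. This is a hypothetical illustration, not the Dartmouth researchers’ actual system: the recording app fingerprints footage at the moment of capture and registers a signed fingerprint in an append-only ledger (a blockchain in the proposal; a plain dictionary and an HMAC key stand in for the ledger and the device signature here).]

```python
import hashlib
import hmac

# Hypothetical per-device signing key held by the recording app.
DEVICE_KEY = b"recording-app-device-key"

# Stand-in for the append-only blockchain registry of capture-time records.
ledger = {}

def register_at_capture(video_bytes: bytes) -> str:
    """The recording app fingerprints and signs footage as it is created."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    signature = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    ledger[digest] = signature
    return digest

def verify(video_bytes: bytes) -> bool:
    """A receiver checks footage against the ledger before trusting it."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    signature = ledger.get(digest)
    if signature is None:
        return False  # no capture-time record: treat as unverified
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

original = b"raw camera footage"
register_at_capture(original)
print(verify(original))              # True: recorded at capture time
print(verify(b"deepfaked footage"))  # False: no verification attached
```

The design choice matches the commenter’s point: verification travels with the content from creation, so a deepfake – which by definition was never registered at capture time – simply fails the check, and no viewer-side media literacy is required.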