Deepfakes: They’re About To Change How We See The World

November 19, 2020


Deepfakes have been around for only two and a half years, yet it is already clear that this newcomer to the (mis)information age is likely to bring major disruption.


Deepfakes are a type of digital media in which someone’s face, voice or likeness is matched with a new image, video or recording to create an entirely new piece of content. When Artificial Intelligence (AI), which can increasingly generate its own media, is added to the mix, the result is an entirely fabricated array of people, text, images and voices: completely synthesised, and completely convincing, content created from scratch. This is otherwise known as ‘synthetic’ media.


During a recent talk recorded for the UK’s Institute of Practitioners in Advertising, Nina Schick, author of the book “Deepfakes”, described synthetic media as “one of the most exciting and revolutionary things in the future of content production”. She believes it will democratise media production by making content cheaper, easier and faster to produce: “By the end of the decade, any YouTuber will be able to create the same special effects currently only accessible to a Hollywood studio.” As evidence, she cites the ‘homemade’ improvements an amateur YouTuber made to Martin Scorsese’s film “The Irishman” using nothing more than a freely available tool found on the internet.


Schick has also advised global leaders on the impact of deepfakes, including US President-elect Joe Biden and former NATO Secretary General Anders Fogh Rasmussen. She has a particular interest in geopolitics and the role deepfakes will play in information warfare, and she warns that the rise of synthetic media won’t all be good news. Indeed, it doesn’t require much imagination to see how easily the technology could be weaponised to undermine society, business and politics.


She believes that by 2030 there is a good chance that 90% of all video content online will be synthetic.


Synthetic media may pose a threat to the financial system


The Carnegie Endowment, a global think tank and advisory group, says that synthetic media has already triggered widespread concern about the new potential it provides for spreading political disinformation. It believes similar technology could also be used to facilitate financial harm, which is no small threat: 2019 saw the first publicly documented cases of deepfake-led corporate fraud and extortion. Meanwhile, the FT reported this year that HSBC has started working more closely with fintechs to counter deepfake fraud.


Carnegie has identified a list of the top 10 financial risks that synthetic media may soon pose to individuals, companies, markets and regulatory structures (see the full list here). They narrow the most imminent threats down to these three:


  1. Deepfake voice phishing (or ‘vishing’) – cloning someone’s voice, such as a financial advisor’s or stockbroker’s, which could then be used to spread false information or solicit fraudulent transactions.
  2. Fabricated private remarks – video or audio clips that falsely depict a business leader speaking on a topic “behind the scenes”. For example, a leaked voice note of a CEO discussing the imminent collapse of the business could have dire implications for the share price.
  3. Synthetic social (media) botnets – these use AI to create fake social media accounts complete with images of people who don’t exist, with entirely fabricated profiles and content. The risk is that a network of bots could help fuel a bank run or a stock price rally.


How can we prepare for the rise of synthetic media?


It’s clear that the role of corporate communications is changing, and every professional within the communications space must keep a fastidious eye out for content that could pose a reputational risk – whether it comes from a news article or a Twitter post.


It goes without saying that any executive caught up in a misinformation scandal will fare far better if they have a solid, trustworthy reputation to begin with. Carnegie believes these scandals are more likely to come at a time when a company is already under fire, and therefore more vulnerable to negative sentiment. Effective digital media monitoring and reputational risk assessment are therefore essential to any business.


To counteract the impact of misinformation, corporate leadership at the highest level will first need to accept that digital communications is not just about marketing; it is about reputation. Second, leaders must curate and develop an authentic, recognisable voice for themselves and their business, one that is honest and trustworthy.


Invest in content that builds trust

As our colleagues in Aprio Credence will attest, trust ‘capital’ goes a long way in any crisis. The earlier you start building trust with your stakeholders and investing in how your audiences perceive you, the more people will take your side if you ever have to defend yourself against misinformation in the form of synthetic media.


Develop a clear and consistent corporate voice


Invest time and effort in understanding how your organisation thinks, and start finding ways to express this externally. Companies must be crystal clear in their thinking and consistent in their tone. The more established these two features are, the harder it will be to create convincing fake content.


Convert business risks into thought leadership


Using content to explore difficult topics not only invites important stakeholders into your thinking as a business, it demonstrates transparency. If audiences feel that you are open and frank about the issues you face, they are more likely to take your side against a spurious accusation.


What can regulators and industry do today?


Companies, industries and regulators will all need to become experts in the field of digital transformation, regardless of the type of business. Schick describes two categories that will need to be combined to create new solutions as they arise:


  • Technical solutions – as synthetic media becomes more prevalent, it will become impossible for us to tell unaided what is real and what is fake, so detection tools will be needed to establish authenticity. This is becoming harder as AI gets better at fooling, well, other AI. Solutions that track the provenance of content will also need to be developed – for example, a digital watermark that follows a piece of content from its inception; and
  • Human solutions – policy makers will need to establish metrics and monitoring for responsible use. But we are also going to have to take a networked approach: industry, policy makers and journalists will need to work together to identify risks and debunk false information, accurately and quickly.
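To make the provenance idea in the first bullet concrete, here is a minimal sketch of a tamper-evident content log. It is an illustration only, written in Python with invented function names, and is not a real watermarking scheme or industry standard (production systems embed the mark in the media itself and use digital signatures rather than bare hashes):

```python
import hashlib
import json

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 fingerprint of a piece of media."""
    return hashlib.sha256(content).hexdigest()

def append_provenance(log: list, content: bytes, actor: str, action: str) -> list:
    """Record one step in a piece of content's life.

    Each entry hashes both the content and the previous entry, so the log
    forms a chain: altering any earlier step breaks every later link.
    """
    entry = {
        "actor": actor,                        # who handled the content
        "action": action,                      # what they did to it
        "content_hash": fingerprint(content),  # what the content looked like
        "prev_hash": log[-1]["entry_hash"] if log else "genesis",
    }
    serialised = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(serialised).hexdigest()
    return log + [entry]

def verify_chain(log: list) -> bool:
    """Check that no entry has been altered, removed or reordered."""
    prev = "genesis"
    for entry in log:
        if entry["prev_hash"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        serialised = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(serialised).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

A newsroom, for instance, could append an entry each time footage is captured, edited or published; a single altered frame would change the content hash and cause verification to fail. Real provenance initiatives along these lines exist, but the code above is purely a conceptual sketch.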


Want to know more?


Contact the Aprio Digital team to learn more about synthetic media, our offering (and many other fascinating topics). You can also follow Thomas McLachlan, Director, Aprio Digital, on LinkedIn, or visit Aprio Digital at www.apriodigital.co.za.


We look forward to hearing from you!
