Deepfaking and the ethics of synthetic media

By Paul Hamilton, Head of Technology, The Brand Agency Perth | 27 November 2019
 

By now most of us have interacted with a voice-enabled service or assistant, though the conversation has probably felt a little like communicating with a relatively intelligent Dalek. These primitive exchanges herald the start of a much larger relationship, forged by rapid advancements in voice technology and other forms of synthetic media: AI-generated audio and video.

Lifelike synthetic media has the power to bring scale and realism to brand interactions with consumers. But that power raises ethical questions about usage, security and potential abuse. And how do consumers and businesses even know what is real or fake once the synthetic media they encounter is ultra-realistic?

This isn't the first wave of consumer uncertainty we've seen: the rise of Photoshop spawned an entirely new adjective to describe manipulated imagery, and special effects in films continually drive visual storytelling without being distinguishable from what was shot for real. But we enter those cinematic worlds with a willing suspension of disbelief. When our daily lives could be subjected to similar synthetic manipulation, are we ready to question what is real and what could be fabricated?

Unpacking the current state of synthetic voice: several service providers globally can now replicate the unique characteristics of an individual voice to create on-demand, ultra-realistic vocal identities using artificial intelligence. The potential applications for these AI voices are vast, and something we're actively exploring with our clients at the moment. However, using this technology requires engagement with organisations that understand the ethical and security implications of these products.

Protecting the data and methods required to create these AI voices is critical, as is embedding watermarking and countermeasure tools within the content itself. These will become essential for detecting and preventing fraud in call centres and other areas that rely on voice as a method of identification.

Taking the misuse of synthetic media to the extreme, deepfaking has become part of the vernacular of the media landscape. Deepfakes are highly realistic fake videos and audio circulating across social and web channels; they set out to mislead or impersonate, and can have a devastating impact on the individuals depicted. Recent examples include a deepfaked video of Italy’s former prime minister Matteo Renzi, in which his apparent derogatory behaviour towards the current prime minister and his deputy incited a public backlash on social media. Another was the questionable appearance of Ali Bongo Ondimba, president of the African nation of Gabon, which may have contributed to an attempted coup in the country. Politics has long drawn on propaganda as a mechanism of manipulation, but it's easy to envision a brand or business being equally compromised by this type of content.

A recent report by Deeptrace, a global leader in researching the evolving capabilities of deepfake threats, suggests the phenomenon is growing rapidly online, with the number of deepfake videos almost doubling over the last seven months to 14,678. The increase is “supported by the growing commodification of tools and services that lower the barrier for non-experts to create deepfakes”, says Deeptrace's chief scientist, Giorgio Patrini. For brands inextricably linked to public personalities, the risk of misrepresentation through manipulation could therefore be catastrophic for consumer trust.

Deepfaking and misuse aside, AI voices have a great deal to offer society. One example is their use in assistive voice technology, helping people who have lost the ability to speak.

For brands, AI voice provides an unprecedented opportunity for better targeting, agility and economies of scale over traditional voice recordings. Voice assistants are going to revolutionise the way we interact with brands and services via our devices, cars and IoT products. So there is a great deal to be excited about in this technological evolution.

As consumers, we want to interact with brands that we trust and that are authentic rather than deceptive. AI voices will become a big part of that landscape, so we need to start recognising that realism isn't always reality. Ethical usage must be integral to this technology, and by starting this conversation we all have a voice to contribute.
