Social media platforms -- Facebook, YouTube and Twitter -- have agreed with marketers and agencies to adopt a common set of definitions for hate speech and other harmful content.
Definitions of harmful content vary by platform, making it hard for brands to make informed decisions on where their ads are placed.
The changes follow 15 months of talks through the Global Alliance for Responsible Media (GARM) between major advertisers, agencies and key global platforms.
GARM is a cross-industry initiative founded and led by the World Federation of Advertisers (WFA) and supported by other trade bodies.
Key areas for action designed to boost consumer and advertiser safety:
- Adoption of GARM common definitions for harmful content;
- Development of GARM reporting standards on harmful content;
- Commitment to have independent oversight on operations, integrations and reporting;
- Commitment to develop and deploy tools to better manage advertising adjacency.
“The issue of harmful content online has become one of the challenges of our generation,” says Stephan Loerke, WFA CEO.
“As funders of the online ecosystem, advertisers have a critical role to play in driving positive change and we are pleased to have reached agreement with the platforms on an action plan and timeline in order to make the necessary improvements.
“A safer social media environment will provide huge benefits not just for advertisers and society but also for the platforms themselves.”
WFA believes the standards should apply to all media, not just the digital platforms, given the increased polarisation of content regardless of channel.
Today, each platform has its own methodologies to measure the occurrence of harmful content. There is a need to harmonise those methodologies and to focus on metrics that are truly meaningful from a brand and a societal perspective, namely the number of incidents and prevalence of harmful content per platform.
Between September and November work will continue to harmonize metrics and reporting formats, with the system to launch in the second half of 2021.
The goal is to have all major platforms fully audited or in the process of auditing by year end.
Advertisers need visibility and control to ensure their advertising does not appear adjacent to harmful or unsuitable content, and to be able to take corrective action quickly where necessary.
Platforms that have not implemented an adjacency solution will provide a development roadmap in Q4 2020. Platforms will provide a solution through their own systems, via third party providers or a combination thereof.
As well as Facebook, YouTube, and Twitter, there are firm commitments from TikTok, Pinterest and Snap to provide development plans for similar controls by year end.
Raja Rajamannar, CMO at Mastercard and WFA President, says he’s delighted that GARM has made such significant progress in such a short period of time.
“I know these discussions have not been easy but these solutions when implemented, will offer more choice and control for advertisers and their agencies by supporting content that aligns with their values,” he says.
Luis Di Como, executive vice president, Global Media, Unilever: “We are encouraged by the acceleration and focus to come together as an industry and agree on these four key areas of action. The issues within the online ecosystem are complicated, and whilst change doesn’t happen overnight, today marks an important step in the right direction.”
Jacqui Stephenson, global responsible marketing officer, Mars: “This is not a declaration of victory as there is much work to be done and we rely on all of our platform partners to follow through on their commitments with the pace and urgency these issues demand. Nevertheless, this is an important step in making social media a safer place for society and it’s important to recognise the progress and build further momentum as a result.”