Dodgy content spurs YouTube’s ‘new approach to advertising’

By Rosie Baker | 6 December 2017

Google is taking new steps to protect advertisers and creators from inappropriate content on the platform in the wake of yet more brand safety concerns.

Late last month, YouTube was found to be hosting thousands of videos in which children were being targeted and exploited by paedophiles.

It says it will apply stricter criteria to determine which channels can host advertising and will increase manual curation of content by ramping up its team of ad reviewers.

To give advertisers better protection, it is “carefully” considering which channels should carry advertising.

YouTube’s global CEO Susan Wojcicki says: “We want advertisers to have peace of mind that their ads are running alongside content that reflects their brand’s values. Equally, we want to give creators confidence that their revenue won’t be hurt by the actions of bad actors.

“We believe this requires a new approach to advertising on YouTube, carefully considering which channels and videos are eligible for advertising. We are planning to apply stricter criteria, conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads are only running where they should.”

YouTube knows it needs advertiser dollars to fund its content ecosystem and creators to make the content, since it produces none of its own, so reassuring both parties is key.

She says: “It’s important we get this right for both advertisers and creators, and over the next few weeks, we’ll be speaking with both to hone this approach.

“We are taking these actions because it’s the right thing to do. Creators make incredible content that builds global fan bases. Fans come to YouTube to watch, share, and engage with this content. Advertisers, who want to reach those people, fund this creator economy. Each of these groups is essential to YouTube’s creative ecosystem—none can thrive on YouTube without the other—and all three deserve our best efforts.”

“As challenges to our platform evolve and change, our enforcement methods must and will evolve to respond to them. But no matter what challenges emerge, our commitment to combat them will be sustained and unwavering. We will take the steps necessary to protect our community and ensure that YouTube continues to be a place where creators, advertisers, and viewers can thrive.”

The latest issue follows the video platform’s first brand safety scandal earlier this year, which saw several leading brands boycott YouTube after ads were found running alongside extremist and violent content.

Speaking at the AdNews Media + Marketing Summit in May, Google Australia MD Jason Pellegrino admitted that YouTube had “let a lot of people down in the industry and we know that we need to do better”.

Humans are essential

YouTube is also increasing to 10,000 the number of people it enlists to manually review content.

The platform has helped businesses grow and enriched people’s lives, she says, but she concedes it is also being used by unsavoury elements and has a “more troubling” side.

“I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm,” she says.

“Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube.”

The platform has taken steps in the last year to protect users and advertisers from violent and extremist content, including new artificial intelligence systems that flag content and help human moderators identify material that contravenes its policies. It has also cracked down on comments and shut down problematic accounts.

It has made progress, but it still faces major challenges and reputational issues, and the latest incident only underlines the scale of the problem digital platforms face.

More than two million videos have been reviewed since June, and 150,000 videos removed for violent extremism. Of those, 98% were flagged by machine learning algorithms. Nearly 70% of violent extremist content is taken down within eight hours of upload, and nearly half within two hours.

Google says human reviewers are “essential” for identifying and removing content and for training its machine learning tools, but the AI has helped remove five times as many videos as were previously handled manually. It would have taken 180,000 people working 40 hours a week to assess the same number of videos.

It is developing additional tools and expanding the processes built to tackle violent and extremist content to cover other kinds of problematic content on the platform.

Google will also publish a regular report outlining the content that is flagged and the actions taken to remove or moderate it.

Have something to say on this? Share your views in the comments section below. Or if you have a news story or tip-off, drop me a line at rosiebaker@yaffa.com.au


