Brands, Terrorists, Google: an unholy alliance?

Curated by Ankit Sehgal, Head of Performance.

Bulletin #5



You may have been reading about a global news story that has erupted after The Times newspaper investigated what video content Google allowed ads to be placed next to.

The Times investigation found that online video and banner ads for other news organisations and the UK government, amongst others, were being placed next to objectionable content from uploaders such as extreme terrorist groups and rape apologists. As a by-product, the groups and bloggers uploading the content are paid by Google; this is the normal revenue model for videos on YouTube that carry advertising. Terrorists being 'funded' by the UK government naturally makes for great headlines.

Hype aside, this is clearly an issue for advertisers and media agencies. None of us want ads shown next to this type of content. As a result of The Times exposé, many high-profile advertisers ceased all Google advertising in the UK and the US, saying they would not return until better safeguards were available.

How did this happen? What can we do to mitigate the risk?

The issue highlights the level of control Google has over what content creators post, and how videos are classified. Whilst this isn't just a Google problem (any platform has the same margin of error), YouTube (owned by Google) is the largest video platform and was the first to offer this revenue model to content creators.

Each of the hundreds of thousands of videos uploaded every day needs a 'classification'. This is how videos are typified, both to feed the search algorithm and so that advertisers can purchase ads against them. Failures in the controls around classification have shown the system has serious flaws.

An Ikon Point of View - Ankit Sehgal, Head of Performance.

Losing large advertisers is a very big business problem for Google, and they have responded immediately with expanded safeguards. These are the first line of defence to ensure brands aren't shown next to objectionable material.

Like all advertisers, Ikon uses classifications to decide whether or not to advertise on a page. This classification is based on the overall theme of the site rather than the specific content of video scripts. For video specifically, there is very little traceable captioning, something I believe Google needs to rectify.
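
To make that gap concrete, here is a minimal sketch of the limitation. Everything in it is hypothetical (the site names, labels and function are mine, not Google's or Ikon's actual systems); it simply shows why a site-theme classification can wave an objectionable video through:

```python
# Hypothetical illustration only: the classification is keyed to a site's
# overall theme, not to the words spoken inside an individual video.
SITE_CLASSIFICATION = {"video-platform.example": "entertainment"}
BLOCKED_THEMES = {"adult", "violence"}

def brand_safe(site: str) -> bool:
    # The buying decision only ever sees the site-level label...
    return SITE_CLASSIFICATION.get(site) not in BLOCKED_THEMES

# ...so an extremist video hosted on an "entertainment" site still passes,
# because no transcript or caption data is ever consulted.
print(brand_safe("video-platform.example"))  # True
```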

Whilst the exposé broke last week, the industry has been aware of the risks for some time. Because there are few industry standards, Ikon has developed its own safety process, built up over years and used day-to-day when placing our clients' advertising. Much of this is done via manual mechanics, and it goes well beyond anything the industry currently offers.

Whenever we set up a new campaign (Display or Video), we work through the exclusion steps below (a simplified code sketch follows the list):

  1. Website exclusions: we've built a handpicked list of 52,000 sites and 90 YouTube channels deemed inappropriate, and we add more sites weekly
  2. Topic exclusions: we do not show ads within any of the following topics: crime, police, emergency, death, tragedy, military, international conflict, juvenile, gross, bizarre, profanity, rough language or sexuality.
  3. Keyword contextual exclusions: we have a negative keyword list of over 1,800 keywords where ads will not show (e.g. adult / extremist / slang)
  4. "Not yet labelled content" or on YouTube embedded videos: we avoid as it could potentially meet one of the exclusions above.

Google are feeling the pressure right now to tighten up their processes. We're hoping the outcome will be better industry standards, which we can add to our own stringent process.
