YouTube-gate, Brand Safety and the Human/Tech Solution
In the past few weeks, questions around ad placement against inappropriate material have dominated the news. Lest we forget, this is a recurring problem. But this time, it's serious for Google – with clients pausing spend on YouTube and its broader Display Network.
But if we believe that the story is just about tech (or the lack of it) keeping brands safe on YouTube, we’d be naive. In a lot of ways, it speaks to some of the main challenges in media and advertising right now: the increasing role of automation, of course – but also the dominance of the Google/Facebook duopoly – to the point where people forget the drawbacks of such ‘walled gardens’.
It’s interesting to note how quickly the discussion around what we will call here 'YouTube-gate' developed. At first, some suggested that with 400 hours of video uploaded every minute, there was just no way of getting round the issue. Then, within a matter of days, Google apologised, announcing improvements for advertisers, and even ‘artificial intelligence-powered filtering’ to follow.
Of course, there are parallels with the fake news scandal which blew up around the US Elections. In that case, Facebook moved quickly from denial to a practical, combined tech and human-based solution to the issue. And, lest we forget, Facebook had sacked its human editors not long before this, with the specific aim of avoiding accusations of political bias. Little did we know that algorithms can also be partisan.
While at Netric we only work with the top premium publishers across the four Nordic countries, and make tireless efforts to ensure ad quality is not an issue – we ourselves know from many years’ experience that the solution is human plus technology, not just one or the other.
Taking a broader view, as Videology’s Tim Gentry does here, the problem is that we are now an industry led by short-term goals. In other words, as long as the average CMO’s tenure is just 18 months, as long as brands switch agencies with increasing regularity, and as long as the focus on direct response metrics increases – ad quality problems will persist.
A continuing push from brands to lower agency costs and fees will no doubt feed the dark side of the business. And to be clear, that means people only too ready to work cheaply – and provide ‘blind’ (i.e. suspicious, maybe fake) networks, clicks, traffic and media.
Given what has come to light in the past couple of weeks, does simply working to a whitelist of known, trusted publisher brands look like such a bad idea? Even if it means missing out on some scale, isn’t that loss offset by guaranteed brand safety?
And what of Google and Facebook’s status as ‘walled gardens’ versus the relative openness of the rest of the real-time bidding landscape? To this point, some argue that the subtext of YouTube-gate is that brands are using it to pressure Google into knocking those walls down. But if those calls include allowing third party tracking and cookie syncing, Google is unlikely to agree: this would mean essentially giving up its crown jewels, especially when those boycotting it include major competitors such as Yahoo buyer Verizon.
Perhaps a more effective argument against walled gardens is that if they lowered their defences, and allowed third party verification tags to run, serving ads against offensive or illegal content could be blocked at source. And perhaps none of this mess would’ve occurred in the first place.
The Dark Side of Peppa Pig
A final point to consider is that – as our lives are increasingly led by technology, and algorithms – there will be other moments like YouTube-gate. But back to my earlier point – the solution is machine plus human intelligence, not one or the other in isolation.
Just as advertisers have apparently become hooked on YouTube, so have parents, for keeping the kids entertained. In the past few days, it emerged that fake, inappropriate versions of well-loved children’s programmes such as Peppa Pig have been appearing on the video service, shown in some cases alongside the real thing.
Should parents boycott YouTube? No doubt some will. Were they unaware of the fact that, whatever automated filters are put in place, it is ultimately not a curated, broadcast medium like TV, or even Netflix?
Whether it’s a cartoon pig, or advertising in general, this is an opportunity to check ourselves – become more knowledgeable around the issues that still affect technology – most of which, of course, are caused by human interference.
If anything good comes out of the whole Google/YouTube scandal, it will be a more level, equal ecosystem, which is not so dominated by two companies. Such a duopoly leaves us more at risk from flaws in their technology – and by extension, the people who know how to exploit them.