Brand safety like the spam email problem that ‘snuck up on us’: Google’s AI research director

Google’s AI research director has compared the problems plaguing online platforms, including brand safety concerns and the proliferation of fake news, to the spam email phenomenon that caught people unprepared, but he is confident emerging technologies will ‘beat it back down’.

Peter Norvig, Google Research Director

Peter Norvig, who established Google’s research division in 2000 after being recruited to the company from the NASA Ames Research Center, described how the company is addressing advertisers’ concerns about brand safety on YouTube.

“We’ve broken up videos into pieces and then look for pieces that match each other. We’ve used this technology before for copyright issues, where we had to detect if this was owned by somebody else, and if it’s an exact copy then that’s really easy.

“We’re used to matching against something we know about even when the match isn’t that close. So now we’re trying to do that not against a known source but against a more abstract concept.”
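In broad terms, the segment-matching Norvig describes can be thought of as comparing compact ‘fingerprints’ of video pieces against a reference set. The sketch below is illustrative only: the embed_segment stand-in, the cosine-similarity comparison and the thresholds are assumptions made for the example, not Google’s actual system.

```python
# Illustrative sketch only, not Google's system. Assumes some embedding model
# (here a crude stand-in) that maps a video segment to a fixed-length vector.
import numpy as np

def embed_segment(segment_frames: np.ndarray) -> np.ndarray:
    """Stand-in for a learned audio/visual embedding model (hypothetical)."""
    vec = segment_frames.mean(axis=0)           # crude summary of the frames
    return vec / (np.linalg.norm(vec) + 1e-9)   # unit length, so dot product = cosine similarity

def flag_similar(segments, reference_embeddings, threshold=0.85):
    """Flag segments whose closest reference exceeds a similarity threshold.

    Exact copies score close to 1.0 (the 'easy' copyright case); matching a
    looser, more abstract concept means lowering the threshold and accepting
    more errors, which is why humans stay in the loop.
    """
    flagged = []
    for i, seg in enumerate(segments):
        e = embed_segment(seg)
        best = max(float(e @ r) for r in reference_embeddings)
        if best >= threshold:
            flagged.append((i, best))
    return flagged

# Toy usage: three random "segments" checked against two reference fingerprints.
rng = np.random.default_rng(0)
segments = [rng.random((30, 128)) for _ in range(3)]
references = [embed_segment(rng.random((30, 128))) for _ in range(2)]
print(flag_similar(segments, references, threshold=0.5))
```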

Brand safety has become a major concern for Google after a number of prominent advertisers, including Kia and Holden in Australia, withdrew their YouTube advertising over fears their ads would appear on extremist or otherwise undesirable channels.

The advertiser boycott is clearly hurting the online industry, with GroupM overnight revising down its growth estimate for UK ‘pure-play’ internet advertising from 15% to 11% on the back of brand safety concerns.

Norvig explained how Google is using both technology and manual processes to identify content that may worry advertisers.

“We have lots of examples we’ve pulled out by hand, and now we’re asking our system to find other things that look like this; they don’t have to be an exact match, but they’re similar in some way. We can do that to some degree; we know we can’t do it 100%, so we still have a lot of humans in the loop making the decisions.

“We said we’re going to up both the AI that does the initial matching and the number of humans we have that are involved in verifying that.”
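The ‘humans in the loop’ arrangement Norvig outlines amounts to a triage rule: let the model decide the clear-cut cases and route the uncertain middle band to human reviewers. The following is a minimal, hypothetical sketch of that idea; the function name, score bands and thresholds are assumptions, not a description of Google’s pipeline.

```python
# Hypothetical triage logic: an automated brand-safety score handles the
# confident cases, and everything uncertain is queued for human review.
def triage(model_score: float, allow_below: float = 0.2, block_above: float = 0.95) -> str:
    """Return 'allow', 'block' or 'human_review' for a score in [0, 1]."""
    if model_score < allow_below:
        return "allow"         # confidently safe: ads can run
    if model_score > block_above:
        return "block"         # confidently unsafe: pull ads automatically
    return "human_review"      # the uncertain middle goes to human raters

# Widening the middle band means more human review; narrowing it means
# trusting the model more.
for video_id, score in [("a1", 0.05), ("b2", 0.60), ("c3", 0.99)]:
    print(video_id, triage(score))
```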

The company is applying similar techniques to battle ‘fake news’, with Norvig suggesting sites that appear to distribute misleading stories may be penalised in the Google search index. However, he sees social media platforms as having more challenges in combatting this problem.

“It’s hard to make these calls; companies are looking at a combination of AI and humans to make those judgements. It’s easier for us than Facebook because we can have our news site and say ‘These sources look pretty sketchy, but we’re not sure they are completely illegal, so we’re not going to kick them out of our index’, but when you search they’ll be a couple of pages down, and that way they don’t get promoted very much, but we haven’t gone to the step of actually censoring them.

“If we can do that with a little bit of AI and a little bit of human labelling we can do a good job. But Facebook has a much harder job where if I forward something to you, it’s a much bigger step for them to actually censor that forwarding and say ‘I’m not going to deliver it, even though you told me to’. That’s a harder problem.”
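The approach Norvig describes for search, demoting rather than removing, can be illustrated as a ranking penalty: a result from a source judged sketchy keeps its place in the index but has its score discounted, so it surfaces ‘a couple of pages down’ rather than on page one. The sketch below is a hypothetical illustration; the Result fields, the sketchiness score and the demotion weight are assumptions, not how Google’s ranker actually works.

```python
# Hypothetical demote-don't-delete ranking: sketchy sources stay in the index
# but have their ranking score discounted.
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float    # base relevance score from the ranker
    sketchiness: float  # 0.0 (trusted) to 1.0 (very sketchy), from model + human labels

def rank(results, demotion_weight=0.7):
    """Sort by relevance discounted by a sketchiness penalty; nothing is removed."""
    return sorted(results,
                  key=lambda r: r.relevance * (1.0 - demotion_weight * r.sketchiness),
                  reverse=True)

results = [
    Result("https://example.com/reliable-story", relevance=0.80, sketchiness=0.0),
    Result("https://example.net/sketchy-story", relevance=0.85, sketchiness=0.9),
]
for r in rank(results):
    print(r.url)   # the sketchy source still appears, just lower down
```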

Combatting extremist videos and sites may however be beyond Google’s control, Norvig suggested.

“We can find these recruitment videos, so that’s a step we can take. I think it’s hard to find actual actions because there are so few of them. We’re good at dealing with things where there are millions of examples and now we can detect patterns; if there’s a couple of terrorist attacks a year, that doesn’t give us a lot to go off, especially if each one is different.”

“I think we can give tools to the police to help them with investigations, but I don’t think we’re going to solve the problem on our own.”

Despite the problems facing online platforms, Norvig is optimistic that technology will eventually find the answers, citing how the industry combatted the spam deluge that overwhelmed many email inboxes.

“I think we’ll be able to deal with it, we’ve had similar kinds of things in the past. You may remember when your email inbox was filled with spam – that was another problem that kind of snuck up on us.

“We built the technology to beat it back down and now it’s not such a problem. I think we can do that again.”
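For context, one of the techniques that classically helped ‘beat back’ email spam was statistical text classification, most famously naive Bayes filtering. The sketch below shows that basic idea on a toy dataset; it is a simplified illustration of the general technique, not a description of Gmail’s or any production filter.

```python
# Minimal naive Bayes spam filter on a toy dataset (illustrative only).
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs with label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher log-probability (add-one smoothing)."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(totals[label] / sum(totals.values()))     # prior
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)   # likelihood
        scores[label] = score
    return max(scores, key=scores.get)

examples = [("win money now", "spam"), ("cheap pills win big", "spam"),
            ("meeting at noon", "ham"), ("lunch tomorrow at noon", "ham")]
counts, totals = train(examples)
print(classify("win cheap money", counts, totals))   # expected: spam
print(classify("meeting tomorrow", counts, totals))  # expected: ham
```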

Continuing on the theme of artificial intelligence, Norvig, who was speaking at the University of New South Wales Engineering School, said adland’s adoption of the technology may be more about branding and a desire to be seen as ‘modern’ than a real change in business direction.

“I don’t know the details,” Norvig told Mumbrella when asked about Publicis’ plans for its AI platform. “It could be just a buzzword that they are talking about and they are trying to say ‘Aren’t we just so modern?’ But certainly you want to give your employees the tools to do their job and understand the patterns.

“Whether you’d call that AI or you’d say ‘I’ve got a central database’, some of it is marketing and branding but you want to empower people to understand their jobs,” he continued. “Sometimes it’s real AI and sometimes it’s simpler technologies.”

While Norvig is somewhat sceptical of advertising agencies using AI, he advises that all companies need someone in senior management keeping track of developments and looking for potential market openings: “every company should have somebody who has a basic understanding that can follow what is going on in the field and look for where the opportunities are.”
