Solving content moderation requires something AI can’t do

Artificial intelligence might be a powerful tool, but Taboola CEO Adam Singolda argues it should be no substitute for the human side of content moderation.

While the term Artificial Intelligence (AI) appears in every investor's deck and on every company 'About' page on the web, it's actually quite rare to witness real AI – it's a very complicated thing to build. There is a world of difference between Machine Learning (ML), Deep Learning (DL) and Bullshit (BS).

That said, even true AI can go wrong. Recently, there was a trend on TikTok where people used the phrase "I had pasta tonight" not to talk about what they'd had for dinner, but as a code word to signal a call for help.

It wasn't TikTok's fault that the algorithm didn't catch the trend quickly enough to stop promoting these posts, because AI requires ample historical data in order to work. In computer science this is referred to as "garbage in, garbage out." This is why AI can beat humans at chess or Mahjong, but it would never have invented those games.

This was also something I discussed with Harvard’s Professor Steven Pinker last year when he referred to the “art of asking questions,” something that’s still reserved for humans. While AI will get better and better at computing things, it will never fall in love, or ask a question.

AI is really important, and as of now it plays a big part in content moderation online. It decides what's OK for us to see and what's not, what's harmful, what's hateful, what's fake, what gets boosted, what goes "viral" and what gets buried. But as we've seen from the big tech platforms over the past years, and from the examples above, it has fundamental limitations and, more than that, poses a fundamental question: is AI enough to moderate content, and to moderate ads? Or do we need humans?


AI is an incredible revolution, probably as big as the invention of electricity or the internet, and it will be a huge part of our lives, forever. But there are two important things to know about AI.

  1. AI only works when there's sufficient data to train the AI model. For example, AI failed to predict the spread and impact of COVID-19 because there was no existing data to model the scale of its actual impact effectively. Or when Face ID was first introduced as a way to unlock an iPhone, it didn't account for people's "morning face," so the iPhone didn't unlock. There was not enough data to suggest that people might look different when they wake up in the morning versus the rest of the day.
  2. Some mistakes are too big to bear. As an example, if Alexa made a mistake and suggested, based on my behaviour, that I buy coffee beans I don't really want, it's not a big deal. It's annoying, but not a big deal. If YouTube tagged a video as a "pet video" thinking there were dogs in it, but there weren't, it's not a big deal. It's annoying, but it's not a big deal. I could go on. But if we do decide to use AI in more serious matters, like whether or not we should take the beginning of a virus outbreak seriously, or when it comes to topics related to democracy, depression, racism or human rights, it raises a bigger question: is AI enough?


When it comes to serious matters, such as moderating content, we must recognise the limitations of humans as well. People get fatigued, whereas a computer has endless stamina whether it's reviewing 100 or 1,000 articles. People have biases, they have good days and bad days, and so forth. If we are to take a more human approach to moderating content, it's important that those content review teams are incredibly diverse and well supported.

When it finally became clear that "eating pasta" was not about eating pasta, but a code word for suicide, it was humans who caught it. When COVID-19 happened, it was humans who saw it spreading, not machines. And when an image recognition AI labelled Black people as gorillas, it was humans who picked it up, not AI.

Humans + Machines 

The future will over-index on machines that help us live better lives across countless daily interactions. But I'm convinced that in serious matters, there are human problems that require humans to solve them, with AI in a supporting role.

After the Facebook boycott, I suggested that the company hire 50,000 content reviewers to work side-by-side with AI, manually reviewing content. I think it is important for every tech platform with meaningful distribution to do the same, and to take responsibility for the content on its platform.

The limitations of human review don't outweigh the risks we're taking without it.

I vote for humans (with AI).


Adam Singolda is the CEO of Taboola.


Get the latest media and marketing industry news (and views) direct to your inbox.

Sign up to the free Mumbrella newsletter now.
