Opinion

Facebook’s chatbot ban is a chance to finally get our legislation in order

As Facebook's chatbot ban continues, Flamingo AI CEO Catriona Wallace argues that Facebook's strict new policies might actually be the perfect chance to right the AI world's wrongs.

In the fallout from the Cambridge Analytica scandal, Facebook has gagged the development of its chatbot applications.

These applications previously provided the core technological capability for a large number of small and not-so-small third-party chatbot providers. It’s bad news for many chatbot companies, a large number of which will go out of business if the ban continues.

The decision is a result of the Cambridge Analytica scandal – a horrendous breach of data and trust, though perhaps less clearly a breach of privacy legislation. We will find out about the exact breach of law during the Senate hearing at which Mark Zuckerberg appears this week.

Currently, advances in technology outstrip advances in legislation, and in the chatbot, conversational commerce and robotics sectors, legislation is way behind.

One ramification of all this is that Facebook has made more than 20 changes to its privacy rules in the last week, including adjusting algorithms and shutting down applications that may expose data to external sources. Chatbots are at the top of the list.

Facebook’s chatbot technology is fairly rudimentary, even by the company’s own admission. Developers can launch chatbots using the Facebook platform in a matter of minutes or hours by plugging into Facebook APIs.
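To give a sense of how low that barrier is: a bare-bones Messenger bot of this era amounts to little more than a webhook plus a single call to Facebook’s Send API. The sketch below is illustrative only – Python with Flask, with placeholder tokens you would create in your own Facebook app, and a Graph API version from the period:

```python
# A minimal Facebook Messenger echo bot, sketched with Flask.
# PAGE_ACCESS_TOKEN and VERIFY_TOKEN are placeholders obtained
# from your own Facebook app and page settings.
import os
import requests
from flask import Flask, request

app = Flask(__name__)
PAGE_ACCESS_TOKEN = os.environ["PAGE_ACCESS_TOKEN"]
VERIFY_TOKEN = os.environ["VERIFY_TOKEN"]

@app.route("/webhook", methods=["GET"])
def verify():
    # Facebook calls this once to confirm you control the endpoint.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification failed", 403

@app.route("/webhook", methods=["POST"])
def receive():
    # Each POST may batch several messaging events.
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            text = event.get("message", {}).get("text")
            if text:
                send_text(event["sender"]["id"], f"You said: {text}")
    return "ok"

def send_text(recipient_id, text):
    # The Send API: one authenticated POST per outgoing message.
    requests.post(
        "https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )

if __name__ == "__main__":
    app.run(port=5000)
```

A few dozen lines of glue code are all that stand between a developer and Facebook’s enormous user base, which is exactly why the absence of active security requirements matters so much.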

However, there is little support from Facebook, very basic technology and little by way of active security requirements. These little robots are vulnerable to hacking and data scraping, and may be regarded as additional windows into people’s private data.

So, aside from the chatbot developers in pain right now, it is actually a good thing for privacy that the chatbots are being gagged. All this could have been avoided in the first place, however, if Facebook had a true culture of security built into its application layers, and particularly its chatbots.

Humans have an uncanny knack for telling chatbots or robots far more personal information than they would tell a human, so these little robots should have been at the top of Facebook’s mind with regard to security. Clearly they are now.

Many of the other chatbot or conversational commerce platform providers recognised from day one that these chat interfaces can be used to extract the most sensitive customer data, be it financial, medical, personal information or contact details.

Although Telstra’s Codi was an epic fail, at least it didn’t give away private information.

Facebook is now up against the US Senate with a ‘please explain’. Trust has been violated and chatbots are silenced. This moment provides a strong signal to the market that we are at the next stage of evolution in this heavily hyped field.

It’s a necessary failure that all companies with chatbots and conversational interfaces need to take very seriously. Hopefully legislation will now get a move on and start to regulate the AI sector in a sensible way.

And why is this so important now? Well, we will see 30% of all customer interactions globally handled by chatbot interfaces within the next three years, according to Gartner.

So, this is all headed towards us like a freight train, and currently the legislation is severely lagging. And that is just the lag in the US.

The lag is another dimension again when we consider Australia’s laws and knowledge of this sector. Australia will be hit with a similar incident in the near future if we don’t start having these discussions and debates about legislation and privacy around AI-based technology.

Dr Catriona Wallace is CEO and founder of Flamingo AI.
