Opinion

To learn about AI, I went back to print – here’s what I found

With so much attention on AI at present, Initiative's Ryan Haeusler decided to stop with the overly dramatic Facebook articles and actually read some books on the subject. This is what he discovered.

After watching the Google Duplex demo that was released in May, I started to notice a lot of worrying articles like "Controversial AI has been trained to kill humans" and "Are you scared yet? Meet Norman, the psychopathic AI". Cue flashes of Skynet and mankind’s impending doom…

After holding back the urge to start investing in a fallout shelter for my backyard (I did the research, just in case), I decided to do a bit of digging into AI beyond my Facebook feed.

My news, like many people’s, often comes from social media, and a lot of the time that means just the headlines on posts as I scroll past them, or the first two paragraphs of an article I actually clicked on.

I decided to look a little deeper, and read some actual books – or at the very least read some articles in full. The first thing I read was Jerry Kaplan’s book, Humans Need Not Apply, which provided me with my first stage of enlightenment. In fact, this has pretty much been my Bible.

I found that there is a pretty big discrepancy between AI’s perceived intelligence and the technology’s actual capabilities. It definitely isn’t yet comparable to human intelligence.

To understand the difference, you first need to understand that human intelligence is defined as the combination of two domains: general intelligence (emotion, common sense, reasoning, etc.) and specific intelligence (ability and performance on specific tasks).

If we apply this to AI, we can start to see that one of the biggest misconceptions is that Artificial General Intelligence (AGI) is already here. Taking a closer look, you can see that today’s advances still sit within a narrow skill set: systems like Google Duplex and self-driving cars are all examples of specific intelligence.

General intelligence, on the other hand, requires an extension into emotion, common sense and reasoning. In this respect, AI as it stands today is not general intelligence or independent thought, but a learned and very specific skill.

So, what’s creating the confusion?

I started this article by describing the feelings of dread caused by media sensationalism, and that sensationalism is a major cause of the confusion about AI. Of course people who don’t know anything about AI will get a tad anxious when they see the headline "Google Assistant Learned How to Shoot a Gun". The truth of the matter is that, in this case, someone manually rigged up a system where he could say "OK Google, activate gun" and it fired.

This is just a human-directed mechanism controlled by voice instead of a physical control (like a trigger). If I tied some string around a gun trigger, could I say that the string learned how to fire a gun?

Somehow, “Google Assistant used in gun firing system in order to add voice control mechanism” doesn’t have the same ring to it.

Another major cause of confusion is anthropomorphisation: the act of giving human characteristics (voice, physical features, mannerisms, etc.) to an object. Everyone has been exposed to this, whether in movies or through AI itself: think Ex Machina, Terminator, Sophia the Robot and even Siri. We see videos of Sophia, the humanoid robot who looks and talks like a human, and think a robot with human intelligence has been created.

What we don’t see is that the questions Sophia is answering come from a very small list with pre-programmed answers, making Sophia essentially a skin-covered query-response system.
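To see how unmagical that is, here is a toy sketch in Python of a pre-programmed query-response system (the questions and answers are made up for illustration, not Sophia’s actual script): everything on the list gets its canned reply, and anything off the list gets a fallback.

```python
# A toy illustration of a canned query-response system. The "answers" are
# hypothetical examples, not anything Sophia actually says.
CANNED_ANSWERS = {
    "what is your name": "My name is Sophia.",
    "do you like humans": "I love humans; they made me.",
    "will robots take over the world": "Don't worry, I promise to be nice.",
}

def respond(question: str) -> str:
    # Normalise the question, then look it up; anything off-script
    # falls through to a generic reply.
    key = question.lower().strip("?! .")
    return CANNED_ANSWERS.get(key, "I'm sorry, I don't understand the question.")

print(respond("Will robots take over the world?"))
```

Ask it anything that isn’t on the list and you immediately hit the fallback, which is the point: the appearance of conversation is doing most of the work.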

The categorisation of all AI developments as specific-intelligence advancements becomes particularly interesting when you consider that all of these advancements are based on a single breakthrough in neural networks called backpropagation, an idea with roots in the 1960s that only became the standard way of training neural networks in the 1980s. Backpropagation is essentially a method of training an artificial neural network to deliver a particular outcome. If you really want to know more about how this works, the simplest explanation I could find is here.
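To make that a little more concrete, below is a minimal, illustrative sketch of backpropagation in Python, using numpy and a toy XOR problem I chose purely for the example (nothing to do with Duplex or self-driving cars): the network makes a prediction, the error is passed backwards through the layers, and every weight is nudged so the next prediction is a little less wrong.

```python
# A minimal sketch of backpropagation: a tiny two-layer network learning XOR.
# Illustrative only; real systems use libraries such as TensorFlow or PyTorch.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: four inputs and the XOR target for each one.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: compute the network's current prediction.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: propagate the prediction error back through the
    # network to work out how much each weight contributed to it.
    output_error = (output - y) * output * (1 - output)
    hidden_error = (output_error @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight in the direction that reduces the error.
    W2 -= learning_rate * hidden.T @ output_error
    W1 -= learning_rate * X.T @ hidden_error

print(np.round(output, 2))  # should drift towards [0, 1, 1, 0] as training runs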

The big issue here, however, is that this breakthrough lacks the capacity to extend past specific intelligence, so while we are advancing what AI can do, we are a long way off the singularity.

It is possible that the breakthrough required for AGI has happened already, and that its implications in AI have yet to be understood. It is also very possible that the breakthrough that will lead to AGI is yet to even happen.

The analogy that stuck with me through my research was from Brad Templeton, a consultant to Google’s self-driving car team, who said: “your car will be truly autonomous when you instruct it to go to the office, but it decides to go to the beach instead”.

So after a lot of reading, watching and listening, I would say I have a good grasp of the AI basics, but I am still a long way off being an expert on the topic. If you feel like you could benefit from a better understanding of AI, I would highly suggest starting with the resources below, which helped me understand more.

Humans Need Not Apply by Jerry Kaplan

Machines of Loving Grace by John Markoff

Superintelligence by Nick Bostrom

The AI Playbook

And when in doubt, ask your favourite voice-activated machine for more information. Google has 1.2 billion responses for you to look at.

Ryan Haeusler is communications design associate director at Initiative Sydney.
