Let’s call time on the ChatGPT hype cycle

Public Address CEO and CommTech expert Shane Allison argues that OpenAI's ChatGPT is just another tool in the communicator's arsenal, not the revolution many promise it to be.

It might be too early to call it, but I hope that after 41 days of public release, we've reached peak hype-cycle on ChatGPT with Ryan Reynolds' Mint Mobile ad.

The only element of the Australian hype-cycle left is for a creative PR agency (I'm looking at you, Thinkerbell) to use it in a domestic campaign.

Since ChatGPT was opened to a public beta, there's been no shortage of breathless commentary about the ascendancy of AI and how we're all out of a job – especially those employed in white-collar roles, like PR professionals, who turn out a variety of copy for a living.

In fact, I was chatting with a friend over the summer break who believes that ChatGPT is going to cut swathes through the ranks of customer service agents, knock off a few communicators, and polish off a few associated fields on the way through.  

Fortunately (or perhaps unfortunately when it comes to some occupations), this is not going to be the case. This might sound a little bit odd coming from the CEO of a technology company – but let me explain.

To understand why I’m not as excited as my friend was about ChatGPT, we should first simplify some of the hype around the GPT-3 model that it is built on.  

I find it useful to think about ChatGPT as a program that has two very different capabilities:

  1. ChatGPT can understand a user’s question phrased in natural language 
  2. ChatGPT can process this question to generate or summarise text

Because of the large datasets it is trained on, the generated text can also be perceived as the product of research. But ChatGPT doesn't consider the accuracy of the facts it produces, so it is up to the user to verify (or not) this 'research'.

In fact, its generative underpinnings mean it will often simply make things up.

These two capabilities, particularly the ability to summarise or generate text based on a provided prompt, are not particularly new.

The difference with ChatGPT is that all of this is bundled into an incredibly user-friendly interface thanks to its ability to understand a natural language question, allowing millions of people to easily use a summariser or generator for the first time. You don't need to boot up Python (the most common programming language for machine learning), find a model, train it on a specific dataset and then write a program asking it to generate text.
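For the curious, here is a toy sketch of that older workflow – pick a dataset, train a model on it, then write code to generate text. The "model" here is a simple Markov chain rather than anything GPT-like; it exists only to show the steps ChatGPT now hides behind a chat box.

```python
import random

# A tiny training "dataset" of tokens (real models ingest billions of words)
corpus = (
    "the media release announces the launch of the new platform "
    "the platform helps communicators write the media release faster"
).split()

def train(tokens):
    """Build a next-word lookup table from the token stream."""
    model = {}
    for current_word, next_word in zip(tokens, tokens[1:]):
        model.setdefault(current_word, []).append(next_word)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the chain from a starting word to generate text."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = model.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

model = train(corpus)
print(generate(model, "the"))
```

Even this toy version requires choosing data, training and writing the generation loop yourself – exactly the friction ChatGPT's chat interface removes.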

But, for all of that, the text ChatGPT produces is just that – generative. It can't come up with original thought, it will use the same structure for almost every media release you ask it to write, and it will try to write a five-paragraph essay when you ask it for an article.

In short, ChatGPT is a relatively well-trained intern, capable of generating text, but still sharpening its critical thinking and ability to understand the user’s context.  

So if your current writing level is that of a PR intern – yes – your role might be under threat.  

Fortunately, this means that communicators are safe from ChatGPT. A colleague or client may be able to ask ChatGPT to write a media release, but the media release will be bland, boring and predictable.  

All of this is not to say that programs like ChatGPT and future GPT iterations aren’t going to become useful tools in a communicator’s arsenal.  

There’s no denying that the ability to ask a computer to easily write a first draft of something is useful. This might help increase the average writing ability of communicators, as we spend more time editing, refining and contextualising – and less time building the basic structures of content. ChatGPT puts permanently to bed the blank page problem in seconds.  

At Public Address we're already experimenting with how the underlying text generation models can help further personalise pitches sent to the media through our platform. Language generation models are already very good – and reliable – at iterating on a given paragraph or two. ChatGPT hasn't changed that, but it has made the technology far more accessible.

If you want to see these models in action, look at how the social content and SEO industries have been using OpenAI's work in this space for a couple of years. In fact, you can pull up Gmail and look at the prompts the application offers as you're typing.

But these models aren’t going to replace us in the next decade. 

All of these tools need a human being to drive them, make sense of the output, and contextualise it. To borrow from Reagan’s Russian proverb – trust, but verify. 

Natural language generation will eventually (and, with Microsoft's proposed acquisition of a good chunk of OpenAI, perhaps sooner) give you back more time in your day, delivering on the fundamental promise of technology.

But it is a long way off from replacing communicators' creativity, curiosity and intellect.

Shane Allison is Australia’s leading CommTech expert, the CEO of Public Address and the President of the Public Relations Institute of Australia 
