Since ChatGPT’s launch last November, public opinion has transitioned through three distinct phases: first intrigue, then excitement and now fear. It’s safe to say we were all impressed when OpenAI’s flagship project first displayed its essay writing credentials, but maybe it was naive not to expect the debate surrounding this phenomenon to snowball.
Since then, a whole host of famous faces have weighed in on ChatGPT’s growing influence, from Nick Cave requesting it kindly leave songwriting to the experts (he may not have put it quite so politely, but you get the message) to Elon Musk calling for a six-month pause on developing systems more powerful than GPT-4.
But is this trepidation – or downright anger in Cave’s case – justified by what will happen if we let ChatGPT evolve unchecked? Or is media doom-mongering and disinformation standing in the way of a technological revolution?
Don’t believe everything you read
When it comes to technology with pretty much limitless potential, it’s easy for the media to fall into the trap of sensationalism, clickbait or hyperbole. For ChatGPT and the rest of the AI-chatbot cohort, this has already proven to be the case – take Bing’s AI tool, Sydney, which has been falsely reported to be bullying its users. Sure, Sydney could probably write this blog if you asked it to, but it isn’t capable of independent thought and certainly hasn’t been built with the intention of belittling or chastising the people it should be supporting.
Press coverage of AI chatbots should be accurate and fair, and this means acknowledging that this technology exists as a crutch for employees, creators and techies, and isn’t intelligent enough to get ideas above its station.
Put the opinion pieces on hold
What I loathe most about ChatGPT isn’t the tech itself, but the deluge of opinion pieces it has spawned, stating that AI will be the death of art, humanity and pretty much everything in between. The majority of these articles are unnecessary and unfounded, quick to denounce AI chatbots as the scourge of all creativity but failing to acknowledge the sectors where they will have the most positive impact. You’ll generally find that such pieces have been written by a reporter from an older generation, so concerned with job losses and plagiarism that they wilfully ignore any upside.
The bottom line is that ChatGPT does not yet stand to cast us aside, and will actually fill an important void in some seriously understaffed industries – for example, its ability to identify bugs in code will be an important asset for programmers in the next few years.
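To make that last point concrete, here’s a minimal, entirely hypothetical illustration (the function and the bug are invented for this post) of the sort of off-by-one error a chatbot can flag in seconds – the kind of second pair of eyes a stretched development team will happily take:

```python
# Hypothetical example: a subtle off-by-one bug of the kind an AI
# assistant is good at spotting on a first read of the code.

def basket_total_buggy(prices):
    total = 0
    # Bug: range starts at 1, so prices[0] is never added.
    for i in range(1, len(prices)):
        total += prices[i]
    return total

def basket_total_fixed(prices):
    # Corrected loop – or, more idiomatically, just: return sum(prices)
    total = 0
    for i in range(len(prices)):
        total += prices[i]
    return total

print(basket_total_buggy([3, 4, 5]))   # 9 – silently wrong
print(basket_total_fixed([3, 4, 5]))   # 12 – correct
```

A human reviewer might skim past that loop; pasting it into a chatbot and asking “why is my total too low?” is exactly the kind of grunt work this technology is suited to.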
Fake news extends beyond the facts and figures to the tone and sentiment expressed by journalists. Negativity breeds uncertainty in the reader, so the press has a responsibility to understand the function and importance of AI chatbots before letting their testimony influence the court of public opinion.
PR is big enough for the both of us
Producing engaging content on behalf of clients is an important part of what PR is all about, so it’s understandable for PRs to view AI chatbots with apprehension – I should know, I’ve felt this same uncertainty myself. However, this technology is not sophisticated enough to convey the same level of tone or emotion as a human author, and I’m sure clients would be pretty miffed to hear they’re paying for AI-generated content. Overall, I think it’s safe to say we don’t need to start looking for a new profession just yet.
In fact, as an industry, PR has an important role to play in the AI chatbot debate. Turning to ChatGPT sensationalism is a tempting route to easy coverage, but this will just add more fuel to the fake news fire.
Instead, it’s better to acknowledge that AI chatbots have a future in modern society and work out where our clients fit into this new reality. Our human instinct is to push technological boundaries, whether it scares us or not, so it’s time to accept that false reporting is a greater threat to progress than ChatGPT itself.