ChatGPT Is NOT the Enemy

What if I told you there was free software that could write your research papers, solve your calculus problems, debug your computer science code and give you relationship advice?

Believe it or not, this software already exists and has attracted over 100 million users since its launch in November 2022. Created by OpenAI, a San Francisco-based artificial intelligence company, ChatGPT is a chatbot that uses AI to provide human-like answers to just about any question you can think of. Whether you are hoping to find a ten-point summary of F. Scott Fitzgerald’s “The Great Gatsby” or a recommendation for the best burger joint in your area, ChatGPT has the answers. All you have to do is type out your request. Think of the chatbot as Siri, but infinitely more helpful and far less annoying. It is no wonder that ChatGPT has taken the world by storm.

Before sitting down to write this article, I set aside some time to play around with ChatGPT. I had to see for myself what all of the excitement was about. To begin my impromptu investigation, I fired off a string of carefully worded queries about topics with which I was familiar. Expecting just a few lines of dull, inarticulate text to lazily appear on the interface, I was stunned by what the little blinking rectangle left in its wake. Within seconds of tapping the enter key, ChatGPT spat out a series of clear, concise and informative responses. The smooth syntactic structure of the humanlike replies was simply mesmerizing. The software even managed to include additional information that I did not explicitly request, making the answers all the more cogent and impressive. Every last one of ChatGPT’s responses was well-written and thoroughly convincing. But were they all accurate?

At present, the spread of misinformation, bias and outright falsehoods poses the greatest threat to the viability of ChatGPT. Because the software learned everything it knows from text scraped off the internet, it can only ever be as accurate as its sources. That is a scary thought. Remember the old saying, “don’t believe everything you read on the internet”? Now you can add chatbots to the ever-expanding list of things to use with caution. And yet, when ChatGPT is telling you funny jokes and typing up your ten-page English paper, it often feels more like a friend than a foe. That is the greatest danger this new technology poses to our digitized society. How are we supposed to protect ourselves from misinformation when it sounds just like something our best friend or favorite professor might say? These days, the only thing standing between a truth and a lie is our own minds.

But let’s not get ahead of ourselves. We are not living in a sci-fi movie where computers are gearing up to conquer the human race. ChatGPT is only as powerful as we allow it to be. Before trusting whatever the chatbot has to say, it is imperative that we do our own research across multiple platforms to make sure we are grasping the bigger picture. If we fall prey to what OpenAI itself calls the software’s “plausible-sounding but incorrect or nonsensical answers,” we lose. By fact-checking the chatbot and learning to navigate its weaknesses, we can come out on top.

The reality is that ChatGPT isn’t going away anytime soon. With the global conversational AI market projected to reach $18.6 billion by 2026, chatbots are only going to become more prevalent in our daily lives. So why not embrace the change and start familiarizing ourselves with the chatbot’s lingo? By asking questions about topics that aren’t completely out of our depth, we can glean a basic understanding of how the chatbot “thinks.” Learning how to formulate effective questions and, more importantly, how to critically interpret ChatGPT’s responses can protect you from the perils of biased or inaccurate information. It isn’t hard to see how ChatGPT can be an invaluable tool. All you have to do is use it responsibly.