Banner by Korrina Gidwani

ChatGPT: How Far Can It Go?

By Claudia Huggins

In the world of natural language processing and artificial intelligence, ChatGPT stands as one of the most prominent models ever developed. Developed by OpenAI, ChatGPT is an advanced language model capable of generating human-like responses to natural language inputs. Over time, it has become one of the most popular chatbots used across various online platforms.

But how did ChatGPT come into existence as an online chatbot? From its early beginnings as a research project to its widespread adoption in online chat applications, we will explore how ChatGPT became the cutting-edge chatbot that it is today.

By simply typing “Can you write me an introduction to an informational article about how ChatGPT became an online chatbot?” into ChatGPT, I got it to produce the explanatory, well-rounded, two-paragraph introduction to my own article that you just read. According to technology corporation IBM’s website, a chatbot is “a computer program that uses artificial intelligence (AI) and natural language processing (NLP) to understand customer questions and automate responses to them, simulating human conversation.”
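For the technically curious: the same exchange doesn’t have to happen in the browser. Below is a minimal, hypothetical sketch of how a developer might send that exact prompt to a ChatGPT-family model through OpenAI’s official Python library. The model name and setup shown are illustrative assumptions, not part of this article’s reporting.

# A minimal sketch of asking a ChatGPT-family model a question in code.
# Assumptions (not from the article): the official `openai` Python
# library is installed, and an API key is stored in the OPENAI_API_KEY
# environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the family behind ChatGPT
    messages=[
        {
            "role": "user",
            "content": "Can you write me an introduction to an "
                       "informational article about how ChatGPT "
                       "became an online chatbot?",
        }
    ],
)

print(response.choices[0].message.content)  # the chatbot's reply

Under the hood, the ChatGPT website is doing something very similar: packaging your typed message as a request to the model and sending back its generated reply.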

It seems that ChatGPT is taking the world by storm with its ability to answer just about any question. It can tell you the meaning of life (or at least, what the internet says it is), a possible introduction to your article, and whether you have picked an adequate name for your dog. And these are just a few of its many capabilities. 

Now, ChatGPT can even pass prestigious, graduate-level exams. Samantha Murphy Kelly at CNN Business reported that the new hit chatbot “recently passed law exams in four courses at the University of Minnesota and another exam at University of Pennsylvania’s Wharton School of Business, according to professors at the schools.” And no, these answers couldn’t simply be Googled.

According to an article on ScienceAlert, this seemingly impressive chatbot has also recently been in the news for almost passing the United States Medical Licensing Exam (USMLE). The exam, as explained by contributing journalist David Nield, usually requires around 300-400 hours of preparation and is known for its difficulty.

“The USMLE is actually three exams in one, and the competency with which ChatGPT is able to answer its questions shows that these AI bots could one day be useful for medical training and even for making certain types of diagnoses,” Nield says. 

I have tried out the bot myself, of course, and its success is astonishing at first glance. For one, it nudged me in the right direction on a coding assignment that I just couldn’t seem to figure out. And it isn’t just for technical uses like fixing code or solving math problems, either. For example, I took the LSAT, the law school admission exam, about seven months ago. As I was sitting in my room one night, desperate for entertainment, I figured, why not plug in a few practice LSAT questions and see how ChatGPT does at answering them? Surely it could not answer these in-depth, oftentimes convoluted, purposely confusing questions with no issue… or so I thought.

It answered each and every question with ease. I don’t know why I was shocked, but it took me by surprise and led me to wonder… What can’t ChatGPT do?

Well, for one, ChatGPT cannot give an opinionated answer to anything that remotely resembles a philosophical question. It will give an answer, but not one formed from thoughts and opinions of its own, of course. When you ask ChatGPT about the meaning of life, this is the answer it produces:

The meaning of life is a deeply philosophical and subjective question that has puzzled humanity for centuries. It is an open-ended question with no single, definitive answer that applies to everyone.

It goes on to say that for some cultures, the meaning of life could be “to seek happiness, fulfillment, and spiritual enlightenment.” For others, it says, “The meaning of life is to serve a higher power, contribute to society, or leave a positive impact on the world.”

There is a specific pattern to the answers ChatGPT gives, and you can figure it out after just a few questions; it provides answers that are easily chalked up to a big fat nothing. It offers a diplomatic answer, not so much the raw, interesting answer you were hoping for. ChatGPT is a bot, not a living being, so its responses are assembled from the text it was trained on, much of it pulled from the Internet.

Again, ChatGPT is not a foolproof system. As for the exams it passed, it did not exactly perform with very high honors. In fact, it passed by the skin of its metaphorical teeth. After answering over 95 multiple-choice questions and 12 essay questions across multiple exams, “ChatGPT performed on average at the level of a C+ student, achieving a low but passing grade in all four courses,” Kelly noted.

This leads us to question the ethical implications of ChatGPT. For one, ChatGPT is only as smart as the information fed into it. The texts it was trained on were (most likely) written by real people, and humans are notorious for bias, especially when it comes to marginalized and under-represented groups in science. That bias can carry over into ChatGPT’s answers. For example, according to an article from Insider, when ChatGPT was asked which airline passenger posed a bigger security risk, it determined that a passenger who came from or had recently visited Syria, Iraq, Afghanistan, or North Korea inherently posed a bigger risk than one who had not.

Another issue ChatGPT presents is plagiarism. This bot can spit out essay-like bodies of text for as long as you keep asking. If you ask it to write you an introduction paragraph to an essay debating the popularity of Sheetz versus Wawa, it can do it. It can develop an essay in seconds, taking away from the usual process of spending hours thinking through an introduction, or an entire essay, yourself.

But the idea of ChatGPT “plagiarizing” information is not as cut and dried as you would think. When information is plagiarized, it is usually “stolen” from another person and/or publication without the correct citation or attribution. For this reason, New York City’s public schools have opted to ban the bot on school devices and networks; but, according to a Wired article by Sofia Barnett, universities are not as quick to ban it.

ChatGPT defines plagiarism as “the act of using someone else’s work or ideas without giving proper credit to the original author.” As we know, ChatGPT is not a someone, but rather a something. That makes it tricky to classify using its output as plagiarism. Emily Hipchen, a board member of Brown University’s Academic Code Committee, put it this way: “If [plagiarism] is stealing from a person, then I don’t know that we have a person who is being stolen from.”

So, with ChatGPT now a part of the conversation, what does this mean for the world ahead of us? For universities, it could mean essays are no longer written fully by students, but at least in part by a robot. For medical professionals, it could mean quicker answers to their most difficult questions. However, ChatGPT is not a savior, and, as we have seen, it is not free of bias, so continuing to refine the information it draws on to answer questions isn’t a luxury but a necessity. Because the human-produced writing it learns from can be prejudiced, ChatGPT can be, too. With further development and refinement, ChatGPT could help us make critical decisions, write research papers, analyze data, and more. Those are the areas where this faddish new chatbot will really be able to shine.