The OTL has invited faculty with wide disciplinary expertise to contribute their perspectives on ChatGPT and AI writing tools for our OTL Blog. More faculty blogs to come!
By Dr. Kerstin Haring, Assistant Professor in the Department of Computer Science
In between Rihanna’s Super Bowl performance and the Chinese spy balloon, it is hard not to notice another big discussion: ChatGPT, the Artificial Intelligence (AI) that can write anything at least as well as a human. In a conversational tone, one can ask the AI to write code, poems, lists on any topic that might interest others, answers to test questions, roommate agreements, and essays.
Right off the bat it seems like there is nothing this conversational AI cannot do. After so many years of doubting whether AI could do anything for us, we finally have a powerful AI in our hands, and for the first time we truly see it as intelligent. This seems impressive, but it is only partly true. AI has already helped you in many ways you might not have noticed. If you have searched the internet or used online maps, taken a recommendation from your favorite streaming service on what to watch, or had a website translate something for you, you have used AI and taken advantage of some of the amazing things AI research has put out for all of us to use.
We have been around what is traditionally defined as AI for quite a while. The idea of AI was framed around the 1950s, when Alan Turing asked whether machines can think. In the 1980s, researchers talked about deep learning, a machine learning mechanism on steroids that was supposed to simulate the behavior of the human brain. Which it did, kind of. It can learn from large amounts of data and input, like a human. But so far, even sophisticated AI seemed rather dumb to us. People could rarely make sense of the output, and we benched AI to very confined systems: suggesting what to buy next, recommending a movie when we try to make an on-demand selection, or talking to us in a moderately good single-question, single-answer style. To us humans, hardly impressive or engaging. We want more from our AIs.
So, what changed? Why can AI suddenly write text that makes sense or generate images that look pretty good?
For one, we have a lot more data around. Data is needed to train AIs that are based on machine learning (or deep learning). Basically, one way an AI can learn is like a small kid. After seeing enough images of a duck, a kid learns what a duck looks like and that there is a difference between a duck and a swan. Then the kid could go and make a drawing of either, or a combination of both. AI can do the same, but it needs more data to learn. A lot more, actually. And where could we find hundreds of thousands of examples of all kinds of video, text, and image data? In the 80s, nowhere. Fast forward to today: we not only have more data online than we ever could have imagined, we keep producing it, and we combine it with vastly increased computational power that lets us process all that data into an output. This very text you are reading might itself end up as new training data for an AI. Or maybe it was even written by an AI? How can you tell?
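If you are curious what “learning from examples” looks like in code, here is a minimal sketch under invented assumptions (the duck and swan measurements are made up, and a real system learns from vastly more data in vastly higher dimensions): a program that is never told the rule for telling ducks from swans, only shown labeled examples, and then labels a new bird by finding the most similar example it has seen.

```python
import math

# A toy sketch with invented numbers, nothing like ChatGPT's scale:
# "learning" ducks vs. swans purely from labeled examples, given here as
# made-up (neck length, body size) measurements.
examples = [
    ((0.3, 2.0), "duck"),   # short neck, small body
    ((0.4, 2.5), "duck"),
    ((1.2, 6.0), "swan"),   # long neck, large body
    ((1.1, 5.5), "swan"),
]

def classify(bird):
    # Label a new bird with the label of the closest known example
    # (a 1-nearest-neighbor classifier).
    closest = min(examples, key=lambda ex: math.dist(ex[0], bird))
    return closest[1]

print(classify((0.35, 2.2)))  # -> duck
print(classify((1.0, 5.0)))   # -> swan
```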
As for this text: right now, you must take my word for it that it was not written by an AI. We have not figured out yet what to do with our new superpower, namely having access to powerful AIs online that do… well, what do they do exactly?
It turns out that one can spend quite some time figuring out what AIs like ChatGPT can do before finding what they cannot do. Internet communities love a challenge, and apparently AIs can now write a Jay-Z song about a golden toilet (weird, but funny), explain how they will replace humans (creepy), pass law exams (interesting, but considered cheating), and write philosophical final papers (also cheating, and it gets you an F for writing like a 12th-grader when you are supposed to be college level).
So what can this coherent text-producer not do? This is where we enter the weird and not funny black box of AI. We do not really know why, but while ChatGPT sounds convincing and shines with good grammar, it gets facts wrong. So does Google’s chatbot, Bard, which was recently off to a very rocky start after it made a huge factual error in its first public demo. Bard confidently declared that the James Webb Space Telescope took the very first pictures of a planet outside our own solar system. That would be a pretty cool fact, if it were true. It isn’t. It turns out that trying to learn facts from the internet can go wrong. In Google’s case, about $100 billion in market value wrong, give or take. If you go ahead and google (the action, not the company) whether Webb did take the first such photo, the search results also make it seem rather true. So powerful AI or not, search remains search, and machines do not understand semantics very well. They just seem to hide it better behind what we consider good syntax.
Big blunders are not unique to a specific chatbot. There are plenty of math and coding questions ChatGPT gets wrong. So wrong, in fact, that software sites like Stack Overflow banned AI-generated answers because they often amount to code gibberish.
So, is the AI hallucinating, or is it making stuff up? It turns out the latter is the problem. AIs make stuff up. They are essentially highly sophisticated text-prediction programs, the same basic idea as the text prediction you use when shopping online or trying to text. Yeah, we know you never talk that much about a “duck”, but bad words and bad facts should not come up in our polite AI systems. Unfortunately, when these sophisticated text-prediction AIs make predictions, the facts are not always right. They are predicted with a certain probability. Unlike the good old days of deterministic algorithms, where the same input always yields the same output, probabilistic models like these AIs incorporate randomness in their approach. That randomness is useful for learning from the internet, and at the same time it is the problem: the model learned from all of the internet. The sophisticated, convincing answer an AI gave you to a test question is maybe right. How right? You won’t know. So what chances should you take in order to make your own mistakes? We won’t know that either.

What we do know is that humans are still better than AIs at doing it the traditional way, which is, at least for Millennials and younger, synthesizing information ourselves from search engine results. It seems that for now and the foreseeable future, chatbot AIs make persuasive statements without regard for factual accuracy. It is unlikely that we humans will stop producing creative text or images ourselves, or that instead of teaching and learning we will hand “it” off to an AI. Partly because we have not defined what “it” really is. And partly because there is something so unique about what we can do that even the smartest scientists find it hard to comprehend how machines could ever do it: we experience joy when learning, we feel accomplished when we create something, and we create new knowledge and new things. AI just generates. And it does not feel anything about it.
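To make “sophisticated text prediction” a little more concrete, here is a minimal sketch under stated assumptions (a toy bigram model trained on a few made-up sentences, nowhere near a real large language model): the model stores no facts at all, only counts of which word tends to follow which, and it samples the next word with a bit of randomness, so the same prompt can come out differently each time.

```python
import random
from collections import Counter, defaultdict

# A toy probabilistic text predictor (bigram model). The "training text"
# is invented for illustration.
training_text = (
    "the telescope took a picture of a planet "
    "the telescope took an image of a star "
    "the telescope missed a planet"
)

# Learn only which word follows which word, and how often.
words = training_text.split()
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Sample the next word in proportion to how often it followed `word`.
    # Same input, possibly different output: that is the randomness.
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(predict_next("took"))  # sometimes "a", sometimes "an"
print(predict_next("the"))   # always "telescope" in this tiny example
```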
Or, as Princeton professor Arvind Narayanan phrased it, maybe we should not panic over the “bullshirt generators”. Or whatever word you predict is a good fit for what he might have said.
Stay tuned for my next take on AI in the classroom. With AI being able to do all the learning for us (read with caution), what is going to change for those who seek to learn? What does it mean to be a student in the time of AI?
The Office of Teaching and Learning has been exploring more information about AI writing tools and will be having ongoing conversations. Read our previous blog ChatGPT, Friend or Foe in the Classroom?, visit our OTL Events Calendar to see what upcoming sessions we are hosting, or contact us for support.