Which of your problems can Bing, Bard, and ChatGPT solve?

By Dr. Kerstin Haring, Assistant Professor in the Department of Computer Science

Now that Schrödinger’s AI cat is out of the bag, is it alive or dead? And which of your problems can Bing, Bard, and ChatGPT solve?

There are already many names for Artificial Intelligence (AI) language models, but what they are is large language models, and, for better or worse, they are not going away. So, what can they do for you? What problem do you have that this AI can solve?

I am going to postulate that if a large language model can solve your problem, you probably could have solved it yourself. So, the question remains: what can it really do?

For one, it can write things. For example, my problem right now is that I committed to writing this blog, and now the time has come to live up to that responsibility. But writing is hard. Frankly, I do not want to do this right now. Having an AI write most of this is very, very tempting. At the same time, however, it is also an unexciting prospect not to write this myself. Just parroting what the internet already says, albeit in a re-packaged way, is an underwhelming thought. Parroting off a large language model is also an exclusionary thought. But I will get to that in a second.

My previous blog post, ChatGPT and being a student: What could possibly go wrong?, raised the question of what it means to be learning in the time of large language model AIs. The short answer is that learning still only happens when an individual is willing to do so and engages in it. No AI can do that for us. The long answer is that we have been here before. I am dating myself here, but when I was young, I was told math is important because I would not always have a calculator with me. Then the internet came about, and I was told Wikipedia was a terrible and untrustworthy source (which it was, right at the beginning). Online search came about, and the consensus was to not trust the internet and to fact-check everything. The irony is that every person now has a device with access to the internet and a calculator at nearly all times, and Wikipedia is considered a legitimate way to gather information. What my teachers feared (the decline of my math skills due to a calculator, my inability to reason and deduce new knowledge) has not come true, because they failed to see the second- and third-order effects of the internet; for example, that we now use it to fact-check information. There is a very good chance that right now, we are not anticipating the effects of large language models (good or bad).

For a couple of months now, the internet has been having a field day trying to fool these large language model AIs. From coding to essay writing, from math problems to poems, the good, the bad, and the ugly of anything that can be written, we try to make these AIs slip up and say something wrong or dumb. By doing that, we have effectively become AI trainers. The intelligent chatbots seem to become more intelligent and harder to trick. That is half true. They are not becoming more intelligent; their underlying models are being updated, and the responses they generate improve as the models learn from what users input. While they seem to get better on the surface, they remain unreliable sources of information. No matter how much such a model learns and updates, it learns patterns, not facts and data. What is learned are patterns of language (hence, large language models), but there is no reasoning, no logical deduction, and no understanding of content. A very intelligent-sounding AI parrot, but a parrot nonetheless.
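To make the parrot metaphor concrete, here is a minimal sketch of pattern learning: a toy bigram model that picks each next word based only on which words followed it in a tiny training text. To be clear, this is not the transformer architecture behind ChatGPT (which works on the same next-token principle, but at a vastly larger scale), and the training text here is made up for illustration. The point is that fluent-looking output falls out of counted patterns alone, with no model of truth anywhere.

```python
import random
from collections import defaultdict

# A toy "language model": learn which word tends to follow which.
# It predicts the next word purely from patterns in the training
# text. There are no facts and no reasoning, only word statistics.

training_text = (
    "the cat sat on the mat and the cat saw the dog "
    "and the dog sat on the mat and the dog saw the cat"
)

# Count which words follow each word in the training text.
followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

def parrot(start_word: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    output = [start_word]
    for _ in range(length):
        options = followers.get(output[-1])
        if not options:
            break  # no pattern was learned for this word
        output.append(random.choice(options))
    return " ".join(output)

print(parrot("the"))  # e.g. "the cat sat on the mat and the dog saw the"
```

Run it a few times: it produces grammatical-looking fragments it has never "understood." Scale the same idea up by many orders of magnitude and you get a far more convincing, but conceptually similar, parrot.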

How is this parrot exclusionary? Very basically, AI learns from the vast amounts of text that are out there on the internet. If you have spent some time in our cyberspace, you know that it is not always a safe space, not necessarily polite, and not always based on facts. And that is putting it very mildly. Yet that is what large language models learn from, because of the very large amounts of text-based data they need. We call this bias in AI training data, and it leads to racist and discriminatory language generation. The way current AI safeguards against that is by filtering out “bad” content. A mix of humans and other learning models can be used for that, but here is where the exclusion comes in. The very mechanisms we use to protect against the bad stuff also exclude minority voices and input. Think of anything that is not written in “proper” English or, *gasp*, a new idea. New ideas, knowledge deduced from the things we already know, and the reasoning to get there are not what large language models do. They parrot; they do not think.

Not thinking is unexciting. However, that is not the problem students face. The problems students face are deadlines, grade pressure, juggling social life and sports, and, hands-down, just finishing their homework. Students very rarely cheat when they are in a supportive, safe space with appropriately challenging assignments.

As educators, this might be a good time to ask ourselves whether we are asking students to do the right assignments if a language parrot can produce a borderline better answer than a college student. Be warned: I do not have an answer to this question. I only have more questions. And they are not even new questions! While we are trapped between the potentials of chatbots, somewhere between the doom of students now exclusively cheating and the bliss of having an AI rephrase ideas for every learning style and for people who struggle with language, we arrive back at some of the questions we have already been asking.

Is the way we are creating assignments supporting learning and critical thinking in students?

Are we inclusive in the way we are giving assignments?

What are better ways of assigning and grading work?

And how dare we ask these questions of educators who dedicate so much of their time and passion to teaching and learning while being underpaid, overworked, and now having to look out for yet another thing?

Getting rather philosophical now, it seems that the same systemic mechanisms that make our AIs biased are at play here. Our AI holds up a mirror to our collective output to the world. It reflects whatever has the highest probability, and that is not always pretty. The latest AI now holds up a mirror to our education system as well. As we look at those two mirrors, our thoughts and blame about what is not going right bounce back and forth as in a mirror maze at the carnival. How could one possibly not get lost?

The Office of Teaching and Learning has been exploring AI writing tools and will be hosting ongoing conversations. Read our previous blogs ChatGPT, Friend or Foe in the Classroom?, ChatGPT and Being a Student: What Could Possibly Go Wrong?, and Getting Proactive with ChatGPT and Other AI Tools, visit our OTL Events Calendar to see what upcoming sessions we are hosting, or contact us for support.