Educational tool or toll: AI prevalence redefines the future of education

Artificial Intelligence (AI) is everywhere now. Its presence has inflated a tech bubble that looms over the national economy, and it casts a similar shadow over education. AI has become a common topic of conversation among students, and most classes now include an AI policy on their syllabus. The reason, of course, is the AI cheating epidemic. That is what started the whole conversation: Before we had Case Western Reserve University AI and AI-generated education modules, we had students circumventing writing assignments by simply pasting the problem statement into ChatGPT. And make no mistake, this problem doesn’t just affect the lazy and entitled. Clay Shirky, a vice provost at NYU, wrote passionately in a New York Times guest essay about the harm that AI is causing in colleges. Describing the shift among students in the past few years, he writes, “This was a theme I’d hear over and over, listening to faculty members across disciplines at the end of the semester; even some of the students who obviously cared about the material and seemed to like the classes were no longer doing the hard work of figuring out what they wanted to say.”

The common rebuttal is that we just have to accept these risks because of how important AI will be in the coming years. Some pro-AI users believe that students aren’t being properly prepared for the real world if they are not taught how to use AI to their advantage. I don’t buy it, for two reasons.

First, I’ll admit that refining prompts for AI generators requires some effort and background knowledge. But “prompt engineering” is such a laughably pretentious term that I can hardly believe people say it with a straight face. The reality is that structuring AI prompts is easy and does not require a college education. It’s a waste of our time to be taught how to use AI because it’s designed for a universal audience. After all, it wouldn’t be good for AI companies’ market share if their tools were only accessible to those with training and education in their use. Perhaps that type of AI would be better for society at large, but it’s simply not the type we currently have.

Second, as most professionals with AI experience will tell you, the most important skill is the ability to be critical of its output. If it hallucinates facts, makes false assumptions or invents a fake equation, you need to be able to recognize that. But that’s just applying your own preexisting knowledge and critical thinking to AI. Colleges have taught critical thinking for centuries, and I’d argue that using AI as a crutch during our education erodes our ability to doubt the answers it gives us.

So, what can we do? I believe that if an assignment can be completed easily with AI, it wasn’t a good assignment to begin with. I think I speak for most students when I say that repetitive 200-word weekly discussion board posts don’t do a particularly good job of helping us learn course material. But when representatives from tech companies, desperate to increase their market share, suggest that the solution is to design our curriculum around the assumption that AI use will be universal, why should we accept that premise?

There’s a better solution: increase the grade weight of exams and make all exams blind-graded, in-person and closed-note. Some readers may be horrified at this suggestion. Personally, the most I’ve ever handwritten in one sitting is four pages, and I won’t pretend it wasn’t brutal. But we don’t need to return to the Stone Age to fix this problem. Chromebooks, and other similar laptops, work perfectly well for high school students, so why wouldn’t they work for us?

CWRU’s AI policy is functional for the time being. It provides multiple frameworks that let professors permit varying levels of AI use in their classrooms, and I think it’s good to give professors the freedom to embrace or reject AI. But to those professors who accept AI usage: Do you really think your classes are improved by it? Do you really think large language model (LLM) AI has any future in the classroom beyond novelty? Is it really worth all the trouble of possible foul play among students? If we all recognize the problems and risks of college students using AI, and we also (maybe) agree that the supposed benefits are paper-thin, then why are we still allowing it? The typical answer is that it’s difficult to stop. Many people have reacted to the shocking rise in cheating with fatalism: What’s the point of trying if you can pass with just a few prompts? What’s the point of going to college if no actual learning is happening anymore? Once again, I don’t buy it.

AI in education isn’t bad just because it erodes critical analysis and thinking. It’s also bad because of what it represents: the idea that education is replaceable and full of shortcuts. To some, AI isn’t a tool to help you learn more or become a better academic; it’s a tool we’ll simply have to use because, soon, it’ll be how everyone else makes money in the real world. The commodification of education didn’t start here; rising tuition rates and a growing trend of anti-intellectualism have reached a fever pitch, especially in light of the Trump administration’s recent shakedowns of Columbia University and Harvard University. We can’t control many of these trends, but with a little effort, we can control whether we allow AI to redefine education itself. Will the ideas of the future be thought up by our sharpest, most critical minds? Or will we let tech companies generate them instead?