AI tools are going to transform the way we do college assignments

Shreyas Banerjee, Executive Editor

Higher education has changed in incalculable ways over the past few years. Now that I have entered my final semester at Case Western Reserve University, it’s difficult to reconcile my memories of how classes functioned before the COVID-19 pandemic with how they function now. Necessitated by social distancing, technology is now at the forefront of almost every class.

Professors are now much more proficient with digital resources: many post lecture slides on Canvas, use Canvas discussion boards far more frequently, hold office hours over Zoom and sometimes even teach over Zoom when they are unable to attend in person. Turnitin is a commonly used tool to catch plagiarism, and some classes are held entirely remotely or asynchronously depending on the course content. Expectations about how accessible both students and professors should be have shifted dramatically, as have expectations about how much in-person interaction we are required to have. Making people leave their rooms for a supplemental instruction session or a quick group meeting is no longer realistic when we can all meet virtually. As a result of these changes, I’ve felt class discussions peter out, especially as we are all required to post comments on discussion boards and provide canned responses to each other’s posts. Whatever the direct effects, it cannot be denied that, thanks to technology, we have all made big shifts in how we go through our college lives since the onset of the pandemic.

As I walked into one of my first classes this semester, I became acutely aware that we are about to go through another big shift. Within the syllabus, the instructors included a new section titled “ChatGPT.” The section reads: “If the instructors suspect use of ChatGPT or other artificial intelligence (AI) language tools, then the student will be able to choose from three options: take an F on the assignment, resubmit the assignment with their own work, or do an oral exam/presentation. Should the student submit another assignment that the instructors suspect was created using ChatGPT or other AI language tools, then the student will receive an F on the assignment.”

Now, if you are unfamiliar with ChatGPT and what it entails, that may seem like a harsh punishment, but in reality this is one of the first instances of professors trying to prevent the utter annihilation of our current system of university assignments. While the stiff penalty may curtail such behavior this semester, it will not stop the onslaught of AI tools that are going to fundamentally change our society and our system of higher education.

That sounds hyperbolic on its face, but AI tools will have profound implications for future college life.

By now you’ve likely heard of ChatGPT—it’s only the fastest-growing web platform ever, outpacing even Instagram and TikTok. At the base level, ChatGPT is an AI chatbot built on large language models developed by OpenAI, which is simultaneously a corporation and a non-profit (it’s complicated). Drawing on the vast database of text it was trained on, ChatGPT generates responses to whatever queries you pose. In this way you can have the chatbot create poems, write code, draft essays, synthesize information and lay out plans if you ask it to. It’s important to remember that none of this is actually original and that the AI isn’t actually thinking—it’s really just a stochastic parrot. Large language models don’t reason or generate genuinely new content; they mirror what sources online and the overall human discourse already say. They are trained on data scraped from the internet and regenerate it in an often random manner, frequently making factual mistakes while sounding authoritative. As Google says about its own AI, “While [large language models] are an exciting technology, they’re not without their faults. For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently.”

All that being said, the technology is still wildly impressive. The proliferation of generated text has already, as mentioned before, raised red flags among professors as they try to discern what was written by students and what was produced by an AI chatbot. However, this issue is only going to get worse, because these technologies are only getting better at mimicking human language and synthesizing information.

OpenAI recently announced that it is upgrading the large language model underlying ChatGPT from GPT-3.5 to GPT-4. This not only improves ChatGPT’s reasoning capabilities but also lets it analyze visual information and answer questions about it. Most interesting, at least for professors, is that it can also mimic the writing style of the user, making plagiarism even harder to detect.

I recently had the chance to use Microsoft’s new chatbot, codenamed “Sydney,” which will be integrated into Bing later this year to improve search results and summarize answers, as well as perform other chatbot functions like ChatGPT. Sydney is based on GPT-4, and in my brief time with it, I was truly impressed. While Sydney did not appear to fall in love with me and gaslight me as it has with others, it did allow me to create chord progressions for songs, code macros for Excel and write hypothetical diary entries for historical figures, all in a far more natural manner than ChatGPT currently manages—though its search results are still often inaccurate.

I’ve also been able to try out Google’s recently released AI, named “Bard,” based on its LaMDA model, which is so convincing that it led one Google employee to claim it was actually sentient. Bard processes results far more quickly than ChatGPT or Sydney and can draw from Google’s vast array of search results to strengthen its responses. I had Bard write the lyrics of a song in the style of a specific Beach Boys album, create a chord progression and assign instruments to the various verses. I also had it draft emails responding to professors and classmates in various tones while suggesting how people might react to my messages. It even gave me feedback on this very article. These uses of large language models are only the start, however.

Microsoft, using GPT-4, will soon build tools into Microsoft Office that will greatly change how we create documents, emails, slideshows and spreadsheets. Soon you will be able to tell Copilot, as it is being called, to draft proposals based on notes you provide, build spreadsheets from the ideas you give it and even create PowerPoint presentations from Word documents. Within Office you’ll be able to simply type “create a proposal and presentation based off these notes” and it will do it for you. Additionally, you can ask Copilot to proofread what you’ve written so far and to write more paragraphs in the same style. Google is also going to roll out similar features within Google Drive in the next year, based on its own large language models, along with features that auto-draft emails.

Besides destroying a lot of jobs, these tools are also going to make education radically different. When students no longer have to actually synthesize information or even create presentations, most assignments we college students do will become completely obsolete. Writing discussion posts will become as simple as telling a chatbot to analyze a statement and draft a response. Creating proposals and slideshows will just be a matter of feeding in some bullet points and having the rest generated automatically. And while writing good essays, with reputable citations and proper formatting, will continue to be a struggle, AI can be used to draft and generate ideas far differently than most professors probably intend.

So how will academia react? We’ll probably continue to see more anti-ChatGPT clauses in syllabi, but as these tools improve and proliferate, there’s no stopping what’s coming. Perhaps this will lead to less emphasis on out-of-class work and more on in-class discussion and collaboration, but who can say?

Whatever happens, student life will never be the same.