The computer algorithms we’ve been using our entire lives, whether through Google or Safari, have undeniably transformed the transfer, accumulation and generation of humanity’s knowledge. Algorithms that began as simple search engines, easily separated from one’s private life, have become increasingly prominent and impossible to avoid in every sector of our lives. Developments of the past five years are not only changing how we understand collective knowledge, but also pointing toward the degradation of our cognitive abilities and mental health. These developments, of course, are the commercialized use of large language models (LLMs) and natural language processing (NLP) systems, or as they are incorrectly coined in common vernacular: “artificial intelligence.”
Before discussing the research, evidence and impact surrounding these algorithms, I’d like to emphasize the importance of using proper terminology in discussions as serious as this. The constant push to call these algorithms “AI” is a manipulative promotional tool to separate the product (chatbots) from the companies selling them. This is not some mystical new technology, but an accumulation of algorithms being rebranded and advertised to the public in incredibly irresponsible ways. These companies are simply selling us a product, and the term “AI” is just a tactic to offload the responsibilities inherent to selling a product as dangerous as these algorithms. This article will use the correct umbrella term, ML (machine learning), to refer to all the algorithms and programs utilized in commercialized “AI.”
The worst effects of commercialized ML algorithms stem from the very serious ethical concerns raised by their never-ending intellectual property and privacy violations. Obviously, these algorithms are not intelligent. They cannot generate new ideas on their own, only work with the data they’re given. This has caused countless artists, authors and researchers to have their work outright stolen by the companies selling ML algorithms. Perhaps more violating are products such as Grok (the ML chatbot sold by X), which offer services where ML tools can be used to digitally undress any individual, including children, in any picture or video. Because of the disgusting lack of regulation and oversight, this is just one of many examples of ML algorithms’ extreme privacy invasion. Further, ML tools have made it easier than ever for private companies, individuals and our own government to track absolutely everything in our lives—everything from where we spend our time to who we spend it with. The simple (and horrifying) fact of the matter is, without regulation, nothing will belong to you anymore: not your ideas, not your life and not even your own body.
The only way to solve these problems is proper legal oversight. Joining these efforts is absolutely crucial for the safety and sanctity of our future, but it is not the only aspect we must pay attention to as students in higher education. Several of what I suspect will be many research studies comparing the cognitive health of ML users and non-users have recently been published, and to the surprise of no one, exclusive ML users show significantly decreased cognitive activity, critical thinking skills, reasoning and problem-solving ability.
A 2025 study indexed by the National Library of Medicine examined the cognitive effort, mental resource use and writing performance of college students who use ML, those who don’t and those who use it in moderation. Using extensive methodology—including, but not limited to, pupil movement and dilation tracking, functional near-infrared spectroscopy (fNIRS) to track brain hemodynamic activity and participant surveys—the researchers found preliminary evidence of reduced cognitive function in ML users. Because of the depth and longitudinal nature of the methodology, more data from this study will be published in the coming months. In the meantime, several other independent researchers have drawn the same conclusions about cognitive function.
For example, “Cognitive Risks of AI: Literacy, Trust, and Critical Thinking,” published in the Journal of Computer Information Systems, examined how the use of ML tools by postgraduate students, teachers and research scholars affected critical thinking, trust calibration and AI literacy. Not only did the authors find that users who exclusively, or near-exclusively, use ML tools have weaker critical reasoning and much poorer memory retention, they also found a concerning trend emerging: exclusive ML use corresponds directly to AI illiteracy. In other words, the bias and task off-loading that ML algorithms are trained to perform entirely erode critical reasoning and decision-making in educational and research processes.
This bias is further studied in a paper posted to arXiv (the preprint repository hosted by Cornell University): “The LLM Effect: Are Humans Truly Using LLMs, or Are They Being Influenced By Them Instead?” This paper uncovered an uncomfortable truth about ML use: “LLM suggestions may significantly improve task completion speed, but at the same time introduce anchoring bias, potentially affecting the depth and nuance of the analysis.” This, in my opinion, is the most dangerous part of ML use in academic and research spheres. Humans are wired for bias, and, as a result, every algorithm ever created ends up just as biased as the humans coding it. The only difference is that ML algorithms do not yet possess the critical thinking skills to look beyond the data and code available to them. Human critical engagement and intentional ML use are the only way around these anchoring bias traps.
Despite the numerous papers already published warning of the dangers of commercialized ML use, it’s just as important in these situations to look at what good may come from this technology. And surprisingly, there is a fair amount of good that can come from it. A Smart Learning Environments review on the effects of ML overreliance details the importance of not offloading crucial reasoning, questioning and problem solving to external tools. While there are plenty of research papers exploring the new ways ML algorithms have been used for good, both in research and education, the one thing they have in common is an emphasis on restrained and intentional ML use. That is to say, offloading repetitive tasks that don’t require much cognitive function to ML tools is not only a valid way to go about education and research, but can actually increase efficiency and success. It becomes an anchoring bias trap only when human critical thinking is minimally involved in the educational or research process.
In spite of the inconceivable damage commercialized ML use has caused, its harm to the academic sphere can be mitigated through us: the next generation of educators, researchers, doctors and professionals. An overreliance on these algorithms will not only waste your time, but erode the humanity that is essential to learning and discovering. This importance cannot be overstated, as education has been, and always will be, the most powerful tool in social, medical, technological and philosophical change.