We need to talk about AI

Artificial intelligence (AI) is on the brain at Case Western Reserve University. Your first day in class may have had anywhere from one to 20 minutes devoted to the subject of AI on the syllabus, depending on how forward-thinking your professor is—or on how much they like to hear themselves talk. In my case, a professor presented a color-coded chart on the syllabus, delineating the extent to which we will be allowed to use AI on each assignment. The world of education is really beginning to take AI seriously, as evidenced by the recently unveiled CWRU AI, located at the simple, sleek and self-important URL “ai.case.edu.” Most coverage of AI will point out some of its most obvious issues, such as its immense power requirement, its use of copyrighted material for training or—if you’re in the mood to rage against capital—its ability to replace workers and channel the resulting savings to shareholders. But I’d like to focus on some lesser-discussed topics, especially its ability to shape the education and practice of professionals.

As a budding civil engineer, I can’t help but tie this into the struggles of my field. Civil engineering is undergoing a generational shift. Older, experienced engineers are retiring, and many of them are concerned about the new ones replacing them. You see, engineering has become substantially more complicated over the past few decades, and engineering education has struggled to catch up. The coursework is still strenuous, but it has sacrificed depth in each subject in order to cover all the new ones that engineers are expected to know. To use a civil engineering metaphor, this leaves freshly graduated engineers with a wide foundation of knowledge spanning every subject. They can expand upon this foundation with a comparatively small but strong frame, supported by one subject but influenced by an engineer’s broad knowledge. But in the worst case, this foundation may be too shallow. If you listen to a more cynical retiring engineer, they’ll complain about how new hires only know how to plug numbers into equations without knowing where those equations or numbers come from. And I worry that the advent of AI in engineering education will exacerbate this problem if students come to rely on AI as a means of getting their degree.

The most important skill an engineer has is their “engineering judgment”—the intuition informed by decades of experiencing what works and what doesn’t, alongside the confidence to apply that intuition to completely novel situations with limited precedent. One of the most dangerous capabilities of AI is its ability to act as an accountability sink. Just as it’s hard to hold a corporation accountable for the individuals within it and vice versa, shifting our responsibilities onto AI makes it harder for us to control those responsibilities and their consequences. What happens if engineers one day begin to rely on AI for engineering judgment and subconsciously shift responsibility away from themselves? The lesson those cynical engineers repeat over and over again about design software is that you must be able to critically evaluate how it reaches its results. Yet part of the point of AI is that it’s so complicated that we can’t meaningfully comprehend exactly what it does to reach its conclusions. People who study AI call this the “black box problem,” and it’s especially dangerous when AI is involved in making engineering decisions where human lives are at stake.

But to look at the bigger picture, it’s also worth asking whether AI is something we really need right now. It’s not controversial to point out that massive amounts of money have been sunk into large language models. And given that the technology is novel, disruptive and has the potential to make even more massive amounts of money, it’s hardly a surprise how much venture capital has flowed to companies such as OpenAI. But it is controversial to question whether AI is something we should be pursuing at all. In most circles, AI is perceived to be inevitable—just look at CWRU, whose embrace of AI is framed as getting with the times. In a country where many pressing issues—such as hunger, housing or infrastructure—are mainly a matter of directing enough money to the right institutions, should we really be sinking this much money into AI, when the industry’s goal of artificial general intelligence—an AI capable of any task assigned to a human—may never arrive? Is the financial and electrical price of all these chatbots worth the extra convenience? Will AI ever be anything more than convenient?

Look, I’ll be honest. I struggled a lot writing and rewriting this. Thinking about all of the “what-ifs” of AI made me feel like a stereotypical crotchety old man, ranting about these new-fangled “computer” devices that management is giving to everybody in the office. Or, more to the point, an elementary school teacher in the ’90s, sternly telling their students that they won’t always have a calculator in the “real world.” Because these comparisons get at what AI really is—it’s a tool. Is it a useful one? I begrudgingly admit that it is. I’ve relied on ChatGPT to find things that Google could not more times than I’d like to admit. And I can’t pretend to know whether it’ll have as much staying power as the computer or smartphone. But using a tool well means knowing about the good as well as the bad—not just the jobs that are made easier because of it, but also the jobs that may be made more difficult by overreliance. We cannot just appreciate the benefits presented by a technology of the future; we also must recognize the very real risks that come with it.