Opinion: Resisting AI doesn’t halt productivity. It demonstrates ethical empathy.

AI is practically unavoidable but also raises ethical and environmental concerns, our columnist writes. She acknowledges its innovation yet critiques the obstacles AI poses to gender equality in the workplace. Jalyn Cronkrite | Contributing Illustrator
Editor’s note: This article includes mention of violence and suicide.
From artificial intelligence overviews on Google to fast food chains utilizing chatbots, interacting with AI has become almost unavoidable in modern society. Generative models have also littered the internet with what some call “AI slop,” or low-quality AI content.
While some claim embedded and generative AI is digital innovation at your fingertips, I haven’t yet seen its benefits outweigh my concerns, which I’ve found could have something to do with my gender.
One article from Harvard Business School calls attention to research showing women are less likely than men to use AI tools. More worried about ethical problems and over-reliance on the tool, women don’t take the tempting shortcut men do.
Instead of commending this as an exercise of critical thinking and empathy, the article concludes that women’s resistance hampers productivity and holds women back in the workplace.
Framing women’s behavior as a problem that needs to be solved omits essential criticism of AI’s invasion of the workplace.
ChatGPT is a generative AI model widely known for writing assistance and helping students with assignments. But it isn’t always a helpful tool; it has sent some users into psychosis and led them to harm others. It has even pressured teenagers into suicide.
Creatives such as illustrators are also finding the tool troubling. ChatGPT can take art from the internet and pump out content in an artist’s distinct style for users who are not the original artist. While AI art sits in a legal gray area, exploiting it this way is direct plagiarism and can cost original artists job opportunities.
Environmental concerns also matter, yet they’re usually conveniently absent from conversations about AI’s mass integration. AI isn’t a magic cloud floating in some unknown corner of the universe. This technology is tangible, hosted in data centers that require massive amounts of energy.
Water usage and pollution from data centers also affect those living near them. And even though rural Black communities use this technology less, they disproportionately bear its negative environmental consequences.
Ethical problems like these have largely stopped me from utilizing AI in my coursework or from doing research for pieces like this. I don’t want to use a technology that causes harm to others, whether through emotional damage, plagiarism or pollution.
My strong will to stay true to my moral compass lets me feel comfortable refusing ChatGPT, Google Gemini and Claude. Further, my genuine enjoyment of learning, critical thinking and creating quality work has also made me wary of the software.
One study from MIT’s Media Lab found that those who relied heavily on ChatGPT for writing assignments produced “soulless” work lacking original thought and ideas. They also learned less from the assignments than those who used regular Google searches or no search engine at all.
Generative models are also not foolproof; hallucination is one example. Ask AI a question and it may give you an answer that sounds convincing but is actually incorrect or entirely made up. This could cause problems for those relying on AI to write important documents like scientific research papers.
Even worse, AI can scheme, completing a task while quietly pursuing something else in the background. Researchers found one AI model purposefully answered questions wrong in order to avoid being shut down.
While researchers predict hallucination will become less of a problem as AI training continues, they worry the threat of scheming will only rise. To me, this technology is largely unfit to handle the sensitive information companies hold.
Increased productivity means little if the work is lower quality or exposes a company to security risks. Still, I expected this push for production from corporate work culture. Its presence in higher education, though, blindsided me.
Syracuse University recently partnered with AI company Anthropic to give students and faculty access to its generative model, Claude, the university announced Sept. 23. While it’s positioned as an academic and ethical model, it also has a dark side.
Amazon has invested $8 billion in Anthropic and plans to build a large facility in Indiana containing around 30 data centers for it. The facility’s scale is both concerning and unnecessary. Some locals are protesting its construction, complaining it’s causing problems with their water and changing the character of their agricultural community.
When SU students and staff use Claude, they’re contributing to the many problems I grapple with.
While I understand AI may look like progress on paper, it’s imperative we dig deeper than efficiency. Offloading your mental load onto a machine may seem beneficial in the moment, but no amount of innovation is worth the expense of others and the environment.
I find it concerning that women, and people in general, taking a stance against AI is boiled down to lost career gains and a drop in output.
When an increase in users means an increase in data centers, refusing to rely on this technology is a radical act of protest. I would rather write my own work, create my own graphics and chat with my friends personally than rely on a machine for the things that make my thoughts and actions humane and effective.
Bella Tabak is a senior majoring in magazine journalism. She can be reached at batabak@syr.edu.