Opinion: AI can support nutrition, but cannot replace dietitians
Students may rely on AI for nutrition guidance. Without professional oversight, our columnist argues, algorithmic advice risks misinformation, disordered eating and missed medical considerations.
Emma Lee | Contributing Illustrator
Artificial intelligence has officially entered the chat … and lately, it’s been giving dietary advice.
From calorie calculators to full meal plans, more people are turning to tools like ChatGPT and Google Gemini to answer questions traditionally handled in clinical settings. This is both fascinating and concerning. While AI can technically make nutrition advice more accessible, it also blurs the line between education and nutrition counseling in ways we’re not fully prepared for.
Lawmakers have already stepped in to regulate AI “therapists” for mental health, and Illinois recently went so far as to ban AI from therapeutic decision-making. But nutrition chatbots remain largely unregulated, even as more people use them to manage chronic conditions like diabetes, obesity and fatty liver disease, among others.
The question remains whether AI can safely act as a dietitian.
Recent studies suggest AI is decent at giving rudimentary nutrition advice but falls short on personalization and clinical judgment, struggling to account for medical history, lifestyle and individual needs.
A 2024 study evaluating ChatGPT’s dietary guidance for noncommunicable diseases found its accuracy ranged from about 55% to 73%, depending on the condition. For cases like fatty liver disease, responses were fairly aligned with clinical guidelines. But when multiple health conditions overlapped, such as diabetes and chronic kidney disease, the chatbot often provided contradictory or incomplete advice.
Another 2025 study comparing ChatGPT to Gemini in diabetes nutrition management found that while both tools showed promise, Gemini performed better in complex scenarios. The findings suggest AI health tools are improving but remain inconsistent, especially in complicated cases.
AI can explain what a balanced diet looks like, but it can’t reliably tailor guidance to real-life health situations, and this is exactly where healthcare professionals matter most.
To be fair, AI does have its benefits. It’s free, fast and available 24/7. It can help people who lack access to health care providers, especially in underserved or rural areas. AI can answer basic questions, explain food labels, generate meal ideas and reinforce general healthy eating patterns.
For Syracuse University students juggling stress, limited budgets and inconsistent dining hall options, AI can seem like a convenient stand-in for professional guidance that feels expensive or inaccessible. Between packed class schedules, late-night studying, gym culture and social media pressure, it’s tempting to turn to ChatGPT or even TikTok for quick answers about bulking, cutting calories, using supplements or eating “clean.”

Graphic by Katie Crews | Digital Design Director
But that’s the thing: nutrition isn’t just about diet “tips.” AI can’t evaluate lab values, assess food allergies, recognize disordered eating patterns or read body language and emotional cues the way a human clinician can. It also doesn’t always account for updated guidelines, cultural dietary practices or especially nuanced medical needs.
Moreover, errors like recommending the wrong protein intake for kidney disease or unsafe foods for someone with allergies can carry real health risks. Unlike a licensed professional, AI isn’t liable when it gets things wrong.
Right now, mental health chatbots face growing legal scrutiny, yet nutrition AI is influencing health decisions daily. As states debate who should regulate AI in health care, an important question is where the line falls between education and nutrition counseling. When AI starts replacing healthcare professionals, even unintentionally, the consequences are no longer hypothetical.
Students are more vulnerable to diet culture, disordered eating patterns and misinformation that frames health in extremes: eat this, cut that, avoid everything. AI often cannot recognize when someone might be under- or overeating, or dealing with a medical condition that makes generic advice unsafe. What reads as “motivation” online can quickly turn into harmful restriction or anxiety around food, especially in a high-pressure academic environment like SU’s.
At the same time, this raises a broader equity issue. Many students rely on AI because they lack easy access to personalized care or reliable nutrition counseling. If AI is going to play a role in how young adults make health decisions, it shouldn’t replace professional support. Instead, the technology should highlight the need for better health care resources, clearer nutrition pedagogy or systems that make credible nutrition education more accessible than algorithmic advice.
At SU, students have access to free nutrition counseling with registered dietitians through the Barnes Center at The Arch, providing professional support without the aforementioned risks of misinformation or oversimplified advice. Unlike AI, campus dietitians can consider medical history, mental health and dietary restrictions.
AI shouldn’t be banned from nutrition, but it must stay in its own lane. It can serve as a support tool, not a replacement for registered dietitians. AI can help people learn, explore food options and better understand basic health concepts. When it comes to individualized care or complex dietary needs, human expertise remains essential.
Using AI for meal ideas or straightforward advice is fine. But if an AI ever starts calling itself your dietitian, that’s when we should start questioning our reliance.
Sudiksha Khemka is a sophomore nutrition major. Her column appears bi-weekly. She can be reached at skhemka@syr.edu.

