How Syracuse University partner Claude AI collects, uses student data
During a Claude AI workshop, a graph showed that more SU personnel are downloading and using the tool. Since the early days of SU and Claude’s partnership, concerns about security and transparency have grown. The D.O. has outlined the AI’s data process. Hannah Mesa | Illustration Editor
When Syracuse University granted campus-wide access to Claude in September, it became one of the first universities to provide access to the AI chatbot through an enterprise agreement. Now, SU personnel are using Claude AI each day, Information Technology Services shared at a Nov. 12 AI at Work presentation.
Claude, an artificial intelligence model developed by Anthropic, includes an AI chatbot and other AI-powered tools used for education. As the tool gains popularity at the university, some users hold concerns about data privacy.
Under SU’s partnership with Anthropic, all user data collected by Claude is kept within the university and not used to train Anthropic’s underlying language models, Jeff Rubin, SU’s senior vice president for digital transformation and chief digital officer, said.
“If you look at the writings (Claude) puts out, it’s on the future of education and artificial intelligence now that plays a role in pedagogy,” Rubin said.
Faculty, students and staff can claim a license with their SU email to get started with the enterprise version, which allows users to upload documents and prompts, connect with Microsoft 365 and work on long-term projects.
Amid questions about security and transparency, The Daily Orange outlined how SU uses Claude user data and safeguards its ethics.
Tracking user data
Anthropic trains models with data collected from the internet, third-party cookies, data from users and crowd workers, according to Anthropic’s website.
Claude stores all shared user data by default, Anthropic’s website states. However, Anthropic employees can’t access users’ conversations with Claude’s AI chatbot unless given permission.
When using incognito mode, a browser setting that lets users browse without saving a search history, Rubin said Claude still processes user data even though the device doesn’t actively save cookies.
However, the university could recover this data with a subpoena.
As chief digital officer, Rubin can view faculty, staff and students’ Claude metadata, he said, but users’ actual conversations are not accessible.
Projects, documents and other attachments also cannot be accessed by anyone at the university, Rubin said.
Accountability measures
A new SU “community of practice” will oversee the integration of Claude by putting together a team composed of members from different areas of study and professional backgrounds, Rubin said.
The group will work to ensure AI doesn’t negatively affect the community’s academics, security or mental health, using solution-focused discussions to target problems AI causes on campus, he said.
As Claude’s interface rapidly changes, ITS sends frequent newsletters informing students and staff of the tool’s system updates. On Oct. 21, ITS announced that the tool can now be connected to Microsoft 365.
In his inaugural role, Rubin is working to advance the university’s digital transformation, including data management, data security and AI, according to an SU Today release announcing his appointment.
“Spend time learning, because every organization in the globe, whether it be profit, nonprofit, government sector, military, is going to need people who understand how to integrate AI into their workflow,” Rubin said.


