Clemson Computing & Information Technology has established guidelines and resources for AI usage for Clemson students, researchers, employees and community members, reminding users to protect private data and to verify artificial intelligence results.
“AI is becoming more prevalent and powerful as time goes on, and CCIT wants to ensure that Clemson students and the community are protecting their information and not taking the AI’s answers as the best source of information for research and other academic activities,” Pauline Sawadogo, a CCIT customer support services technician, said in an interview with The Tiger. “We need to be proactive and stay vigilant with these increased risks.”
CCIT's guidelines explain that generative AI tools and programs can produce "erroneous" replies that appear credible, known as "hallucinations." Along with the risk of receiving false information, these systems can be trained on sensitive or copyrighted data released without the owner's permission or knowledge.
The CCIT website lists guidelines for how AI should be utilized, with a reminder that entering data into an AI system that uses it to train models can be equivalent to disclosing that data to the public.
Such action can be considered a breach of the Family Educational Rights and Privacy Act, the Health Insurance Portability and Accountability Act of 1996, Payment Card Industry compliance standards and the Gramm-Leach-Bliley Act.
FERPA is a federal law that protects the privacy of students' educational records, while HIPAA is an additional federal law that protects patient health information from being released without the patient's knowledge or permission, according to the U.S. Department of Health and Human Services.
In addition to FERPA and HIPAA, PCI compliance ensures the security of business credit card transactions. GLBA requires financial institutions to explain their information-sharing practices to customers and safeguard sensitive data, as outlined by the Federal Trade Commission.
“These violations are a lot more serious than most think, and AI can cause a lot of harm without users knowing it. That is why we want Clemson users to be aware of these risks because a lot of students don’t think about someone else’s information getting out without their consent,” Sawadogo said.
CCIT’s guidelines also list best practices, which include entering only public data into AI systems and opting out of sharing data, always verifying results generated from AI and considering “legal, regulatory and ethical obligations” before using these systems.
Although Clemson has recently highlighted the dangers of AI use, the primary purpose of the released guidelines is to educate users on the potential harm rather than to stop AI usage completely.
The Artificial Intelligence Research Institute for Science and Engineering was established in 2020 with the mission of working across disciplines to expand research and education in artificial intelligence. Ninety faculty members across 30 disciplines are involved in the effort, according to the institute's webpage.
CCIT plans to continue updating the AI Guidelines page with additional guidelines, policies and directions for working with technology and sensitive information to combat the risks associated with AI, and it encourages Clemson users to visit the resources often to stay up to date.