AI Rules of Engagement for UCF
- Ignoring Ethical Guidelines: Users should not bypass the ethical guidelines and principles for AI use set forth by the university or governing bodies. Ethical misuse can lead to privacy violations, bias propagation, and other serious consequences.
- Misrepresenting AI Capabilities: Overstating or fabricating the capabilities of AI technologies in research, teaching, or administrative tasks is misleading and can erode trust and credibility.
- Using AI for Unfair Advantages: Leveraging AI to gain an unfair advantage, such as completing assignments in a manner that violates academic integrity policies, must be strictly avoided.
- Neglecting Data Privacy: Never compromise personal or sensitive data by using AI tools that lack robust privacy protections, or by failing to adhere to data protection laws and university policies.
- Bypassing Human Oversight: Relying solely on AI for critical decisions—whether in admissions, grading, or administrative tasks—without human oversight and judgment can lead to errors and injustices.
- Disregarding the Impact on Employment: Implementing AI without considering its impact on university staff and faculty employment, job roles, and responsibilities can lead to negative workplace dynamics and morale.
- Overlooking Inclusivity and Accessibility: Failing to ensure that AI tools are accessible to all users, including those with disabilities, or that they consider diverse perspectives and needs, is a significant oversight.
- Ignoring AI’s Limitations: Users should not overlook the limitations and potential inaccuracies of AI systems, especially in critical areas like research findings and academic evaluations.
- Avoiding Continuous Learning: Neglecting the need for continuous learning and adaptation as AI technologies evolve can render university practices outdated and less effective.
- Bypassing Collaboration and Consultation: Implementing AI solutions without consulting relevant stakeholders, including IT professionals, ethicists, and the affected university community members, can lead to overlooked concerns and resistance.
- Using Non-Vetted AI Tools: Adopting AI tools that have not been properly vetted for security, reliability, and compliance with university standards and regulations can expose the institution to risks.
- Engaging in Biased Data Practices: Using biased datasets to train AI models, which can perpetuate and amplify existing prejudices and inequalities, should be actively avoided.
- Overlooking Student and Staff Training: Failing to provide adequate training for students and staff on how to use AI tools responsibly and effectively can lead to misuse and underutilization.
- Neglecting Impact Assessments: Launching AI initiatives without conducting thorough impact assessments to understand their potential effects on the university community and operations is imprudent.