AI Rules of Engagement for UCF

  1. Ignoring Ethical Guidelines: Users should not bypass ethical guidelines and principles set forth by the university or governing bodies regarding AI use. Ethical misuse can lead to privacy violations, bias propagation, and other serious consequences.
  2. Misrepresenting AI Capabilities: Overstating or fabricating the capabilities of AI technologies in research, teaching, or administrative tasks is misleading and can erode trust and credibility.
  3. Using AI for Unfair Advantages: Leveraging AI to gain unfair advantages, such as using AI to complete assignments in a manner that violates academic integrity policies, should be strictly avoided.
  4. Neglecting Data Privacy: Compromising personal or sensitive data by using AI tools without robust privacy protections or failing to adhere to data protection laws and university policies should never occur.
  5. Bypassing Human Oversight: Relying solely on AI for critical decisions—whether in admissions, grading, or administrative tasks—without human oversight and judgment can lead to errors and injustices.
  6. Disregarding the Impact on Employment: Implementing AI without considering its impact on university staff and faculty employment, job roles, and responsibilities can lead to negative workplace dynamics and morale.
  7. Overlooking Inclusivity and Accessibility: Failing to ensure that AI tools are accessible to all users, including those with disabilities, and that they account for diverse perspectives and needs, is a significant oversight.
  8. Ignoring AI’s Limitations: Users should not overlook the limitations and potential inaccuracies of AI systems, especially in critical areas like research findings and academic evaluations.
  9. Avoiding Continuous Learning: Neglecting the need for continuous learning and adaptation as AI technologies evolve can render university practices outdated and less effective.
  10. Bypassing Collaboration and Consultation: Implementing AI solutions without consulting relevant stakeholders, including IT professionals, ethicists, and the affected university community members, can lead to overlooked concerns and resistance.
  11. Using Non-Vetted AI Tools: Adopting AI tools that have not been properly vetted for security, reliability, and compliance with university standards and regulations can expose the institution to risks.
  12. Engaging in Biased Data Practices: Training AI models on biased datasets, which can perpetuate and amplify existing prejudices and inequalities, should be actively avoided.
  13. Overlooking Student and Staff Training: Failing to provide adequate training for students and staff on how to use AI tools responsibly and effectively can lead to misuse and underutilization.
  14. Neglecting Impact Assessments: Launching AI initiatives without conducting thorough impact assessments to understand their potential effects on the university community and operations is imprudent.