The University of Rhode Island (URI) embraces artificial intelligence (AI) as a powerful tool for teaching, learning, research, and innovation. At the same time, we recognize that AI must be used securely, ethically, and responsibly, in keeping with our values of transparency, fairness, and accountability. This page serves as a resource for understanding URI’s standards and practices for responsible AI use.

Guiding Principles

URI’s approach to responsible AI use is built on the following core principles:

  • Transparency: Clearly disclose when AI tools are used to create content or make decisions.
  • Accountability: Validate the accuracy of AI outputs and take responsibility for their use.
  • Fairness and Inclusivity: Mitigate biases and ensure AI systems respect diversity and equity.
  • Privacy and Security: Protect sensitive and institutional data from misuse and unauthorized access.

Key Resources

  1. URI Data Classification Schema and Guidelines
    Details the classification and management of data, including restricted and sensitive information, relevant to AI practices.
  2. Guidelines for the Secure and Ethical Use of Artificial Intelligence
    Provides comprehensive guidance on AI transparency, accountability, and compliance with accessibility, privacy, and intellectual property standards.

Policies and Standards

To ensure the secure and ethical use of AI, URI enforces the following standards:

  • Data Security and Privacy: Follow URI’s Data Classification Guidelines to avoid exposing restricted or sensitive information.
  • Accessibility: Ensure all AI tools comply with Web Content Accessibility Guidelines (WCAG) and the Americans with Disabilities Act (ADA).
  • Prohibited Data Use: Avoid inputting private or restricted data, such as Personally Identifiable Information (PII) or patient records, into public AI tools like ChatGPT.
  • Data Handling Awareness: Be cautious when using AI platforms, as data entered may be transmitted to third-party servers.
  • Disabling Data Logging: Where possible, disable history and logging features to prevent information from being stored or used for AI training without consent.
  • Intellectual Property Compliance: Respect copyrights and avoid using confidential data in AI systems without appropriate protections in place.

Education and Training

URI provides ongoing opportunities to learn about secure and ethical AI practices:

  • Workshops and Seminars: Attend sessions on responsible AI use tailored for faculty, staff, and students.
  • Training Materials: Access documents such as the AI Use Guidelines to deepen your understanding of secure and ethical AI practices.
  • Accessibility Resources: Learn how to make AI tools and outputs more inclusive through URI’s accessibility programs.
  • Understanding AI Functionality: Before using any AI platform, understand how it works, what data it collects, and how that data is handled. This awareness supports responsible and secure use of AI tools.

Additional Guidelines

  • Educators: Incorporate URI’s guidelines into course design, clearly communicate acceptable AI use to students, and disclose when AI was used to create course materials.
  • Students: Follow instructors’ AI policies, properly attribute AI-generated content, and avoid using AI in ways that breach URI’s Student Code of Conduct.
  • Researchers: Consult with your department and comply with regulations when using AI tools in research, particularly for handling sensitive or proprietary data.

Learn More

For more information about URI’s AI practices and resources, contact: karen_lokey@uri.edu