Similar to your Microsoft Outlook and Teams messages, AI interactions are legally discoverable. Any AI interaction is discoverable when:
- It exists within university-supported platforms (e.g. ChatGPT Enterprise, Microsoft Copilot, or similar enterprise tools).
- Employees use personal AI accounts to complete university work.
To ensure you’re not putting yourself or the university at risk, follow risk classification guidance and be mindful of what you discuss with AI tools. Do not assume your interactions are private; treat every interaction as if it could be made public.
Follow the effective practices below when working with AI tools:
- Do not enter private or confidential information (e.g., FERPA-protected records, financial account numbers, passwords, or personnel and employment-related details).
- Do not discuss legal matters or internal investigations.
- Do not input messages from, or information about, other people without their consent, especially in matters involving students, HR, or legal issues.
- If you would not want someone to have long-term access to the content of a chat, delete the conversation as soon as the task is complete; deleted conversations are permanently removed after 30 days.
- Be transparent about AI use in university materials.
To learn more about discovery, visit the NU ITS Policies and Standards page.