Employees should follow responsible practices when using AI tools in their work. These practices help ensure that AI is used safely, ethically, and in compliance with company policies.
1. Use Only Approved AI Tools
Employees are expected to use AI systems that have been reviewed and approved by the organization.
Why? Approved tools have been evaluated for security, privacy, and compliance risks.
2. Protect Confidential and Sensitive Information
Employees are expected to ensure that internal documents, client data, and other sensitive information are not entered into AI systems unless explicitly permitted by company policy.
Why? Sensitive information entered into AI tools may be stored or processed outside the organization's control.
3. Verify AI-Generated Outputs
Employees are expected to review and validate AI outputs before using them in reports, decisions, or external communications.
Why? AI systems can produce inaccurate, outdated, or misleading information.
4. Use AI as a Support Tool, Not a Decision Maker
Employees are expected to use AI to support drafting, research, and analysis, while final judgments remain their responsibility.
Why? Business decisions must remain under human oversight and accountability.
5. Review AI-Generated Content Before Sharing
Employees are expected to check AI-generated content for accuracy, bias, copyright concerns, and compliance with company standards before sharing it.
Why? AI-generated material may contain errors, bias, or copyrighted content.