The organization should define and openly communicate its responsible AI objectives to ensure they are visible to all relevant stakeholders. Objectives should relate to fairness, accountability, explainability, safety, privacy, and accessibility.
The organization should document how it meets these objectives so that decision-making is clearly recorded and auditable. This documentation should include the rationale for choosing third-party or internal AI solutions. The organization can demonstrate transparency by embedding meaningful human oversight of its AI systems in the following ways:
- Giving human reviewers the authority to review and override AI decisions where necessary
- Disclosing compliance requirements in deployment documentation
- Monitoring system performance to demonstrate ongoing accountability
- Maintaining clear reporting channels for concerns about AI outputs and promptly communicating changes in performance to stakeholders
- Ensuring automated decisions are documented and subject to human review
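The oversight practices above can be made concrete in a decision-audit record. The following is a minimal sketch, not a prescribed implementation: the class name `DecisionRecord`, its fields, and the `human_override` method are all hypothetical, but they illustrate documenting an automated decision, giving a named reviewer authority to override it, and keeping both outcomes on record for auditability.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    """Audit entry for one automated decision (hypothetical schema)."""
    system: str                 # which AI system produced the decision
    input_summary: str          # what was decided about
    ai_decision: str            # the system's automated output
    rationale: str              # recorded reasoning, kept auditable
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    reviewer: Optional[str] = None
    final_decision: Optional[str] = None

    def human_override(self, reviewer: str, decision: str, reason: str) -> None:
        # A named human reviewer may replace the AI's decision; the original
        # output and the override reason both stay on record, so the
        # override itself remains auditable.
        self.reviewer = reviewer
        self.final_decision = decision
        self.rationale += f" | Overridden by {reviewer}: {reason}"

    def effective_decision(self) -> str:
        # The automated decision stands only until a human overrides it.
        return self.final_decision or self.ai_decision


record = DecisionRecord(
    system="loan-screening-v2",          # hypothetical system name
    input_summary="applicant #1042",
    ai_decision="deny",
    rationale="model score 0.38 below threshold 0.50",
)
record.human_override("j.doe", "approve", "income data was stale")
print(record.effective_decision())  # prints "approve"
```

A record like this supports the reporting channels mentioned above: when stakeholders raise concerns about an AI output, the stored rationale and any override trail show who decided what, and why.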