The organisation should define and document the technical requirements for its AI systems. This documentation should specify the expected levels of accuracy, robustness, and cybersecurity based on each system's intended purpose and the risks involved. The requirements should be regularly reviewed and kept up to date throughout the AI system's entire lifecycle.
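One way to make such requirements concrete and reviewable is to record them in a structured form rather than free text. The sketch below is illustrative only; the field names, thresholds, and the `AIRequirements` type are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIRequirements:
    """Documented technical requirements for one AI system (illustrative fields)."""
    intended_purpose: str        # what the system is meant to do
    min_accuracy: float          # minimum acceptable accuracy on validation data
    max_accuracy_drop: float     # tolerated accuracy loss under perturbed inputs
    security_controls: tuple     # required cybersecurity measures
    review_interval_days: int    # how often the requirements are re-reviewed

# Hypothetical record for a support-ticket triage classifier
TRIAGE_REQUIREMENTS = AIRequirements(
    intended_purpose="Support-ticket triage classification",
    min_accuracy=0.95,
    max_accuracy_drop=0.05,
    security_controls=("input validation", "model access logging"),
    review_interval_days=90,
)
```

Keeping requirements in a machine-readable record like this makes lifecycle reviews auditable: each review can diff the record against the previous version and check deployed systems against the stated thresholds.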
The organisation should also establish and document a process for testing and validating its AI systems. This process should ensure the systems meet the required levels of accuracy, robustness, and cybersecurity before deployment. Validation activities should include performance testing against defined metrics, robustness testing with unexpected or perturbed inputs, and cybersecurity vulnerability assessments.
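The first two validation activities above can be sketched as a pre-deployment check: measure baseline accuracy against a documented threshold, then re-measure on perturbed inputs to estimate robustness. This is a minimal illustration with a toy model and made-up thresholds; the function names and the noise-based perturbation are assumptions, not a mandated test design.

```python
import random

def accuracy(model, dataset):
    """Fraction of examples the model labels correctly."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

def perturb(x, noise=0.1):
    """Add small random noise to a numeric input (simple robustness probe)."""
    return x + random.uniform(-noise, noise)

def validate(model, dataset, min_accuracy=0.9, max_drop=0.1):
    """Pre-deployment checks: baseline accuracy and accuracy under perturbation."""
    base = accuracy(model, dataset)
    perturbed = [(perturb(x), y) for x, y in dataset]
    robust = accuracy(model, perturbed)
    return {
        "accuracy": base,
        "robust_accuracy": robust,
        "passes": base >= min_accuracy and (base - robust) <= max_drop,
    }

# Toy threshold classifier: predicts True when the input is positive
model = lambda x: x > 0
dataset = [(-2.0, False), (-1.0, False), (1.0, True), (2.0, True)]
report = validate(model, dataset)
```

A real validation process would use held-out test data, task-appropriate metrics, and adversarial or out-of-distribution inputs rather than uniform noise, and would record the results alongside the documented requirements; cybersecurity vulnerability assessment is a separate activity not shown here.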