Requirement

Article 13.2-3: Usage instructions format for deployed high-risk AI systems


2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers.

3. The instructions for use shall contain at least the following information:

  1. the identity and the contact details of the provider and, where applicable, of its authorised representative;
  2. the characteristics, capabilities and limitations of performance of the high-risk AI system, including:
    1. its intended purpose;
    2. the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;
    3. any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2);
    4. where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output;
    5. when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used;
    6. when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system;
    7. where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately;
  3. the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;
  4. the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers;
  5. the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates;
  6. where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12.
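The minimum content list above lends itself to a structured template that can be checked for completeness before release. Below is a minimal sketch, assuming instructions for use are maintained as structured data; the class and field names are illustrative choices, not terms mandated by the AI Act.

```python
from dataclasses import dataclass, fields

@dataclass
class InstructionsForUse:
    """Hypothetical template covering the Article 13(3) minimum content."""
    provider_identity_and_contact: str = ""      # Art. 13(3), point 1
    intended_purpose: str = ""                   # Art. 13(3), point 2.1
    accuracy_robustness_cybersecurity: str = ""  # Art. 13(3), point 2.2
    foreseeable_risk_circumstances: str = ""     # Art. 13(3), point 2.3
    output_explainability: str = ""              # Art. 13(3), point 2.4 (where applicable)
    group_specific_performance: str = ""         # Art. 13(3), point 2.5 (when appropriate)
    input_data_specifications: str = ""          # Art. 13(3), point 2.6 (when appropriate)
    output_interpretation_guidance: str = ""     # Art. 13(3), point 2.7 (where applicable)
    predetermined_changes: str = ""              # Art. 13(3), point 3
    human_oversight_measures: str = ""           # Art. 13(3), point 4
    resources_lifetime_maintenance: str = ""     # Art. 13(3), point 5
    log_collection_mechanisms: str = ""          # Art. 13(3), point 6 (where relevant)

def missing_sections(doc: InstructionsForUse) -> list[str]:
    """Return the names of sections that are still empty."""
    return [f.name for f in fields(doc) if not getattr(doc, f.name).strip()]

doc = InstructionsForUse(
    provider_identity_and_contact="ACME AI Ltd, compliance@example.com"
)
print(len(missing_sections(doc)))  # 11 sections still to draft
```

A check like `missing_sections` can act as a release gate, keeping a system from shipping until every applicable section of the instructions for use has been drafted.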
See how Cyberday guides you to fulfill this requirement:

This requirement is part of the framework: AI Act (Base).

How to implement:

This policy on Article 13.2-3: Usage instructions format for deployed high-risk AI systems provides a set of concrete tasks you can complete to secure this topic. Follow these best practices to ensure compliance and strengthen your overall security posture.

Read below for the concrete actions you can take to improve this topic.

How to improve security around this topic

In Cyberday, requirements and controls are mapped to universal tasks. A set of tasks in the same topic creates a Policy, such as this one. The tasks that help you improve your information and cyber security related to Article 13.2-3: Usage instructions format for deployed high-risk AI systems are listed below.

How to comply with this requirement

In Cyberday, requirements and controls are mapped to universal tasks. Each requirement is fulfilled with one or multiple tasks.

Here's a list of tasks that help you comply with the requirement Article 13.2-3: Usage instructions format for deployed high-risk AI systems of the framework AI Act (Base):
- Documentation of usage instructions for AI systems (priority: Critical, maps to 1 requirement)
- Provider and operational details for AI system usage instructions (priority: Critical, maps to 1 requirement)
- AI system data and performance transparency (priority: Critical, maps to 1 requirement)
- Guidelines for human oversight of AI systems (priority: Critical, maps to 2 requirements)

All of these tasks belong to the AI governance theme, under the Transparency and user communication policy.

Completing these tasks also progresses your compliance in all of the frameworks and requirements they are mapped to. Cyberday automatically maps completed tasks to all current and future frameworks - so you do not have to do it again!

The ISMS component hierarchy

When building an ISMS, it's important to understand the different levels of information hierarchy. Here's how Cyberday is structured.

- Framework: sets the overall compliance standard or regulation your organization needs to follow.
- Requirements: break down the framework into specific obligations that must be met.
- Tasks: concrete actions and activities your team carries out to satisfy each requirement.
- Policies: documented rules and practices that are created and maintained as a result of completing tasks.
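The four-level hierarchy above can be sketched as a simple data model. This is a minimal illustration of the concept only; the class names, attributes, and the compliance formula are my own assumptions, not Cyberday's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    done: bool = False

@dataclass
class Requirement:
    name: str
    tasks: list[Task] = field(default_factory=list)

    def fulfilled(self) -> bool:
        # A requirement is met once all of its mapped tasks are complete.
        return bool(self.tasks) and all(t.done for t in self.tasks)

@dataclass
class Framework:
    name: str
    requirements: list[Requirement] = field(default_factory=list)

    def compliance(self) -> float:
        # Share of requirements whose mapped tasks are all complete.
        met = sum(r.fulfilled() for r in self.requirements)
        return met / len(self.requirements) if self.requirements else 0.0

doc_task = Task("Documentation of usage instructions for AI systems", done=True)
art_13 = Requirement("Article 13.2-3", tasks=[doc_task])
ai_act = Framework("AI Act (Base)", requirements=[art_13])
print(ai_act.compliance())  # 1.0
```

The key design point the hierarchy encodes is that tasks are the only level where work happens; requirement and framework status are always derived from task completion, never tracked separately.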

Never duplicate effort. Do it once - improve compliance across frameworks.

Reach multi-framework compliance in the simplest possible way
Security frameworks tend to share the same core requirements - like risk management, backup, malware, personnel awareness or access management.
Cyberday maps all frameworks’ requirements into shared tasks - one single plan that improves all frameworks’ compliance.
Do it once - we automatically apply it to all current and future frameworks.
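The shared-task idea above can be sketched as a mapping table where one task is referenced from requirements in several frameworks. A minimal sketch follows; the ISO 27001 mapping shown is a hypothetical example for illustration, not a statement of Cyberday's actual framework mappings.

```python
# One universal task, referenced by requirements in two frameworks.
usage_docs = {"name": "Documentation of usage instructions for AI systems",
              "done": True}

# (framework, requirement) -> tasks mapped to that requirement.
mappings = {
    ("AI Act (Base)", "Article 13.2-3"): [usage_docs],
    # Hypothetical cross-framework mapping for illustration:
    ("ISO 27001", "A.5.37 Documented operating procedures"): [usage_docs],
}

def frameworks_progressed(task_name: str) -> list[str]:
    """Frameworks containing a requirement that the given task is mapped to."""
    return sorted({fw for (fw, _req), tasks in mappings.items()
                   if any(t["name"] == task_name for t in tasks)})

print(frameworks_progressed("Documentation of usage instructions for AI systems"))
# ['AI Act (Base)', 'ISO 27001']
```

Because the task object is shared rather than copied, marking it done once is immediately visible from every framework that maps to it, which is the "do it once" property described above.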