Requirement

Article 14: Effective human oversight of high-risk AI systems


1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.

2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.

3. The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures:

  1. measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;
  2. measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.

4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate:

  1. to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;
  2. to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
  3. to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available;
  4. to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system;
  5. to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.

5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.

The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.
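
Paragraph 4 describes capabilities (monitoring, interpreting, overriding, stopping) that typically have to become concrete features of the system itself. Purely as an illustration, and with every name in it (OverseenSystem, ReviewedOutput, Decision) invented for this sketch rather than taken from the Act or any particular product, here is one way a human checkpoint with override and safe-stop could be wired around an inference call in Python:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional


class Decision(Enum):
    ACCEPT = "accept"        # use the AI output as produced
    OVERRIDE = "override"    # replace the output with the human's (Art. 14(4)(d))
    DISREGARD = "disregard"  # ignore the output entirely (Art. 14(4)(d))


@dataclass
class ReviewedOutput:
    decision: Decision
    final_output: Optional[object]  # the human's substitute when overriding


class OverseenSystem:
    """Wraps a model call so every output passes a human checkpoint
    and the whole system can be halted into a safe state (Art. 14(4)(e))."""

    def __init__(self, model: Callable[[object], object]):
        self._model = model
        self._stopped = False

    def stop(self) -> None:
        # The 'stop button': no further inference until explicitly restarted.
        self._stopped = True

    def run(self, inputs: object,
            review: Callable[[object], ReviewedOutput]) -> Optional[object]:
        if self._stopped:
            raise RuntimeError("system halted by its human overseer")
        ai_output = self._model(inputs)
        reviewed = review(ai_output)  # human-in-the-loop checkpoint
        if reviewed.decision is Decision.DISREGARD:
            return None
        if reviewed.decision is Decision.OVERRIDE:
            return reviewed.final_output
        return ai_output
```

The key design choice in this sketch is that the human decision sits in the request path itself, not in a log reviewed after the fact, so the overseer can disregard, override or halt before any output takes effect.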

See how Cyberday guides you to fulfill this requirement:
This requirement is part of the framework: AI Act (Base).
How to implement: Article 14: Effective human oversight of high-risk AI systems

This policy on Article 14: Effective human oversight of high-risk AI systems provides a set of concrete tasks you can complete to secure this topic. Follow these best practices to ensure compliance and strengthen your overall security posture.


Read below for the concrete actions you can take to improve this topic.

How to improve security around this topic

In Cyberday, requirements and controls are mapped to universal tasks. A set of tasks in the same topic creates a Policy, such as this one.


How to comply with this requirement

In Cyberday, requirements and controls are mapped to universal tasks. Each requirement is fulfilled with one or multiple tasks.

Here's a list of tasks that help you comply with the requirement Article 14: Effective human oversight of high-risk AI systems of the framework AI Act (Base). Complete these tasks to increase your compliance in this policy.
Amount, competence and adequacy of human oversight personnel
Priority: Critical. Completes 1 requirement. Themes: AI governance, AI risk and lifecycle management.

Built-in human oversight mechanisms for high-risk AI systems
Priority: Critical. Completes 1 requirement. Themes: AI governance, AI risk and lifecycle management.
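
One common shape for a built-in oversight measure in the spirit of paragraph 3(a) is a confidence threshold: outputs the model is unsure about are withheld and routed to a human queue instead of being released automatically. The sketch below is a hypothetical illustration only; the threshold value, the prediction format and the function name are all invented, and the AI Act does not mandate any specific mechanism.

```python
import queue

# Illustrative threshold only; the AI Act does not prescribe a value,
# and the right cut-off depends on the system's own risk analysis.
REVIEW_THRESHOLD = 0.85

human_review_queue: "queue.Queue[dict]" = queue.Queue()

def release_or_escalate(prediction: dict) -> dict | None:
    """Release high-confidence outputs; withhold the rest for a human.

    `prediction` is assumed to look like {"label": ..., "confidence": 0.91};
    both the shape and the function name are invented for this sketch.
    """
    if prediction["confidence"] >= REVIEW_THRESHOLD:
        return prediction               # released without manual review
    human_review_queue.put(prediction)  # built-in measure, Art. 14(3)(a) style
    return None                         # withheld pending a human decision
```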

Analysis of human oversight measures for high-risk AI systems
Priority: Critical. Completes 1 requirement. Themes: AI governance, AI risk and lifecycle management.

Guidelines for human oversight of AI systems
Priority: Critical. Completes 2 requirements. Themes: AI governance, Transparency and user communication.

Dual verification for high-risk AI identification
Priority: Critical. Completes 1 requirement. Themes: AI governance, AI risk and lifecycle management.

Completing any of these tasks also progresses your compliance in all of the frameworks and requirements it maps to. Cyberday automatically maps completed tasks to all current and future frameworks - so you do not have to do it again!
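
The dual verification task above maps directly to paragraph 5 of Article 14. Here is a minimal Python sketch of such a gate, assuming that verifier identities are only ever issued to persons with the necessary competence, training and authority (an organisational control that lives outside the code); the class and field names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Verification:
    # verifier_id is assumed to belong to a person with the necessary
    # competence, training and authority (an organisational control).
    verifier_id: str
    confirmed: bool

@dataclass
class Identification:
    subject_ref: str
    verifications: list[Verification] = field(default_factory=list)

    def add_verification(self, verifier_id: str, confirmed: bool) -> None:
        # 'Separately verified': the same person cannot count twice.
        if any(v.verifier_id == verifier_id for v in self.verifications):
            raise ValueError("each person may verify only once")
        self.verifications.append(Verification(verifier_id, confirmed))

    def actionable(self) -> bool:
        # Art. 14(5): no action or decision until at least two distinct
        # natural persons have confirmed the identification.
        confirmed_by = {v.verifier_id for v in self.verifications if v.confirmed}
        return len(confirmed_by) >= 2
```

With this gate, an Identification only becomes actionable after two different verifier IDs have confirmed it; a single person confirming twice raises an error instead of counting as two verifications.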

The ISMS component hierarchy

When building an ISMS, it's important to understand the different levels of information hierarchy. Here's how Cyberday is structured.

Framework

Sets the overall compliance standard or regulation your organization needs to follow.

Requirements

Break down the framework into specific obligations that must be met.

Tasks

Concrete actions and activities your team carries out to satisfy each requirement.

Policies

Documented rules and practices that are created and maintained as a result of completing tasks.
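
To make the "do it once" idea concrete: the relationship between tasks and requirements is essentially a many-to-many mapping. The toy Python model below illustrates that shape only; the second framework entry is hypothetical, since this page names no framework other than AI Act (Base).

```python
# Toy model of the hierarchy: one completed task can satisfy
# requirements in several frameworks at once. All names are invented.
task_to_requirements: dict[str, list[tuple[str, str]]] = {
    "Guidelines for human oversight of AI systems": [
        ("AI Act (Base)", "Article 14"),
        ("Some other framework", "Its oversight clause"),  # hypothetical
    ],
}

completed_tasks: set[str] = set()

def complete(task: str) -> list[tuple[str, str]]:
    """Mark a task done and return every (framework, requirement) it advances."""
    completed_tasks.add(task)
    return task_to_requirements.get(task, [])

print(complete("Guidelines for human oversight of AI systems"))
# [('AI Act (Base)', 'Article 14'), ('Some other framework', 'Its oversight clause')]
```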

Never duplicate effort. Do it once - improve compliance across frameworks.

Reach multi-framework compliance in the simplest possible way
Security frameworks tend to share the same core requirements - like risk management, backups, malware protection, personnel awareness or access management.
Cyberday maps all frameworks’ requirements into shared tasks - one single plan that improves all frameworks’ compliance.
Do it once - we automatically apply it to all current and future frameworks.