Requirement

Article 27.1-4: Fundamental rights impact assessment prior to deployment of high-risk AI systems


1. Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce. For that purpose, deployers shall perform an assessment consisting of:

  (a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
  (b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
  (c) the categories of natural persons and groups likely to be affected by its use in the specific context;
  (d) the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13;
  (e) a description of the implementation of human oversight measures, according to the instructions for use;
  (f) the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms.
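The six elements above map naturally onto a structured record that a deployer can fill in before first use. A minimal sketch in Python (the class and field names are illustrative, not a schema prescribed by the AI Act or by Cyberday):

```python
from dataclasses import dataclass


@dataclass
class FundamentalRightsImpactAssessment:
    """One record per high-risk AI system, drafted before first use (Art. 27(1))."""

    deployer_processes: str        # (a) processes the system will be used in
    period_and_frequency: str      # (b) intended period and frequency of use
    affected_groups: list          # (c) categories of persons likely to be affected
    risks_of_harm: list            # (d) specific risks to the groups under (c)
    human_oversight_measures: str  # (e) oversight per the instructions for use
    mitigation_measures: str       # (f) steps if risks materialise, incl. governance
    complaint_mechanism: str = ""  # internal complaint arrangements

    def is_complete(self) -> bool:
        # Ready to notify only when every mandatory element is filled in.
        return all([
            self.deployer_processes, self.period_and_frequency,
            self.affected_groups, self.risks_of_harm,
            self.human_oversight_measures, self.mitigation_measures,
        ])
```

A deployer could keep one such record per system and refuse deployment while `is_complete()` is false.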

2. The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information.
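Paragraph 2 boils down to a three-way decision: perform a new assessment before first use, rely on an existing one in similar cases, and update it when any element is out of date. A sketch of that logic (the function and its flags are hypothetical, for illustration only):

```python
def fria_action(first_use: bool, similar_prior_assessment: bool,
                elements_changed: bool) -> str:
    """Decide what Article 27(2) requires for a given deployment."""
    if first_use and not similar_prior_assessment:
        return "perform new assessment"      # obligation applies to first use
    if elements_changed:
        return "update existing assessment"  # a paragraph-1 element is stale
    return "rely on existing assessment"     # similar case already covered
```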

3. Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify the market surveillance authority of its results, submitting the filled-out template referred to in paragraph 5 of this Article as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify.

4. If any of the obligations laid down in this Article is already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 of this Article shall complement that data protection impact assessment.

See how Cyberday guides you to fulfill this requirement:

This requirement is part of the framework: AI Act (Base)

How to implement: Article 27.1-4: Fundamental rights impact assessment prior to deployment of high-risk AI systems

This policy on Article 27.1-4 provides a set of concrete tasks you can complete to secure this topic. Follow these best practices to ensure compliance and strengthen your overall security posture.


Read below what concrete actions you can take to improve this topic.

How to improve security around this topic

In Cyberday, requirements and controls are mapped to universal tasks. A set of tasks in the same topic creates a Policy, such as this one.

Here's a list of tasks that help you improve your information and cyber security related to Article 27.1-4: Fundamental rights impact assessment prior to deployment of high-risk AI systems. Complete these tasks to increase your compliance in this policy.

How to comply with this requirement

In Cyberday, requirements and controls are mapped to universal tasks. Each requirement is fulfilled with one or multiple tasks.

Here's a list of tasks that help you comply with the requirement Article 27.1-4: Fundamental rights impact assessment prior to deployment of high-risk AI systems of the framework AI Act (Base). Complete these tasks to increase your compliance in this policy.

Impact assessment procedure for AI systems
Priority: Critical. Linked to 3 requirements. Themes: AI governance, AI risk and lifecycle management.

Fundamental rights assessment report publishing, informing and maintenance
Priority: Critical. Linked to 1 requirement. Themes: AI governance, AI risk and lifecycle management.

Notification of fundamental rights impact assessment
Priority: Critical. Linked to 1 requirement. Themes: AI governance, AI risk and lifecycle management.

Internal governance and complaint mechanisms for AI systems
Priority: Critical. Linked to 1 requirement. Themes: AI governance, AI risk and lifecycle management.

Use of previous impact assessments for AI systems
Priority: Critical. Linked to 1 requirement. Themes: AI governance, AI risk and lifecycle management.

Use of provider information in the fundamental rights impact assessment
Priority: Critical. Linked to 1 requirement. Themes: AI governance, AI risk and lifecycle management.

Completing these tasks also progresses your compliance in all of the frameworks and requirements they are mapped to. Cyberday automatically maps completed tasks to all current and future frameworks - so you do not have to do it again!

The ISMS component hierarchy

When building an ISMS, it's important to understand the different levels of information hierarchy. Here's how Cyberday is structured.

Framework

Sets the overall compliance standard or regulation your organization needs to follow.

Requirements

Break down the framework into specific obligations that must be met.

Tasks

Concrete actions and activities your team carries out to satisfy each requirement.

Policies

Documented rules and practices that are created and maintained as a result of completing tasks.
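The four levels above form a simple containment hierarchy: a framework contains requirements, requirements map to tasks, and tasks sharing a topic make up a policy. A plain-data sketch (the structure and keys are illustrative, not Cyberday's actual data model):

```python
# Framework -> requirements -> tasks; a policy groups tasks by topic.
framework = {
    "name": "AI Act (Base)",
    "requirements": {
        "Article 27.1-4": {
            "tasks": [
                "Impact assessment procedure for AI systems",
                "Notification of fundamental rights impact assessment",
            ],
        },
    },
}

# A policy is the set of tasks sharing a topic, regardless of framework.
policy = {
    "topic": "AI risk and lifecycle management",
    "tasks": framework["requirements"]["Article 27.1-4"]["tasks"],
}
```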

Never duplicate effort. Do it once - improve compliance across frameworks.

Reach multi-framework compliance in the simplest possible way
Security frameworks tend to share the same core requirements - like risk management, backups, malware protection, personnel awareness or access management.
Cyberday maps all frameworks’ requirements into shared tasks - one single plan that improves all frameworks’ compliance.
Do it once - we automatically apply it to all current and future frameworks.
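That "do it once" idea is, in effect, a many-to-many mapping between universal tasks and framework requirements: completing one task advances every requirement it is mapped to. An illustrative sketch (the second framework and its mapping are invented for the example):

```python
# One universal task can satisfy requirements in several frameworks at once.
TASK_MAP = {
    "Impact assessment procedure for AI systems": [
        ("AI Act (Base)", "Article 27.1-4"),
        ("Framework B (illustrative)", "Impact assessment requirement"),
    ],
}


def requirements_advanced(completed_tasks):
    """Collect every (framework, requirement) pair the completed tasks touch."""
    advanced = set()
    for task in completed_tasks:
        advanced.update(TASK_MAP.get(task, []))
    return advanced
```

Completing the single mapped task would advance both frameworks at once; adopting a new framework only extends the mapping, not the work.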