Requirement

Article 26.5: Monitoring the operation of deployed high-risk AI systems


Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities.

For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service law.
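The notification flow in the paragraphs above can be sketched as a small decision procedure. This is an illustrative sketch only; the names and structure are assumptions for clarity, not a prescribed implementation of the Article 26.5 obligations.

```python
from dataclasses import dataclass

# Illustrative sketch of a deployer's Article 26.5 notification flow.
# All names here are assumptions for illustration purposes.

@dataclass
class Observation:
    article_79_risk: bool = False   # use per the instructions may present a risk (Art. 79(1))
    serious_incident: bool = False  # a serious incident has been identified

def required_actions(obs: Observation) -> list:
    """Return the deployer obligations triggered by an observation."""
    actions = []
    if obs.article_79_risk:
        # Inform the provider or distributor and the market surveillance
        # authority without undue delay, and suspend use of the system.
        actions += [
            "inform provider or distributor",
            "inform market surveillance authority",
            "suspend use of the system",
        ]
    if obs.serious_incident:
        # Inform the provider first, then the importer or distributor
        # and the relevant market surveillance authorities.
        actions += [
            "inform provider first",
            "inform importer or distributor",
            "inform market surveillance authorities",
        ]
    return actions

print(required_actions(Observation(serious_incident=True)))
```

Note that when the provider cannot be reached, Article 73 applies mutatis mutandis; that fallback is outside this sketch.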

See how Cyberday guides you to fulfill this requirement. This requirement is part of the framework AI Act (Base).

How to implement Article 26.5: Monitoring the operation of deployed high-risk AI systems

This policy on Article 26.5: Monitoring the operation of deployed high-risk AI systems provides a set of concrete tasks you can complete to secure this topic. Follow these best practices to ensure compliance and strengthen your overall security posture.


Read on below for the concrete actions you can take to improve this topic.

How to improve security around this topic

In Cyberday, requirements and controls are mapped to universal tasks. A set of tasks in the same topic creates a Policy, such as this one.

Here's a list of tasks that help you improve your information and cyber security related to Article 26.5: Monitoring the operation of deployed high-risk AI systems. Complete these tasks to increase your compliance in this policy.

How to comply with this requirement

In Cyberday, requirements and controls are mapped to universal tasks. Each requirement is fulfilled with one or multiple tasks.

Here's a list of tasks that help you comply with the requirement Article 26.5: Monitoring the operation of deployed high-risk AI systems of the framework AI Act (Base). Complete these tasks to increase your compliance in this policy.

- Process for monitoring high-risk AI systems (1 linked requirement; themes: AI governance, AI risk and lifecycle management)
- Maintaining a log of monitoring activities of deployed high-risk AI systems (1 linked requirement; themes: AI governance, AI risk and lifecycle management)
- Incident reporting for deployed high-risk AI systems (1 linked requirement; themes: AI governance, AI risk and lifecycle management)
- Monitoring equivalencies for high-risk AI systems deployed by financial institutions (2 linked requirements; themes: AI governance, AI risk and lifecycle management)

Completing these tasks also progresses your compliance in all of the frameworks and requirements they are mapped to. Cyberday automatically maps completed tasks to all of these current and future frameworks - so you do not have to do it again!
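As an illustration of the logging task mentioned above, one record in a monitoring log might look like the sketch below. The field names are assumptions for illustration, not a format required by the AI Act or by Cyberday.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical shape of one monitoring-log entry for a deployed
# high-risk AI system; fields are illustrative assumptions only.

@dataclass
class MonitoringLogEntry:
    timestamp: str        # when the check was performed (ISO 8601)
    system: str           # which high-risk AI system was monitored
    checked_against: str  # which part of the instructions for use was applied
    observation: str      # what was observed
    follow_up: str        # action taken, if any

entry = MonitoringLogEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    system="cv-screening-model",
    checked_against="Instructions for use, output review section",
    observation="Outputs within expected ranges; no anomalies.",
    follow_up="None required",
)
print(asdict(entry))
```

Keeping entries in a structured form like this makes it easier to demonstrate to a market surveillance authority that monitoring actually took place.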

The ISMS component hierarchy

When building an ISMS, it's important to understand the different levels of information hierarchy. Here's how Cyberday is structured.

Framework

Sets the overall compliance standard or regulation your organization needs to follow.

Requirements

Break down the framework into specific obligations that must be met.

Tasks

Concrete actions and activities your team carries out to satisfy each requirement.

Policies

Documented rules and practices that are created and maintained as a result of completing tasks.
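The hierarchy above can be sketched as a simple data model. This is an illustrative sketch, assuming a minimal structure; the class names and fields are not Cyberday's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Framework -> Requirement -> Task hierarchy
# described above. Names are assumptions, not Cyberday's data model.

@dataclass
class Requirement:
    name: str
    framework: str
    fulfilled: bool = False

@dataclass
class Task:
    name: str
    requirements: list = field(default_factory=list)  # mapped requirements

    def complete(self):
        # Completing one task progresses every requirement mapped to it,
        # across all frameworks at once.
        for req in self.requirements:
            req.fulfilled = True

ai_act = Requirement("Art. 26.5 monitoring", "AI Act (Base)")
other = Requirement("Operational monitoring", "Hypothetical other framework")

task = Task("Process for monitoring high-risk AI systems", [ai_act, other])
task.complete()

print([f"{r.framework}: {r.fulfilled}" for r in (ai_act, other)])
```

The point of the many-to-many mapping is visible in the last lines: one completed task marks requirements in two different frameworks as fulfilled.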

Never duplicate effort. Do it once - improve compliance across frameworks.

Reach multi-framework compliance in the simplest possible way
Security frameworks tend to share the same core requirements - such as risk management, backups, malware protection, personnel awareness or access management.
Cyberday maps all frameworks’ requirements into shared tasks - one single plan that improves all frameworks’ compliance.
Do it once - we automatically apply it to all current and future frameworks.