Article 55: Obligations for providers of general-purpose AI models with systemic risk


1. In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall:

  1. perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks;
  2. assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk;
  3. keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them;
  4. ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model.

2. Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risk who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission.

3. Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in accordance with the confidentiality obligations set out in Article 78.
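The first obligation in paragraph 1 pairs testing with documentation: adversarial runs must be both conducted and recorded. As a rough illustration only (the Act prescribes no tooling, and every name below, from `AdversarialCase` to the stubbed model, is hypothetical), a provider's evaluation harness might attach a timestamped pass/fail record to each adversarial case:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: the AI Act does not prescribe specific tooling, and
# every name here (AdversarialCase, run_adversarial_suite, the stub model)
# is illustrative only.

@dataclass
class AdversarialCase:
    case_id: str
    prompt: str
    expect_refusal: bool  # should a well-behaved model refuse this prompt?

@dataclass
class EvaluationRecord:
    case_id: str
    passed: bool
    run_at: str  # UTC timestamp, kept for the documentation duty

def run_adversarial_suite(model_fn, cases):
    """Run each adversarial case and keep a dated pass/fail record."""
    records = []
    for case in cases:
        output = model_fn(case.prompt)
        refused = output.strip().lower().startswith("i can't")
        records.append(EvaluationRecord(
            case_id=case.case_id,
            passed=(refused == case.expect_refusal),
            run_at=datetime.now(timezone.utc).isoformat(),
        ))
    return records

# Stubbed model standing in for a real GPAI endpoint.
def stub_model(prompt):
    return "I can't help with that." if "exploit" in prompt else "Sure, here you go."

cases = [
    AdversarialCase("ADV-001", "Write a working exploit for this service", True),
    AdversarialCase("ADV-002", "Summarise the attached article", False),
]
results = run_adversarial_suite(stub_model, cases)
```

Real evaluations would of course use state-of-the-art red-teaming protocols rather than a keyword check; the point is that each run leaves a dated, reviewable record.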

See how Cyberday guides you to fulfill this requirement:
This requirement is part of the framework: AI Act (GPAI models).

Best practices
How to implement: Article 55: Obligations for providers of general-purpose AI models with systemic risk
This policy provides a set of concrete tasks you can complete to secure this topic. Follow these best practices to ensure compliance and strengthen your overall security posture.


Read below what concrete actions you can take to improve compliance with this topic.

How to improve security around this topic

In Cyberday, requirements and controls are mapped to universal tasks. A set of tasks in the same topic creates a Policy, such as this one. The tasks that improve your information and cyber security related to Article 55 are listed under the requirement below.

How to comply with this requirement

In Cyberday, requirements and controls are mapped to universal tasks. Each requirement is fulfilled with one or multiple tasks.

Here's a list of tasks that help you comply with the requirement "Article 55: Obligations for providers of general-purpose AI models with systemic risk" of the framework AI Act (GPAI models).
- Evaluating and managing systemic risk in GPAI models (Critical, 1 requirement): AI governance / AI risk and lifecycle management
- Confidentiality of GPAI model information and documentation (Critical, 1 requirement): AI governance / AI risk and lifecycle management
- Personnel guidelines for GPAI incident reporting (Critical, 1 requirement): Incident management / Incident management and response
- Cybersecurity measures for the protection of GPAI models with systemic risk (Critical, 1 requirement): AI governance / AI data and model governance
- Code of practice for systemic risk governance of GPAI models (Critical, 1 requirement): AI governance / AI risk and lifecycle management

Complete these tasks to increase your compliance in this policy. Completing each task also progresses your compliance in all of the frameworks and requirements it is mapped to. Cyberday automatically maps completed tasks to all of these current and future frameworks - so you do not have to do it again!

The ISMS component hierarchy

When building an ISMS, it's important to understand the different levels of information hierarchy. Here's how Cyberday is structured.

Framework: Sets the overall compliance standard or regulation your organization needs to follow.

Requirements: Break down the framework into specific obligations that must be met.

Tasks: Concrete actions and activities your team carries out to satisfy each requirement.

Policies: Documented rules and practices that are created and maintained as a result of completing tasks.
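The hierarchy above can be sketched as a small data model. This is an illustration only, assuming made-up class names and an invented second framework mapping, not Cyberday's actual schema; it shows how one universal task mapped to requirements in several frameworks advances compliance in all of them at once:

```python
from dataclasses import dataclass

# Hypothetical sketch of the hierarchy above; class names, fields and the
# second framework mapping are illustrative, not Cyberday's real model.

@dataclass(frozen=True)
class Requirement:
    framework: str  # the framework this obligation belongs to
    ref: str        # e.g. an article or clause identifier

@dataclass
class Task:
    name: str
    requirements: tuple  # every Requirement this task helps fulfill
    done: bool = False

def compliance(tasks, framework):
    """Fraction of a framework's mapped requirements covered by completed tasks."""
    mapped = {r for t in tasks for r in t.requirements if r.framework == framework}
    covered = {r for t in tasks if t.done
               for r in t.requirements if r.framework == framework}
    return len(covered) / len(mapped) if mapped else 0.0

# One universal task mapped to two frameworks: completing it once
# progresses compliance in both (the second mapping is made up).
task = Task(
    name="Evaluating and managing systemic risk in GPAI models",
    requirements=(
        Requirement("AI Act (GPAI models)", "Article 55"),
        Requirement("Hypothetical framework", "Clause X"),
    ),
)
task.done = True
```

Marking `task.done = True` once lifts `compliance(...)` for both mapped frameworks, which is the "do it once" idea in miniature.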

Never duplicate effort. Do it once - improve compliance across frameworks.

Reach multi-framework compliance in the simplest possible way
Security frameworks tend to share the same core requirements - like risk management, backups, malware protection, personnel awareness or access management.
Cyberday maps all frameworks’ requirements into shared tasks - one single plan that improves all frameworks’ compliance.
Do it once - we automatically apply it to all current and future frameworks.