EU AI Act Compliance

The EU Artificial Intelligence Act classifies AI systems by risk level and imposes requirements for transparency, documentation, human oversight, and data quality. Qarion provides an end-to-end framework for managing these obligations.

AI Risk Classification & Prohibited Practices (Art. 5)

Identify unacceptable risks before development begins.

  • Article 5 Screening: A built-in questionnaire assesses whether a proposed AI use case falls under prohibited practices (e.g., social scoring, real-time remote biometric identification in publicly accessible spaces).
  • Risk Tiers: Automatically categorize systems as Unacceptable, High, Limited, or Minimal risk based on their intended purpose and deployment context.
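The tiering logic can be pictured as a simple mapping from intended purpose to risk level. The sketch below is illustrative only; the category tags and lookup sets are invented for this example, and a real screening questionnaire evaluates many more Article 5 and Annex III criteria.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative screening sets; tag names are hypothetical, not a complete
# enumeration of the Act's prohibited practices or high-risk areas.
PROHIBITED_PRACTICES = {"social_scoring", "realtime_remote_biometric_id"}
HIGH_RISK_AREAS = {"employment", "credit_scoring", "education", "critical_infrastructure"}
TRANSPARENCY_ONLY = {"chatbot", "deepfake_generation"}

def classify(use_case: str) -> RiskTier:
    """Map an intended-purpose tag to a risk tier (simplified)."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_ONLY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

An unacceptable-risk result blocks the use case before development starts; anything else routes it into the matching compliance workflow.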

Conformity Assessment Workflows (Annex VI / VII)

Automate the rigorous documentation required for High-Risk AI systems.

  • Evidence Collection: Dedicated workflows prompt stakeholders to provide necessary technical documentation, quality management system details, and risk mitigation strategies.
  • Declaration of Conformity: Upon successful review and approval, Qarion automatically generates the required EU Declaration of Conformity.
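The gating behind the workflow above can be sketched as a checklist: every evidence item must be reviewed and approved before a declaration can be produced. The class and field names below are hypothetical, not Qarion's actual data model.

```python
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    """One piece of required evidence (e.g. technical documentation)."""
    name: str
    approved: bool = False

@dataclass
class ConformityAssessment:
    system_name: str
    items: list  # list[EvidenceItem]

    def ready_for_declaration(self) -> bool:
        """All evidence must be approved before a declaration is issued."""
        return all(item.approved for item in self.items)

    def declaration(self) -> str:
        if not self.ready_for_declaration():
            raise ValueError("evidence review incomplete")
        return f"EU Declaration of Conformity: {self.system_name}"
```

The point of the design is that generation is impossible while any item is outstanding, so the declaration always reflects a completed review.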

Performance & Safety Cards (Art. 15)

Demonstrate that your AI systems meet the Act's requirements for accuracy, robustness, and cybersecurity.

  • Standardized Metrics: Log and visualize fairness, accuracy, and security metrics directly on the AI product's profile.
  • Temporal Tracking: Monitor how an AI system's performance drifts over time, triggering retraining or human review when thresholds are breached.
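The threshold check behind temporal tracking reduces to comparing a monitored metric against its baseline. A minimal sketch, assuming a single accuracy-style metric and an invented tolerance band; real monitoring tracks multiple metrics per system:

```python
def check_drift(baseline: float, current: float, tolerance: float = 0.05) -> str:
    """Compare a monitored metric (e.g. accuracy) against its baseline.

    Returns "ok" while degradation stays inside the tolerance band, and
    "review" once it exceeds it -- the trigger for retraining or human review.
    """
    degradation = baseline - current
    return "review" if degradation > tolerance else "ok"
```

Running this on each new evaluation snapshot turns passive metric logging into an active alerting rule.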

Regulatory Incident Reporting (Art. 73)

Ensure rapid response to serious incidents or malfunctions involving AI systems.

  • Deadline Tracking: Log regulatory incidents with an automated 15-day countdown for notifying national competent authorities.
  • Structured Follow-ups: Maintain an immutable log of investigation steps and root cause analyses.
  • PDF Export: Generate formatted incident reports ready for official submission.
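The deadline arithmetic is straightforward: the clock starts when the provider becomes aware of the incident and runs for 15 days (the Act's general deadline; shorter windows apply to some incident types). A minimal sketch with illustrative function names:

```python
from datetime import date, timedelta

# General serious-incident reporting window; some incident categories
# carry shorter deadlines under the Act.
REPORTING_WINDOW_DAYS = 15

def notification_deadline(awareness_date: date) -> date:
    """Latest date to notify the national competent authority."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS)

def days_remaining(awareness_date: date, today: date) -> int:
    """Countdown shown on the incident record; negative means overdue."""
    return (notification_deadline(awareness_date) - today).days
```

A negative countdown value is the natural signal for an overdue escalation.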

AI Literacy Obligations (Art. 4)

Ensure every team member interacting with AI systems has the required level of AI literacy.

  • Per-User Tracking: Each user profile records their literacy status, training program, completion date, and next renewal date.
  • LMS Webhook Integration: Connect your Learning Management System to automatically update literacy records when users complete training.
  • Compliance Reporting: Filter and export literacy status across teams to demonstrate Article 4 compliance during audits.
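An LMS completion event updating a literacy record can be sketched as a small webhook handler. The payload fields, record schema, and annual renewal interval below are all assumptions for illustration, not Qarion's actual webhook contract:

```python
from datetime import date, timedelta

# In-memory stand-in for the per-user literacy records store.
literacy_records = {}

def handle_lms_webhook(payload: dict) -> dict:
    """Apply an LMS course-completion event to the user's literacy record."""
    completed = date.fromisoformat(payload["completed_at"])
    record = {
        "user_id": payload["user_id"],
        "program": payload["course"],
        "status": "compliant",
        "completed_at": completed,
        # Annual refresh cycle assumed for this sketch.
        "renewal_due": completed + timedelta(days=365),
    }
    literacy_records[payload["user_id"]] = record
    return record
```

Because the handler stamps both completion and renewal dates, the same records drive the audit-time compliance reports without manual upkeep.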

Logging Requirements (Art. 12)

Document and govern the automatic logging capabilities of high-risk AI systems.

  • Logging Governance Fields: Record which event types are logged, when the logging configuration was last reviewed, and any notes on retention and access policies.
  • Conformity Assessment Evidence: Logging governance data is automatically linked as evidence when completing Article 12 conformity assessments.
  • Audit Trail: Maintain an immutable record of logging policy reviews and changes over time.
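One way to picture the governance fields and their audit trail is as immutable review snapshots ordered by review date. The record fields below are illustrative, not Qarion's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: snapshots cannot be mutated after creation
class LoggingGovernanceRecord:
    """One review snapshot of a system's logging configuration."""
    system_name: str
    logged_events: tuple  # e.g. ("inference", "operator_override")
    last_reviewed: date
    retention_notes: str = ""

def review_history(records):
    """Order snapshots chronologically to form the audit trail."""
    return sorted(records, key=lambda r: r.last_reviewed)
```

Appending a new snapshot per review, rather than editing the old one, is what keeps the trail immutable; the ordered history is then linkable as Article 12 evidence.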

General-Purpose AI (GPAI) Obligations

Manage the distinct transparency and systemic-risk requirements for general-purpose and foundation models.

  • GPAI Dashboard: A centralized view to track compliance with copyright policies, energy consumption documentation, and downstream deployer transparency requirements.