AI Governance
Qarion enables organizations to register, classify, and monitor AI systems alongside traditional data assets. This page covers how the platform supports responsible AI governance, including risk classification aligned with the EU AI Act.
AI Systems as Data Products
AI systems — such as trained machine learning models, automated decision-making pipelines, and inference endpoints — are registered in the Data Catalog as a specialized product type. This means they benefit from all the standard catalog features: metadata management, lineage tracking, governance assignments, quality checks, and access controls.
By treating AI systems as first-class data products, organizations gain a unified view of their entire data and AI landscape within a single governance platform.
Risk Classification
EU AI Act Alignment
The EU AI Act establishes a risk-based framework for regulating AI systems. Qarion mirrors this framework by allowing organizations to assign a Risk Classification to each AI system product.
The available risk tiers are:
- Unacceptable Risk — AI systems that are prohibited under the EU AI Act (e.g., social scoring, real-time remote biometric identification in publicly accessible spaces)
- High Risk — Systems requiring strict compliance measures (e.g., credit scoring, hiring tools, medical devices)
- Limited Risk — Systems with transparency obligations (e.g., chatbots, emotion recognition)
- Minimal Risk — Systems with no specific regulatory requirements (e.g., spam filters, content recommendation)
Assigning Risk Classification
When creating or editing an AI System product, select the appropriate risk tier from the Risk Classification dropdown. This classification drives downstream governance behaviors, including the level of documentation required, the approval workflows triggered, and the monitoring intensity applied.
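The idea that a classification drives downstream governance behavior can be sketched as a simple lookup. The tier names mirror the list above, but the policy table, field names, and `policy_for` helper are illustrative assumptions, not Qarion's actual schema:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers mirroring the EU AI Act framework."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping from risk tier to the governance behavior it triggers:
# documentation depth, approval workflow, and monitoring intensity.
GOVERNANCE_POLICY = {
    RiskTier.UNACCEPTABLE: {"deployable": False, "approval": None,          "monitoring": None},
    RiskTier.HIGH:         {"deployable": True,  "approval": "multi-stage", "monitoring": "continuous"},
    RiskTier.LIMITED:      {"deployable": True,  "approval": "single-stage","monitoring": "periodic"},
    RiskTier.MINIMAL:      {"deployable": True,  "approval": None,          "monitoring": "basic"},
}

def policy_for(tier: RiskTier) -> dict:
    """Look up the governance behavior a risk classification should trigger."""
    return GOVERNANCE_POLICY[tier]
```

Centralizing the tier-to-policy mapping in one table keeps classification changes and their downstream effects in sync.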
Risk Assessment
What is a Risk Assessment?
A Risk Assessment is a structured evaluation of an AI system's potential risks and the mitigations in place to address them. Each assessment captures the identified risks, the severity and likelihood of each, the mitigation actions planned or completed, and an overall risk determination.
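The structure described above (identified risks, severity and likelihood, and an overall determination) could be modeled roughly as follows. The class names, 1-to-5 scales, and scoring thresholds are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One identified risk, scored on assumed 1 (low) to 5 (high) scales."""
    description: str
    severity: int
    likelihood: int

    @property
    def score(self) -> int:
        # A simple severity x likelihood product, a common scoring heuristic.
        return self.severity * self.likelihood

@dataclass
class RiskAssessment:
    system_name: str
    risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def overall_determination(self) -> str:
        """Derive an overall tier from the highest individual risk score.

        The cut-off values (15 and 8) are placeholders; a real deployment
        would tune these to its own risk matrix.
        """
        if not self.risks:
            return "minimal"
        top = max(r.score for r in self.risks)
        if top >= 15:
            return "high"
        if top >= 8:
            return "limited"
        return "minimal"
```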
Creating an Assessment
Navigate to an AI System product's detail page, open the Risk Assessment tab, and click New Assessment. Fill in the assessment details, including the risk factors, scoring criteria, and proposed mitigations.
Mitigation Tracking
Each risk assessment contains a list of mitigation actions. These actions have their own lifecycle — they can be marked as Planned, In Progress, or Completed. This tracking provides a clear audit trail showing that identified risks have been actively managed.
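The three-state lifecycle above can be enforced with a small state machine. The status values come from the text; the transition rules (forward-only, completed is terminal) are an assumption about how the lifecycle might reasonably behave:

```python
from enum import Enum

class MitigationStatus(Enum):
    PLANNED = "planned"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"

# Assumed legal transitions: actions move forward only, and a
# completed action is terminal (preserving the audit trail).
_TRANSITIONS = {
    MitigationStatus.PLANNED:     {MitigationStatus.IN_PROGRESS},
    MitigationStatus.IN_PROGRESS: {MitigationStatus.COMPLETED},
    MitigationStatus.COMPLETED:   set(),
}

def advance(current: MitigationStatus, target: MitigationStatus) -> MitigationStatus:
    """Move a mitigation action along its lifecycle, rejecting illegal jumps."""
    if target not in _TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.value} to {target.value}")
    return target
```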
Completing an Assessment
When all risk factors have been evaluated and mitigations documented, the assessment can be Completed. Upon completion, the resulting risk tier can optionally be synced back to the product's risk classification, ensuring that the catalog always reflects the most current assessment.
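The completion-and-sync step might look like the sketch below. The dict shapes and the `sync_tier` flag are stand-ins for illustration, not the platform's real data model:

```python
def complete_assessment(assessment: dict, product: dict, sync_tier: bool = True) -> dict:
    """Mark an assessment complete; optionally sync its tier to the product.

    Requires every risk factor to have been evaluated (scored) first,
    matching the precondition described in the text.
    """
    unevaluated = [r for r in assessment["risks"] if r.get("score") is None]
    if unevaluated:
        raise ValueError("all risk factors must be evaluated before completion")
    assessment["status"] = "completed"
    if sync_tier:
        # Push the resulting tier back so the catalog reflects the
        # most current assessment.
        product["risk_classification"] = assessment["resulting_tier"]
    return product
```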
Continuous Monitoring
AI systems benefit from specialized quality monitoring capabilities. Data Drift Detection monitors for changes in input data distributions that could degrade model performance. Model Performance Checks track accuracy, precision, recall, or custom metrics over time. Alert Integration surfaces drift warnings and performance degradation in the centralized Alerts Center, ensuring timely intervention.
These monitoring capabilities use the same Quality Engine as traditional data quality checks, providing a consistent operational experience across data and AI governance.
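The page does not specify which drift statistic the Quality Engine uses, but one widely used technique for detecting shifts in input distributions is the Population Stability Index (PSI). A minimal pure-Python sketch:

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Buckets are derived from the baseline's range; a PSI above ~0.2 is a
    common rule-of-thumb threshold for significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def proportions(values: list) -> list:
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / width), 0), bins - 1)
            counts[idx] += 1
        total = len(values)
        # Small epsilon avoids log(0) for empty buckets.
        return [max(c / total, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A check like this would run on a schedule against recent inference inputs, raising an alert when the index crosses the configured threshold.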
AI Literacy Tracking
Article 4 Compliance
The EU AI Act (Article 4) requires that providers and deployers of AI systems ensure their staff have a sufficient level of AI literacy. Qarion tracks AI literacy compliance at the individual user level.
Literacy Fields
Each user profile includes:
| Field | Description |
|---|---|
| Status | Current literacy status: not_started, in_progress, or completed |
| Program | Name of the training program completed (e.g., "EU AI Act Fundamentals") |
| Completion Date | When the training was completed |
| Next Renewal | When the certification needs to be renewed |
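The Next Renewal field in the table above could be derived from the completion date plus an organization-defined validity period. The one-year default below is an assumption for illustration:

```python
from datetime import date, timedelta

def next_renewal(completion_date: date, renewal_period_days: int = 365) -> date:
    """Compute when a literacy certification must be renewed.

    The 365-day period is an assumed default; organizations would set
    their own renewal policy.
    """
    return completion_date + timedelta(days=renewal_period_days)

def renewal_due(completion_date: date, today: date, renewal_period_days: int = 365) -> bool:
    """True once the renewal date has been reached or passed."""
    return today >= next_renewal(completion_date, renewal_period_days)
```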
Webhook Integration
Qarion integrates with external Learning Management Systems (LMS) via inbound webhooks. When a user completes an AI literacy course in your LMS, the platform receives a webhook payload that automatically updates the user's literacy status, completion date, program name, and renewal date.
This automation eliminates the need for manual status tracking and ensures that compliance records are kept current.
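A receiver for such a webhook might apply the payload to a user record as sketched below. The payload field names (`user_email`, `course_name`, `completed_at`, `renewal_date`) are hypothetical; the real shape depends on your LMS's webhook documentation:

```python
import json

def handle_literacy_webhook(raw_body: bytes, users: dict) -> dict:
    """Apply an LMS course-completion event to a user's literacy record.

    `users` stands in for whatever user store the platform maintains;
    the payload schema here is an assumption for illustration.
    """
    event = json.loads(raw_body)
    user = users.setdefault(event["user_email"], {})
    user.update(
        literacy_status="completed",
        literacy_program=event["course_name"],
        completion_date=event["completed_at"],
        next_renewal=event.get("renewal_date"),  # optional in this sketch
    )
    return user
```

In production, the endpoint would also verify the webhook's signature before trusting the payload.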
Operations Logging Governance
Article 12 Compliance
Article 12 of the EU AI Act requires high-risk AI systems to maintain automatic logging capabilities. Qarion provides structured fields to govern and document logging policies for each AI system.
Logging Governance Fields
On the AI system product detail page, the Operations Logging section captures:
| Field | Description |
|---|---|
| Logging Event Types | Categories of events being logged (e.g., "inference requests", "training runs", "data access") |
| Last Reviewed | Date the logging configuration was last reviewed for adequacy |
| Notes | Free-text documentation of logging policies, retention rules, and access controls |
Conformity Assessment Evidence
Operations logging governance data is linked as evidence in the conformity assessment workflow. When completing an Article 12 assessment, the logging governance fields are automatically surfaced as supporting documentation — demonstrating that the AI system's logging capabilities have been evaluated and maintained.
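Conceptually, surfacing the logging fields as evidence is a projection from the product record into the assessment. The field names below follow the table above, but the dict shapes are illustrative, not Qarion's actual schema:

```python
def collect_article12_evidence(product: dict) -> dict:
    """Project a product's logging governance fields into conformity
    assessment evidence (an illustrative sketch of the linkage)."""
    logging = product["operations_logging"]
    return {
        "article": "12",
        "evidence": {
            "event_types": logging["event_types"],
            "last_reviewed": logging["last_reviewed"],
            "notes": logging["notes"],
        },
    }
```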
Compliance Benefits
Registering AI systems in Qarion and maintaining risk assessments supports compliance with several regulatory frameworks. For the EU AI Act, it provides documentation of risk classification, conformity assessments, AI literacy tracking, operations logging governance, and human oversight measures. For GDPR, it tracks the data used to train models and the purposes for which AI systems process personal data. Organizations benefit from a complete, auditable history of AI system governance decisions.
Learn More
- EU AI Act Compliance — Detailed guidance on EU AI Act support
- Mitigation & Remediation — Risk mitigation workflows
- Data Catalog Overview — Managing data products including AI systems
- AI Feedback Loop — Providing feedback on AI-generated suggestions