DQ Config (YAML)
Qarion supports configuration-driven data quality through YAML files. Rather than creating checks one-by-one through the UI or API, you define all your quality rules in a single version-controlled file, then use the CLI or SDK to sync them to the platform and execute them.
This approach brings several benefits:
- Version control — Quality rules live alongside your data transformation code, so every change is tracked in Git with a full audit trail.
- Code review — Rule changes go through pull requests, ensuring that thresholds, queries, and schedules are reviewed before deployment.
- Reproducibility — A new environment can be bootstrapped by running `qarion quality apply` against the same config file.
- CI/CD integration — Quality gates can be embedded directly into deployment pipelines.
File Format
A DQ config file is a YAML document with the following top-level keys:
| Key | Type | Required | Description |
|---|---|---|---|
| version | string | no | Config schema version (default "1.0") |
| space | string | yes | Target space slug — all checks will be created in this space |
| defaults | object | no | Default values inherited by every check in the file |
| checks | list | yes | One or more check definitions (minimum 1) |
Defaults
The defaults block lets you set values that every check inherits unless explicitly overridden. This avoids repeating the same connector or schedule across dozens of checks.
| Key | Type | Description |
|---|---|---|
| connector | string | Default connector slug for execution |
| schedule | string | Default cron schedule expression |
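For instance, a file-wide connector and schedule can be declared once and selectively overridden per check (the slugs below are illustrative):

```yaml
defaults:
  connector: warehouse-postgres   # inherited by every check below
  schedule: "0 6 * * *"

checks:
  - slug: orders-exist
    name: Orders Table Has Rows
    type: sql_metric
    query: "SELECT COUNT(*) FROM analytics.orders"
    schedule: "0 * * * *"         # overrides the default schedule
```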
Check Definition
Each entry in the checks list defines a single quality rule:
| Key | Type | Required | API Mapping | Description |
|---|---|---|---|---|
| slug | string | yes | slug | Unique identifier within the space (URL-safe) |
| name | string | yes | name | Human-readable display name |
| type | string | yes | check_type | Check type (see Supported Types) |
| description | string | no | description | Explains the purpose of the check |
| query | string | no | query | SQL query for SQL-based check types |
| connector | string | no | — | Connector slug (overrides the default; resolved at apply time) |
| product | string | no | product_slug | Target data product slug to link the check to |
| schedule | string | no | schedule_cron | Cron expression (overrides the default) |
| thresholds | object | no | threshold_config | Pass/fail threshold configuration (see Thresholds) |
| configuration | object | no | configuration | Type-specific configuration (see per-type docs below) |
| parameters | list | no | parameters | Parameterized query variable definitions (see Parameters) |
Supported Connectors
YAML-configured checks require a quality connector to execute SQL against your data source. The following connector types are supported:
| Connector | Key | Notes |
|---|---|---|
| PostgreSQL | quality_postgres | PostgreSQL 12+ |
| Snowflake | quality_snowflake | All Snowflake editions |
| BigQuery | quality_bigquery | Requires service account credentials |
| File / CSV | quality_file | For file-based quality checks |
Specify the connector by its slug (the human-readable identifier assigned when creating the connector in the UI or API). The slug goes in the defaults.connector or per-check connector field.
If a check's configuration omits table_name but the check is linked to a product, the executor automatically derives the table from the product's hosting_location field (format: ConnectorName.schema.table → uses schema.table).
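The derivation itself is simple string handling. The sketch below is illustrative only (the function name and error handling are assumptions, not the platform's actual code):

```python
def derive_table(hosting_location: str) -> str:
    # Illustrative sketch: drop the leading connector name and keep
    # schema.table (format: ConnectorName.schema.table)
    parts = hosting_location.split(".")
    if len(parts) < 3:
        raise ValueError(f"unexpected hosting_location: {hosting_location!r}")
    return ".".join(parts[-2:])

print(derive_table("WarehouseSnowflake.analytics.orders"))  # → analytics.orders
```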
Supported Check Types
The type field accepts any of the following values, organized into logical groups.
SQL Checks
These checks execute arbitrary SQL against your data source. The query is rendered with {{param}} placeholder substitution before execution.
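Placeholder rendering can be sketched as a small regex substitution (illustrative only; the platform's renderer may differ in details such as escaping):

```python
import re

def render_query(query: str, params: dict) -> str:
    # Replace each {{name}} placeholder with its parameter value
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(params[m.group(1)]),
        query,
    )

sql = "SELECT COUNT(*) FROM analytics.events WHERE event_date = '{{run_date}}'"
print(render_query(sql, {"run_date": "2024-01-15"}))
```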
sql_metric
Runs a query that returns a single numeric value, then evaluates it against a threshold. The first column of the first row is extracted as the metric value. This is the most flexible check type for custom business rules.
| Config Field | Required | Description |
|---|---|---|
| query (or top-level query) | ✅ | SQL returning a single numeric value |
| thresholds | Recommended | Pass/fail criteria (see Thresholds) |
Measures: The numeric value returned by the first column of the first row.
```yaml
# Row count check with warning threshold
- slug: orders-row-count
  name: Orders Row Count
  type: sql_metric
  query: "SELECT COUNT(*) FROM analytics.orders"
  product: orders-table
  thresholds:
    operator: gte
    value: 1000
    warn: 500

# Revenue check — ensure daily revenue exceeds baseline
- slug: daily-revenue-check
  name: Daily Revenue ≥ $10K
  type: sql_metric
  query: >
    SELECT COALESCE(SUM(amount), 0)
    FROM analytics.orders
    WHERE order_date = '{{run_date}}'
  thresholds:
    operator: gte
    value: 10000
  parameters:
    - name: run_date
      type: string
      default: "CURRENT_DATE"
```
sql_condition
Runs a query and fails if any rows are returned. The actual value is the row count. Useful for asserting "this should never happen" conditions — no threshold configuration needed.
| Config Field | Required | Description |
|---|---|---|
| query (or top-level query) | ✅ | SQL selecting violating rows |
Measures: Count of rows returned (0 = pass, >0 = fail).
```yaml
# Referential integrity — no orphaned foreign keys
- slug: no-orphaned-orders
  name: No Orphaned Orders
  type: sql_condition
  query: >
    SELECT o.id
    FROM analytics.orders o
    LEFT JOIN analytics.customers c ON o.customer_id = c.id
    WHERE c.id IS NULL

# Business rule — no future-dated transactions
- slug: no-future-dates
  name: No Future Order Dates
  type: sql_condition
  query: "SELECT * FROM analytics.orders WHERE order_date > CURRENT_DATE"
```
Field-Level Checks
Field-level checks target a single column in a table. They require a configuration block with at least field_name and table_name (or a linked product with a hosting_location).
Common configuration fields for all field checks:
| Config Field | Required | Description |
|---|---|---|
| field_name | ✅ | Column name to validate |
| table_name | ✅* | Fully qualified table name (schema.table). *Auto-derived from product if omitted. |
null_check
Measures: Null percentage (0–100%). Fails if null percentage exceeds max_null_percentage (default: 0%).
| Config Field | Default | Description |
|---|---|---|
| max_null_percentage | 0 | Maximum acceptable null percentage (0 = no nulls allowed) |
```yaml
# Strict — no nulls allowed (default)
- slug: users-email-not-null
  name: Users Email Not Null
  type: null_check
  product: users-table
  configuration:
    field_name: email
    table_name: analytics.users

# Lenient — allow up to 5% nulls
- slug: users-bio-nulls
  name: Users Bio Null Tolerance
  type: null_check
  product: users-table
  configuration:
    field_name: bio
    table_name: analytics.users
    max_null_percentage: 5
```
uniqueness
Measures: Count of duplicate value groups. Fails if count > 0.
```yaml
- slug: users-id-unique
  name: Users ID Uniqueness
  type: uniqueness
  product: users-table
  configuration:
    field_name: id
    table_name: analytics.users
```
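Conceptually, the measure is the number of distinct values that occur more than once. A minimal SQLite sketch of that query (the platform's actual generated SQL may differ):

```python
import sqlite3

# In-memory table with two duplicated values: 2 and 3
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)", [(1,), (2,), (2,), (3,), (3,)])

# Count duplicate value groups — the quantity a uniqueness check measures
dup_groups = conn.execute(
    "SELECT id, COUNT(*) FROM users GROUP BY id HAVING COUNT(*) > 1"
).fetchall()
print(len(dup_groups))  # → 2, so the check would fail
```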
type_check
Measures: Count of rows with type violations. Fails if count > 0.
| Config Field | Required | Description |
|---|---|---|
| expected_type | ✅ | Expected data type (numeric, integer, date, timestamp, boolean, text) |
```yaml
- slug: orders-amount-numeric
  name: Orders Amount Type Check
  type: type_check
  product: orders-table
  configuration:
    field_name: amount
    table_name: analytics.orders
    expected_type: numeric
```
range_check
Measures: Count of rows with values outside the specified range. Fails if count > 0. Both min_value and max_value are optional — you can set just one for open-ended ranges.
| Config Field | Required | Description |
|---|---|---|
| min_value | No | Minimum acceptable value (inclusive) |
| max_value | No | Maximum acceptable value (inclusive) |
```yaml
# Closed range — amount between 0 and 1M
- slug: orders-amount-range
  name: Orders Amount Range
  type: range_check
  product: orders-table
  configuration:
    field_name: amount
    table_name: analytics.orders
    min_value: 0
    max_value: 1000000

# Open-ended — age must be positive
- slug: users-age-positive
  name: User Age Positive
  type: range_check
  product: users-table
  configuration:
    field_name: age
    table_name: analytics.users
    min_value: 0
```
pattern_check
Measures: Count of rows where the value does not match the regex. Fails if count > 0. Uses PostgreSQL ~ regex operator.
| Config Field | Required | Description |
|---|---|---|
| pattern | ✅ | POSIX regular expression |
```yaml
# Email format validation
- slug: users-email-format
  name: Email Format Validation
  type: pattern_check
  product: users-table
  configuration:
    field_name: email
    table_name: analytics.users
    pattern: "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"

# Phone number format (E.164)
- slug: users-phone-format
  name: Phone Number Format
  type: pattern_check
  product: users-table
  configuration:
    field_name: phone
    table_name: analytics.users
    pattern: "^\\+[1-9]\\d{1,14}$"
```
enum_check
Measures: Count of rows with values not in the allowed list. Fails if count > 0. NULL values are excluded from the check (not counted as violations).
| Config Field | Required | Description |
|---|---|---|
| allowed_values | ✅ | List of valid values |
```yaml
- slug: orders-status-enum
  name: Order Status Validation
  type: enum_check
  product: orders-table
  configuration:
    field_name: status
    table_name: analytics.orders
    allowed_values:
      - pending
      - confirmed
      - shipped
      - delivered
      - cancelled
```
length_check
Measures: Count of rows with string length outside bounds. Fails if count > 0.
| Config Field | Required | Description |
|---|---|---|
| min_length | No | Minimum string length (inclusive) |
| max_length | No | Maximum string length (inclusive) |
```yaml
- slug: users-name-length
  name: User Name Length
  type: length_check
  product: users-table
  configuration:
    field_name: display_name
    table_name: analytics.users
    min_length: 1
    max_length: 255
```
freshness_check
Measures: Hours since the most recent timestamp in the column (computed as EXTRACT(EPOCH FROM (NOW() - MAX(column))) / 3600). Fails if the age exceeds max_age_hours.
| Config Field | Default | Description |
|---|---|---|
| max_age_hours | 24 | Maximum acceptable age in hours |
```yaml
# Standard 24h freshness
- slug: orders-freshness
  name: Orders Table Freshness
  type: freshness_check
  product: orders-table
  configuration:
    field_name: updated_at
    table_name: analytics.orders
    max_age_hours: 24

# Strict SLA — data must be <1 hour old
- slug: events-realtime-freshness
  name: Events Real-Time Freshness
  type: freshness_check
  product: events-stream
  configuration:
    field_name: event_timestamp
    table_name: analytics.events
    max_age_hours: 1
```
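The age computation mirrors the EXTRACT(EPOCH ...) / 3600 formula above; a minimal Python sketch of the same arithmetic:

```python
from datetime import datetime, timedelta, timezone

def age_hours(max_timestamp: datetime, now: datetime) -> float:
    # Hours elapsed since the most recent timestamp in the column
    return (now - max_timestamp).total_seconds() / 3600

now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
latest = now - timedelta(hours=30)
print(age_hours(latest, now))       # 30.0
print(age_hours(latest, now) > 24)  # exceeds max_age_hours: 24, so the check fails
```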
Composite Checks
field_checks
Bundles multiple field-level assertions against a single table into one check. The platform generates a single consolidated SQL query using CASE expressions, so all assertions are evaluated in one table scan — significantly more efficient than running separate checks.
| Config Field | Required | Description |
|---|---|---|
| table_name | ✅* | Target table. *Auto-derived from linked product if omitted. |
| field_checks | ✅ | List of assertion objects (see below) |
| where_clause | No | Optional SQL WHERE clause to filter rows before checking |
Each assertion in the field_checks list:
| Field | Required | Description |
|---|---|---|
| column | ✅ | Column name |
| check_type | ✅ | One of: null_check, uniqueness, type_check, range_check, pattern_check, enum_check, length_check, freshness_check |
| type-specific fields | Varies | Same fields as the standalone check (e.g., pattern, allowed_values, min_value) |
Measures: Each sub-check is evaluated individually. The overall check fails if any sub-check fails. Results include per-field detail.
```yaml
# Comprehensive table health check
- slug: users-field-suite
  name: Users Field Quality Suite
  type: field_checks
  product: users-table
  configuration:
    table_name: analytics.users
    field_checks:
      - column: id
        check_type: uniqueness
      - column: email
        check_type: null_check
      - column: email
        check_type: pattern_check
        pattern: "^[a-zA-Z0-9._%+-]+@"
      - column: status
        check_type: enum_check
        allowed_values: [active, inactive, suspended]
      - column: age
        check_type: range_check
        min_value: 0
        max_value: 150

# With WHERE clause — only check active records
- slug: active-users-quality
  name: Active Users Quality
  type: field_checks
  product: users-table
  configuration:
    table_name: analytics.users
    where_clause: "status = 'active'"
    field_checks:
      - column: email
        check_type: null_check
      - column: last_login
        check_type: freshness_check
        max_age_hours: 720 # 30 days
```
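To see why one table scan suffices, here is a hedged sketch of how such consolidation might look, shown for null assertions only (the platform's actual generated SQL may differ):

```python
def build_null_case(column):
    # One CASE aggregate per assertion; all are evaluated in a single scan
    return f"SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END) AS {column}_nulls"

def consolidated_query(table, columns, where=None):
    select_list = ", ".join(build_null_case(c) for c in columns)
    sql = f"SELECT {select_list} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql

print(consolidated_query("analytics.users", ["email", "bio"], "status = 'active'"))
```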
reconciliation
Compares results from two SQL queries. Both queries must return a single numeric value. Supports three comparison modes.
| Config Field | Required | Description |
|---|---|---|
| source_query | ✅ | SQL returning a single numeric value (the "expected" value) |
| target_query | ✅ | SQL returning a single numeric value (the "actual" value) |
| comparison_mode | No | exact (default), percentage, or absolute |
| tolerance | No | Acceptable difference threshold (default: 0) |
Comparison modes:
| Mode | Calculation | Pass condition |
|---|---|---|
| exact | source == target | Values are identical |
| percentage | abs(source - target) / abs(source) * 100 | Percentage diff ≤ tolerance |
| absolute | abs(source - target) | Absolute diff ≤ tolerance |
```yaml
# Exact match — staging vs production row counts
- slug: orders-count-reconciliation
  name: Orders Count Match
  type: reconciliation
  configuration:
    source_query: "SELECT COUNT(*) FROM staging.orders"
    target_query: "SELECT COUNT(*) FROM prod.orders"
    comparison_mode: exact

# Percentage tolerance — revenue comparison
- slug: revenue-reconciliation
  name: Revenue Reconciliation
  type: reconciliation
  configuration:
    source_query: "SELECT SUM(amount) FROM staging.revenue"
    target_query: "SELECT SUM(amount) FROM prod.revenue"
    comparison_mode: percentage
    tolerance: 1 # Allow 1% difference
```
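The three comparison modes reduce to a few lines; this sketch mirrors the mode table above and is illustrative, not the platform's implementation:

```python
def reconcile(source, target, mode="exact", tolerance=0):
    # Evaluate the pass condition for each comparison mode
    if mode == "exact":
        return source == target
    if mode == "percentage":
        return abs(source - target) / abs(source) * 100 <= tolerance
    if mode == "absolute":
        return abs(source - target) <= tolerance
    raise ValueError(f"unknown comparison_mode: {mode}")

print(reconcile(100, 100))                             # exact match → True
print(reconcile(100, 101, "percentage", tolerance=1))  # 1% diff → True
print(reconcile(100, 103, "absolute", tolerance=2))    # diff of 3 > 2 → False
```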
Other Check Types
| Type | Description |
|---|---|
| anomaly | Statistical anomaly detection on a metric time series |
| custom | Fully custom check logic (typically used with external execution) |
| manual | Human-entered value — prompts for manual input when triggered |
Thresholds
The thresholds object defines the pass/fail criteria for checks that produce a numeric value (primarily sql_metric). If omitted, the platform uses type-specific default evaluation logic (e.g., null_check uses max_null_percentage, freshness_check uses max_age_hours).
Threshold Fields
| Key | Type | Description |
|---|---|---|
| operator | string | Comparison operator (see table below) |
| value | number | Threshold value (used with single-value operators) |
| min | number | Minimum value (used with between operator) |
| max | number | Maximum value (used with between operator) |
| warn | number | Warning threshold — produces warning instead of fail |
| warn_threshold | number | Alternative key for warning (field-level checks) |
| fail_threshold | number | Alternative key for fail threshold (field-level checks) |
Supported Operators
| Operator | Meaning | Example Use Case |
|---|---|---|
| eq | Equal to | Value must be exactly 100 |
| ne | Not equal to | Value must not be 0 |
| gt | Greater than | Value must be > 0 |
| gte | Greater than or equal | Value must be ≥ 1000 |
| lt | Less than | Value must be < 100 |
| lte | Less than or equal | Value must be ≤ 5 |
| between | Within range | Requires min and max instead of value |
Examples
```yaml
# Simple — row count must be at least 1000
thresholds:
  operator: gte
  value: 1000

# With warning — fail <1000, warn <5000, pass ≥5000
thresholds:
  operator: gte
  value: 1000
  warn: 5000

# Range — value must be between 100 and 10000
thresholds:
  operator: between
  min: 100
  max: 10000

# Not equal — ensure value is never zero
thresholds:
  operator: ne
  value: 0
```
Field-level checks (null_check, range_check, etc.) also support warn_threshold and fail_threshold in the top-level threshold_config, which override the check-specific defaults. For example, a null_check with max_null_percentage: 0 will fail on any nulls, but you can add a fail_threshold to only fail above a certain absolute count.
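Evaluation with a warn band can be sketched as follows, assuming the semantics of the "With warning" example above (pass the hard threshold first, then apply the stricter warn threshold). This is illustrative, not the platform's evaluator:

```python
OPS = {
    "eq": lambda v, t: v == t,
    "ne": lambda v, t: v != t,
    "gt": lambda v, t: v > t,
    "gte": lambda v, t: v >= t,
    "lt": lambda v, t: v < t,
    "lte": lambda v, t: v <= t,
}

def evaluate(value, operator, threshold=None, lo=None, hi=None, warn=None):
    # "between" uses lo/hi instead of a single threshold
    if operator == "between":
        return "passed" if lo <= value <= hi else "failed"
    if not OPS[operator](value, threshold):
        return "failed"
    # Hard threshold met; check the (stricter) warning threshold, if any
    if warn is not None and not OPS[operator](value, warn):
        return "warning"
    return "passed"

print(evaluate(6000, "gte", threshold=1000, warn=5000))  # passed
print(evaluate(3000, "gte", threshold=1000, warn=5000))  # warning
print(evaluate(500, "gte", threshold=1000, warn=5000))   # failed
```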
Parameters
Parameterised queries let you define variables that are substituted at execution time. This is useful for date-partitioned checks or environment-specific values.
```yaml
- slug: daily-row-count
  name: Daily Row Count
  type: sql_metric
  query: "SELECT COUNT(*) FROM analytics.events WHERE event_date = '{{run_date}}'"
  thresholds:
    operator: gte
    value: 10000
  parameters:
    - name: run_date
      type: string
      default: "2024-01-15"
      description: "Target date partition"
```
Variables use double-brace syntax ({{variable_name}}) in the query and are resolved from the parameters list at runtime. You can override parameter values when triggering a run through the SDK or CLI.
Examples
Minimal Config
The simplest possible config file defines a single check:
```yaml
version: "1.0"
space: acme-analytics

checks:
  - slug: orders-exist
    name: Orders Table Has Rows
    type: sql_metric
    query: "SELECT COUNT(*) FROM analytics.orders"
    thresholds:
      operator: gte
      value: 1
```
E-Commerce Data Quality Suite
A real-world config for an e-commerce analytics team covering freshness, completeness, validity, and cross-source consistency:
```yaml
version: "1.0"
space: ecommerce-analytics

defaults:
  connector: warehouse-snowflake
  schedule: "0 6 * * *" # Every day at 6 AM

checks:
  # ─── FRESHNESS ───────────────────────────────────
  - slug: orders-freshness
    name: Orders Table Freshness
    type: freshness_check
    product: orders-table
    schedule: "*/30 * * * *" # Every 30 minutes for critical tables
    configuration:
      field_name: updated_at
      table_name: analytics.orders
      max_age_hours: 2

  # ─── COMPLETENESS ────────────────────────────────
  - slug: orders-no-nulls
    name: Orders Required Fields
    type: field_checks
    product: orders-table
    configuration:
      table_name: analytics.orders
      field_checks:
        - column: order_id
          check_type: uniqueness
        - column: customer_id
          check_type: null_check
        - column: amount
          check_type: null_check
        - column: order_date
          check_type: null_check
        - column: amount
          check_type: range_check
          min_value: 0

  # ─── VALIDITY ────────────────────────────────────
  - slug: orders-status-valid
    name: Order Status Values
    type: enum_check
    product: orders-table
    configuration:
      field_name: status
      table_name: analytics.orders
      allowed_values:
        - pending
        - confirmed
        - shipped
        - delivered
        - cancelled
        - refunded

  # ─── BUSINESS RULES ─────────────────────────────
  - slug: no-future-orders
    name: No Future-Dated Orders
    type: sql_condition
    product: orders-table
    query: "SELECT * FROM analytics.orders WHERE order_date > CURRENT_DATE"

  - slug: daily-revenue
    name: Daily Revenue ≥ $10K
    type: sql_metric
    product: orders-table
    query: >
      SELECT COALESCE(SUM(amount), 0)
      FROM analytics.orders
      WHERE order_date = CURRENT_DATE - INTERVAL '1 day'
    thresholds:
      operator: gte
      value: 10000
      warn: 5000

  # ─── CONSISTENCY ─────────────────────────────────
  - slug: staging-prod-orders
    name: Staging vs Prod Order Count
    type: reconciliation
    configuration:
      source_query: "SELECT COUNT(*) FROM staging.orders"
      target_query: "SELECT COUNT(*) FROM prod.orders"
      comparison_mode: percentage
      tolerance: 1

  # ─── EXTERNAL (Great Expectations) ───────────────
  - slug: ge-customer-validation
    name: GE Customer Validation
    type: custom
    description: "Result pushed from GE checkpoint"
    product: customers-table
```
Marketing Analytics with Parameters
Parameterised checks for a marketing team running daily validations against date partitions:
```yaml
version: "1.0"
space: marketing-analytics

defaults:
  connector: warehouse-bigquery
  schedule: "0 8 * * *"

checks:
  - slug: daily-impressions
    name: Daily Ad Impressions
    type: sql_metric
    query: >
      SELECT COUNT(*)
      FROM marketing.ad_impressions
      WHERE event_date = '{{run_date}}'
    thresholds:
      operator: gte
      value: 100000
    parameters:
      - name: run_date
        type: string
        default: "CURRENT_DATE - INTERVAL 1 DAY"
        description: "Partition date to validate"

  - slug: utm-params-valid
    name: UTM Parameters Not Null
    type: field_checks
    product: campaign-events
    configuration:
      table_name: marketing.campaign_events
      where_clause: "channel = 'paid'"
      field_checks:
        - column: utm_source
          check_type: null_check
        - column: utm_medium
          check_type: null_check
        - column: utm_campaign
          check_type: null_check
```
Multi-Product Config
You can define checks across multiple products in the same file, as long as they share the same space:
```yaml
version: "1.0"
space: acme-analytics

defaults:
  connector: warehouse-snowflake

checks:
  - slug: customers-freshness
    name: Customers Freshness
    type: freshness_check
    product: customers-table
    schedule: "0 7 * * *"
    configuration:
      field_name: updated_at
      table_name: analytics.customers
      max_age_hours: 24

  - slug: orders-freshness
    name: Orders Freshness
    type: freshness_check
    product: orders-table
    schedule: "0 8 * * *"
    configuration:
      field_name: created_at
      table_name: analytics.orders
      max_age_hours: 12

  - slug: events-uniqueness
    name: Events ID Uniqueness
    type: uniqueness
    product: events-stream
    configuration:
      field_name: event_id
      table_name: analytics.events
```
Workflow
A typical workflow involves three steps: validate, apply, and run.
1. Validate
Check the file for structural errors and verify that referenced connectors and products exist on the platform:
```shell
qarion quality validate -f qarion-dq.yaml
```
This command parses the YAML, validates all field types, and checks that the target space, connectors, and products are resolvable. It does not create or modify anything.
2. Apply
Sync definitions to the platform — creates missing checks and updates existing ones:
```shell
qarion quality apply -f qarion-dq.yaml
```
The apply command is idempotent. Running it multiple times with the same config produces no changes. It matches checks by slug within the target space.
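Conceptually, slug-based matching partitions the config into created, updated, and unchanged sets. A hypothetical sketch of that bookkeeping (not the CLI's actual code):

```python
def plan_apply(desired, existing):
    # Match checks by slug; compare definitions to decide the action
    created = [s for s in desired if s not in existing]
    updated = [s for s in desired if s in existing and desired[s] != existing[s]]
    unchanged = [s for s in desired if s in existing and desired[s] == existing[s]]
    return {"created": created, "updated": updated, "unchanged": unchanged}

existing = {"orders-exist": {"type": "sql_metric"}}
desired = {
    "orders-exist": {"type": "sql_metric"},
    "orders-fresh": {"type": "freshness_check"},
}
print(plan_apply(desired, existing))
# A second apply with the same config lands everything in "unchanged"
```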
3. Run
Execute all checks defined in the config and record results:
```shell
qarion quality run-config -f qarion-dq.yaml
```
Use --no-record to execute checks without persisting results to the platform (useful for local testing):
```shell
qarion quality run-config -f qarion-dq.yaml --no-record
```
SDK Usage
The same workflow is available programmatically through the Python SDK:
```python
from qarion import QarionSyncClient
from qarion.models.dq_config import DqConfig

# Parse the YAML file
config = DqConfig.from_yaml("qarion-dq.yaml")

client = QarionSyncClient(api_key="qk_...")

# Step 1: Validate
errors = client.quality.validate_config(config)
if errors:
    for err in errors:
        print(f"Error: {err}")

# Step 2: Apply (upsert)
summary = client.quality.apply_config(config)
print(summary)  # {"created": [...], "updated": [...], "unchanged": [...]}

# Step 3: Run
results = client.quality.run_config(config)
for r in results:
    print(f"{r.status}: {r.value}")
```
Use record_results=False to skip recording:
```python
results = client.quality.run_config(config, record_results=False)
```
CI/CD Integration
GitHub Actions
Add a quality gate to your deployment pipeline that validates and runs checks after each push:
```yaml
name: Data Quality Gate

on:
  push:
    branches: [main]
    paths:
      - "qarion-dq.yaml"
      - "dbt/**"

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install qarion-cli

      - name: Validate config
        run: qarion quality validate -f qarion-dq.yaml
        env:
          QARION_API_KEY: ${{ secrets.QARION_API_KEY }}

      - name: Apply check definitions
        run: qarion quality apply -f qarion-dq.yaml
        env:
          QARION_API_KEY: ${{ secrets.QARION_API_KEY }}

      - name: Run quality checks
        run: qarion quality run-config -f qarion-dq.yaml
        env:
          QARION_API_KEY: ${{ secrets.QARION_API_KEY }}
```
Airflow / Orchestrator
Trigger a config-driven quality gate as a task in your pipeline:
```python
from airflow.operators.python import PythonOperator

def run_quality_gate():
    from qarion import QarionSyncClient
    from qarion.models.dq_config import DqConfig

    config = DqConfig.from_yaml("/opt/airflow/dags/qarion-dq.yaml")
    client = QarionSyncClient(api_key="qk_...")

    results = client.quality.run_config(config)
    failed = [r for r in results if not r.is_passed]
    if failed:
        raise Exception(f"{len(failed)} quality check(s) failed")

quality_gate = PythonOperator(
    task_id="quality_gate",
    python_callable=run_quality_gate,
)
```
External Tool Integration
You can combine native YAML-defined checks with results from external validation tools — giving you a unified quality dashboard regardless of where checks run.
The pattern is:
- Define a `custom` check in your config file (this creates a check slot on the platform)
- Run your external tool
- Push the outcome to Qarion via the CLI or SDK
Great Expectations
Define a custom check for each GE checkpoint you want to track:
```yaml
checks:
  - slug: ge-orders-validation
    name: GE Orders Validation
    type: custom
    description: "Result pushed from Great Expectations checkpoint"
    product: orders-table
```
After running your checkpoint, push the result:
```shell
qarion quality push acme-analytics ge-orders-validation \
  --status passed \
  --value 100.0
```
Or automate the entire flow in Python:
```python
import great_expectations as gx
from qarion import QarionSyncClient

# Run checkpoint
context = gx.get_context()
result = context.run_checkpoint("my_checkpoint")

# Derive status and value
status = "passed" if result.success else "failed"
passed_pct = (
    result.statistics["successful_expectations"]
    / result.statistics["evaluated_expectations"]
    * 100
)

# Push to Qarion
client = QarionSyncClient(api_key="qk_...")
client.quality.push_result(
    "acme-analytics",
    "ge-orders-validation",
    status=status,
    value=passed_pct,
)
```
dbt Tests
Define a `custom` (or `dbt_test`) check for each dbt test:
```yaml
checks:
  - slug: dbt-not-null-orders-id
    name: "dbt: not_null orders.id"
    type: custom
    product: orders-table
```
After dbt test, push results:
```shell
qarion quality push acme-analytics dbt-not-null-orders-id --status passed
```
For bulk automation, parse dbt test JSON output and push each result in a loop:
```python
import json
from qarion import QarionSyncClient

client = QarionSyncClient(api_key="qk_...")

with open("target/run_results.json") as f:
    run_results = json.load(f)

for result in run_results["results"]:
    slug = f"dbt-{result['unique_id'].replace('.', '-')}"
    status = "passed" if result["status"] == "pass" else "failed"
    client.quality.push_result("acme-analytics", slug, status=status)
```
General Pattern
Any tool that produces a pass/fail outcome can integrate using the push endpoint:
```shell
qarion quality push SPACE SLUG --status passed|failed|error [--value N]
```

```python
client.quality.push_result(space, slug, status="passed", value=99.5)
```
The pushed result is recorded as a new execution and triggers alerting if the check has threshold-based rules configured.
A single config file can contain both native checks (executed by Qarion) and custom check slots (populated by external tools). Use qarion quality apply to sync the full set, then run native checks with qarion quality run-config and push external results separately.
Related
- Quality Framework — Quality dimensions, severity, scheduling, and best practices
- Drift Detection Guide — Implement continuous monitoring for AI systems
- CLI Quality Commands — Full CLI command reference
- SDK Quality Resource — Python SDK method reference
- Quality Automation Tutorial — End-to-end programmatic setup guide