Testing Guide
Overview
Qarion ETL includes comprehensive test coverage for all major components. This document provides an overview of the test suite, how to run tests, and how to add new tests.
Test Structure
Test Files
- `test_repositories.py`: Tests for repository implementations (InMemory, Local, Database)
- `test_migration_service.py`: Tests for MigrationService core functionality
- `test_integration.py`: End-to-end integration tests
- `test_flow_to_dataset.py`: Tests for flow-to-dataset generation
- `test_flow_orchestration.py`: Tests for DAG generation and flow orchestration
- `test_transformation_instructions.py`: Tests for transformation instruction generation from flow plugins
- `test_sql_executor.py`: Tests for SQL executor, SQL generation, and formatting
Test Coverage
The test suite covers:
- Repository Layer (100% coverage)
  - InMemoryHistoryRepository
  - LocalDatasetRepository
  - LocalMigrationFileRepository
  - DatabaseHistoryRepository
- Migration Service (90%+ coverage)
  - Initialization
  - Schema comparison
  - DDL generation
  - Migration file generation
  - Forward/strict mode handling
- Flow System (90%+ coverage)
  - Flow plugin registration
  - Dataset generation from flows
  - DAG generation
  - Transformation instruction generation
- Transformation System (85%+ coverage)
  - Instruction generation
  - SQL executor
  - SQL formatting
  - Edge cases and error handling
- Integration Tests (100% coverage)
  - Full workflow tests
  - Multiple dataset handling
  - Mode-specific behavior
Running Tests
Basic Test Execution
```bash
# Run all tests
poetry run pytest

# Run with verbose output
poetry run pytest -v

# Run a specific test file
poetry run pytest tests/test_sql_executor.py

# Run a specific test class
poetry run pytest tests/test_sql_executor.py::TestSQLFormatting

# Run a specific test
poetry run pytest tests/test_sql_executor.py::TestSQLFormatting::test_format_sql_basic
```
Test Coverage
```bash
# Generate a terminal coverage report
poetry run pytest --cov=qarion-etl/qarion_etl --cov-report=term

# Generate an HTML coverage report
poetry run pytest --cov=qarion-etl/qarion_etl --cov-report=html

# View the HTML coverage report
open htmlcov/index.html
```
Test Markers
Tests can be marked for selective execution:
```bash
# Run only unit tests
poetry run pytest -m unit

# Run only integration tests
poetry run pytest -m integration
```
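For `-m unit` and `-m integration` to select anything, tests must carry the corresponding marker, and the marker names should be registered in `pytest.ini` or `pyproject.toml` to silence unknown-mark warnings. A minimal sketch of applying markers (the test names and bodies here are illustrative, not part of the real suite):

```python
import pytest

# `unit` and `integration` are assumed to be registered under
# [tool.pytest.ini_options] markers in pyproject.toml (or pytest.ini).
@pytest.mark.unit
def test_schema_diff_identical_schemas():
    # Fast, isolated check with no I/O.
    assert {"id": "int"} == {"id": "int"}

@pytest.mark.integration
def test_full_migration_roundtrip():
    # Slower end-to-end scenario; selected with `pytest -m integration`.
    assert list(range(3)) == [0, 1, 2]
```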
Writing New Tests
Test Structure
Follow the existing test patterns:
```python
import pytest
from unittest.mock import MagicMock


class TestMyFeature:
    """Tests for MyFeature."""

    @pytest.fixture
    def my_fixture(self):
        """Create a test fixture."""
        return MyObject()

    def test_my_feature_basic(self, my_fixture):
        """Test basic functionality."""
        result = my_fixture.do_something()
        assert result == expected_value
```
Best Practices
- Use descriptive test names: Test names should clearly describe what is being tested
- One assertion per test: Focus each test on a single behavior
- Use fixtures: Share common setup code via pytest fixtures
- Mock external dependencies: Use `unittest.mock` to isolate unit tests
- Test edge cases: Include tests for error conditions and boundary cases
- Document complex tests: Add docstrings explaining complex test scenarios
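The fixture and mocking practices above can be combined in one short sketch; `history_repo` and `latest_version` are illustrative names, not identifiers from the real codebase:

```python
import pytest
from unittest.mock import MagicMock


@pytest.fixture
def history_repo():
    """Shared setup: a mocked repository, so no database is touched."""
    repo = MagicMock()
    repo.latest_version.return_value = 3
    return repo


def test_next_version_increments(history_repo):
    # One assertion per test: only the version arithmetic is checked here.
    assert history_repo.latest_version() + 1 == 4
```

Under pytest, the fixture is injected automatically by matching the parameter name, so the test body never repeats the mock setup.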
Example: Adding a Test for New Feature
```python
# tests/test_my_feature.py
import pytest
from unittest.mock import MagicMock


class TestMyNewFeature:
    """Tests for my new feature."""

    @pytest.fixture
    def mock_dependency(self):
        """Create a mock dependency."""
        mock = MagicMock()
        mock.method.return_value = "expected"
        return mock

    def test_new_feature_basic(self, mock_dependency):
        """Test basic functionality of new feature."""
        from qarion_etl.my_module import MyFeature

        feature = MyFeature(mock_dependency)
        result = feature.execute()

        assert result == "expected"
        mock_dependency.method.assert_called_once()

    def test_new_feature_error_handling(self, mock_dependency):
        """Test error handling in new feature."""
        from qarion_etl.my_module import MyFeature

        mock_dependency.method.side_effect = ValueError("Error")
        feature = MyFeature(mock_dependency)

        with pytest.raises(ValueError, match="Error"):
            feature.execute()
```
CI/CD Integration
The project includes GitLab CI configuration (.gitlab-ci.yml) that:
- Runs all tests on every commit
- Generates coverage reports with detailed metrics
- Provides code statistics (lines of code, file counts)
CI Pipeline Stages
- test: Runs all tests and generates JUnit reports
- coverage: Generates coverage reports (HTML, XML, terminal)
- statistics: Provides code statistics (LOC, file counts)
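The stage layout could look roughly like the following; this is a sketch of the structure only, and the project's actual `.gitlab-ci.yml` is authoritative:

```yaml
# Sketch: stage names taken from the list above; script details are assumptions.
stages:
  - test
  - coverage
  - statistics

test:
  stage: test
  script:
    - poetry run pytest --junitxml=report.xml
  artifacts:
    reports:
      junit: report.xml
```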
Test Maintenance
Keeping Tests Up to Date
- Update tests when adding new features
- Refactor tests when refactoring code
- Remove obsolete tests for removed features
- Update test documentation when test structure changes
Test Performance
- Keep unit tests fast (< 1 second each)
- Use mocks to avoid slow I/O operations
- Group related tests in the same file
- Use fixtures to share expensive setup
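Sharing expensive setup can be done with fixture scope; a minimal sketch, where `reference_schema` and the sleep standing in for slow I/O are both illustrative:

```python
import time

import pytest


@pytest.fixture(scope="session")
def reference_schema():
    """Expensive setup runs once per test session and is reused by all tests."""
    time.sleep(0.01)  # stand-in for slow I/O, e.g. loading a large schema
    return {"orders": ["id", "amount"]}


def test_orders_has_id(reference_schema):
    assert "id" in reference_schema["orders"]


def test_orders_has_amount(reference_schema):
    assert "amount" in reference_schema["orders"]
```

With `scope="session"`, pytest builds the fixture once and hands the same object to every test that requests it, instead of rebuilding it per test.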
Troubleshooting
Common Issues
- Import errors: Ensure `sys.path` is set correctly in test files
- Fixture not found: Check the fixture is defined in `conftest.py` or the test file
- Mock not working: Verify the mock is properly configured with return values
- Test isolation: Ensure tests don't depend on execution order
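For the import-error case, one common pattern is a small `conftest.py` at the repository root that puts the package directory on `sys.path`. This is only a sketch; the `qarion-etl/` directory name is inferred from the coverage commands above and may not match the actual layout:

```python
# conftest.py at the repository root (sketch).
import sys
from pathlib import Path

# Assumed layout: the qarion_etl package lives under a qarion-etl/ subdirectory.
PACKAGE_DIR = Path(__file__).resolve().parent / "qarion-etl"
if str(PACKAGE_DIR) not in sys.path:
    sys.path.insert(0, str(PACKAGE_DIR))
```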
Debugging Tests
```bash
# Drop into the pdb debugger on failure
poetry run pytest --pdb

# Run with verbose output and show local variables
poetry run pytest -vv -l

# Show print statements (disable output capturing)
poetry run pytest -s
```
Related Documentation
- Test Summary - Detailed test coverage breakdown
- Transformation Instructions Testing - Transformation system tests
- CI/CD Configuration - GitLab CI pipeline configuration