This document describes the testing infrastructure and guidelines for contributing to ethoscopy.
Ethoscopy uses pytest as its testing framework with comprehensive unit tests, integration tests, and fixtures to ensure code reliability and prevent regressions.
First, install the testing dependencies:
```bash
pip install -e ".[dev]"
```

Run all tests:

```bash
pytest
```

Run with coverage reporting:

```bash
pytest --cov=ethoscopy --cov-report=term-missing
```

Or use the convenience script:

```bash
python run_tests.py
```

Tests are organized with markers:
- `@pytest.mark.unit` - Fast unit tests for individual functions
- `@pytest.mark.integration` - Integration tests for workflows
- `@pytest.mark.slow` - Tests that take longer to run
- `@pytest.mark.requires_data` - Tests needing actual ethoscope data
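Custom markers like these must be registered with pytest, otherwise test runs emit unknown-marker warnings. Registration normally lives in `pytest.ini` or `conftest.py`; the snippet below is an illustrative sketch using the standard `pytest_configure` hook, not necessarily ethoscopy's actual setup:

```python
# conftest.py (sketch) -- register the custom markers programmatically.
# pytest_configure is a standard pytest hook that runs once at startup.
def pytest_configure(config):
    for name, description in [
        ("unit", "Fast unit tests for individual functions"),
        ("integration", "Integration tests for workflows"),
        ("slow", "Tests that take longer to run"),
        ("requires_data", "Tests needing actual ethoscope data"),
    ]:
        config.addinivalue_line("markers", f"{name}: {description}")
```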
Run specific categories:
```bash
# Only unit tests (fast)
pytest -m unit

# Only integration tests
pytest -m integration

# Skip slow tests
pytest -m "not slow"

# Run specific test file
pytest tests/test_load.py

# Verbose output
pytest -v

# Stop on first failure
pytest -x

# Run tests in parallel
pytest -n auto

# Generate HTML coverage report
pytest --cov=ethoscopy --cov-report=html
```

The test suite is organized as follows:

```
tests/
├── __init__.py          # Test package init
├── conftest.py          # Shared fixtures and configuration
├── test_load.py         # Tests for loading functions
├── test_analyse.py      # Tests for analysis functions
├── test_behavpy.py      # Tests for behavpy classes
└── test_integration.py  # Integration tests
```
Key fixtures available in all tests:
- `sample_metadata_csv` - Temporary CSV metadata file
- `sample_ethoscope_data` - Mock ethoscope tracking data
- `mock_sqlite_db` - SQLite database with ethoscope structure
- `sample_behavpy_object` - Pre-created behavpy object
- `linked_metadata_sample` - Linked metadata for load testing
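As an illustration of how such fixtures are built, here is a minimal sketch of a metadata-CSV fixture; the column names and values are assumptions, and the real fixture in `conftest.py` may differ:

```python
# conftest.py (sketch) -- writes a throwaway metadata CSV for each test
import pytest

@pytest.fixture
def sample_metadata_csv(tmp_path):
    """Write a small metadata CSV to a temporary directory and return its path."""
    csv_path = tmp_path / "metadata.csv"
    csv_path.write_text(
        "machine_name,region_id,date\n"   # hypothetical column set
        "ETHOSCOPE_001,1,2023-01-01\n"
        "ETHOSCOPE_001,2,2023-01-01\n"
    )
    return csv_path
```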
Test discovery follows the standard pytest naming conventions:

- Test files: `test_*.py`
- Test classes: `Test*`
- Test functions: `test_*`
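Put together, a minimal module that pytest will discover looks like this (the names are illustrative):

```python
# tests/test_example.py -- found because the filename matches test_*.py
class TestExample:              # collected because the class name matches Test*
    def test_addition(self):    # collected because the method name matches test_*
        assert 1 + 1 == 2
```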
A typical unit test checks one function in isolation:

```python
@pytest.mark.unit
def test_function_success(self, fixture_name):
    """Test successful function execution."""
    result = function_under_test(input_data)

    assert isinstance(result, expected_type)
    assert len(result) > 0
    assert 'expected_column' in result.columns
```

A typical integration test exercises a complete workflow:

```python
@pytest.mark.integration
def test_complete_workflow(self, sample_metadata_csv, tmp_path):
    """Test complete workflow from metadata to analysis."""
    # Setup test data structure
    setup_test_environment(tmp_path)

    # Execute workflow
    linked_data = link_meta_index(sample_metadata_csv, tmp_path)
    loaded_data = load_ethoscope(linked_data)
    analyzed_data = sleep_annotation(loaded_data)

    # Verify results
    assert len(analyzed_data) > 0
    assert 'asleep' in analyzed_data.columns
```

When writing tests, follow these guidelines:

- Test Function Behavior: Test expected behavior, edge cases, and error conditions
- Use Descriptive Names: Test names should clearly describe what's being tested
- Keep Tests Isolated: Each test should be independent and not rely on others
- Use Appropriate Markers: Mark tests as unit/integration/slow as appropriate
- Mock External Dependencies: Use mocks for file I/O, databases, web requests (see the sketch after this list)
- Test Error Conditions: Ensure functions handle invalid inputs gracefully
- Verify State Changes: Test that functions modify data as expected
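Two of these guidelines, mocking external dependencies and testing error conditions, are sketched below. `load_roi_table` is a hypothetical stand-in for any function that reads from an external resource:

```python
import pandas as pd
import pytest
from unittest.mock import MagicMock, patch

def load_roi_table(connection):
    """Hypothetical loader that reads a ROI table from a database connection."""
    return pd.read_sql("SELECT * FROM ROI_1", connection)

@pytest.mark.unit
def test_load_roi_table_with_mocked_db():
    """Mock the database layer so the test never opens a real file."""
    with patch("pandas.read_sql") as mock_read_sql:
        mock_read_sql.return_value = pd.DataFrame({"t": [0, 60], "x": [0.1, 0.2]})
        result = load_roi_table(MagicMock())
        assert list(result.columns) == ["t", "x"]

@pytest.mark.unit
def test_load_roi_table_propagates_errors():
    """A failing backend should surface as an exception, not a silent empty result."""
    with patch("pandas.read_sql", side_effect=RuntimeError("corrupt table")):
        with pytest.raises(RuntimeError):
            load_roi_table(MagicMock())
```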
When testing data loading functions:
```python
def test_load_function_with_mock_db(self, mock_sqlite_db):
    """Test loading with mocked database."""
    file_info = create_file_info(mock_sqlite_db)
    result = read_single_roi(file_info)

    assert result is not None
    assert isinstance(result, pd.DataFrame)
```

When testing analysis functions:
```python
def test_analysis_function_edge_cases(self):
    """Test analysis function with edge cases."""
    # Test with empty data
    empty_result = analysis_function(pd.DataFrame())
    assert len(empty_result) == 0

    # Test with all-same values
    constant_data = create_constant_data()
    constant_result = analysis_function(constant_data)
    assert not constant_result['column'].any()
```

The project includes comprehensive CI/CD pipelines:
The test workflow:

- Triggers: push to main/develop, PRs to main, daily scheduled runs
- Python versions: 3.10, 3.11, 3.12
- Jobs:
  - Test: unit tests, integration tests, linting, formatting
  - Notebook testing: validates the Jupyter notebooks
  - Build: package building and validation
  - Security: Bandit and Safety scans

The release workflow:

- Triggers: GitHub releases and version tags
- Automated PyPI publishing: Test PyPI for tags, production PyPI for releases
Install pre-commit hooks for local development:

```bash
pip install pre-commit
pre-commit install
```

Configured hooks:
- Code formatting (Black)
- Linting (Ruff)
- Type checking (MyPy)
- Unit tests on commit
- Fast tests on push
- Minimum coverage target: 70% (configured in pytest.ini)
- All new functions must have tests
- Critical functions (load, analysis) require comprehensive test coverage
- View coverage reports:

```bash
pytest --cov=ethoscopy --cov-report=html
```
Long-running tests are marked with `@pytest.mark.slow`:

```bash
# Run performance tests
pytest -m slow

# Skip performance tests for faster development
pytest -m "not slow"
```

Useful commands when debugging failing tests:

```bash
# Run a specific test
pytest tests/test_load.py::TestLoadEthoscope::test_load_ethoscope_success

# Debug with pdb
pytest --pdb tests/test_load.py::test_specific_test

# Show print statements (disable output capture)
pytest -s tests/test_load.py
```

Common issues:

- Import Errors: Ensure ethoscopy is installed in development mode (`pip install -e .`)
- Missing Dependencies: Install test dependencies (`pip install -e ".[dev]"`)
- Fixture Errors: Check that fixtures are properly defined in `conftest.py`
- Path Issues: Use the `tmp_path` fixture for temporary files
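For example, `tmp_path` gives each test its own temporary directory as a `pathlib.Path`, which pytest cleans up automatically; a minimal sketch:

```python
import pandas as pd

def test_write_report(tmp_path):
    """tmp_path is unique per test, so files never collide between tests."""
    out_file = tmp_path / "report.csv"
    pd.DataFrame({"asleep": [True, False]}).to_csv(out_file, index=False)
    assert out_file.exists()
```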
When contributing new features:
- Write tests before or alongside implementation
- Ensure new tests follow existing patterns
- Add appropriate markers (`@pytest.mark.unit`, etc.)
- Update this documentation if adding new testing patterns
- Verify all tests pass: `pytest`

When creating test data:
- Use fixtures for reusable test data
- Create minimal test data that exercises the function
- Use `tmp_path` for temporary files
- Mock external dependencies (databases, web services)
- Use random seeds for reproducible test data, as in the sketch below
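A seeded generator makes randomized test data identical on every run; a minimal sketch with hypothetical column names:

```python
import numpy as np
import pandas as pd
import pytest

@pytest.fixture
def reproducible_tracking_data():
    """Small, seeded tracking DataFrame; the columns are illustrative only."""
    rng = np.random.default_rng(seed=42)  # fixed seed -> identical data every run
    n = 100
    return pd.DataFrame({
        "t": np.arange(n) * 60,  # one reading per minute, in seconds
        "x": rng.random(n),
        "y": rng.random(n),
    })
```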
For more information on pytest, see the official documentation at https://docs.pytest.org/.