Repo for CDM input data loading and wrangling
The data loader utils package uses uv for python environment and package management. See the installation instructions to set up uv on your system.
The CDM data loaders run on python 3.13 and above.
Most python code can be run using the command
> uv run <path_to_file.py>
This will automatically launch a virtual environment and install all required dependencies.
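For example, assuming a hypothetical loader script at src/loaders/example_loader.py:
> uv run src/loaders/example_loader.py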
To manually set up the virtual environment and install dependencies (including python), run
> uv sync
To activate a virtual environment with these dependencies installed, run
> uv venv
# you will now be prompted to activate the virtual environment
> source .venv/bin/activate
IDEs such as VSCode should pick up the new environment and offer it for executing python code.
The repo provides a Docker container that can be used to run several import pipelines or to run unit tests for the repo. The entrypoint script parses the container run arguments and launches the appropriate functions.
Current endpoints include:
- test: run the unit tests that do not require external dependencies like Spark
- uniprot: run the UniProtKB (UniProt protein database) import pipeline; see the UniProtKB pipeline for arguments
- uniref: run the UniRef import pipeline; see the UniRef pipeline for arguments
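For example, to run the test endpoint (the image name here is hypothetical; substitute the tag you built or pulled for this repo's container):
> docker run --rm cdm-data-loaders test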
Some parts of this codebase rely on having a Spark instance available. Spark dependencies are pulled in by the berdl-notebook-utils package from BERDataLakehouse/spark_notebook, and the Docker container generated by the same repo should be used for development and testing to mimic the container where code will be run.
Pull the docker image:
> docker pull ghcr.io/berdatalakehouse/spark_notebook:main
Mount the current directory at /tmp/cdm and run the tests:
> docker run --rm -e NB_USER=runner -v .:/tmp/cdm ghcr.io/berdatalakehouse/spark_notebook:main /bin/bash /tmp/cdm/scripts/run_tests.sh
Run the container interactively as the user runner; the current directory is mounted at /tmp/cdm:
> docker run --rm -e NB_USER=runner -it -v .:/tmp/cdm ghcr.io/berdatalakehouse/spark_notebook:main
This will launch a bash shell; the contents of the cdm-data-loaders directory are mounted at /tmp/cdm.
Run the container and sleep:
> docker run --rm -e NB_USER=runner -it -v .:/tmp/cdm ghcr.io/berdatalakehouse/spark_notebook:main sleep 100000000
The sleep command will run the container for long enough that you can then connect to it via Docker Desktop or the VSCode Containers extension.
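Alternatively, once the container is sleeping you can attach a shell with the standard Docker CLI:
> docker ps  # note the container ID
> docker exec -it <container_id> /bin/bash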
See the BERDataLakehouse/spark_notebook repo for more information on the container and for a full docker-compose setup to mimic the BER Data Lakehouse container infrastructure.
Tests are categorised using pytest markers to allow developers to execute some or all of the tests. See pyproject.toml for the markers used.
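As a sketch of how a marker is applied in a test module (the test body and spark fixture here are illustrative; requires_spark is the marker used in the commands below):

import pytest

@pytest.mark.requires_spark
def test_spark_import_pipeline(spark):
    # exercised only when a Spark instance is available
    ...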
To run all tests (requires a running Spark instance), execute the command:
> uv run pytest
To run only tests that do not require Spark, run
> uv run pytest -m "not requires_spark"
To generate coverage for the tests, run
> uv run pytest --cov=src --cov-report=xml
The standard python coverage package is used, and coverage can be generated as HTML or other formats by changing the parameters.
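For example, to generate a browsable HTML report instead (written to htmlcov/ by default):
> uv run pytest --cov=src --cov-report=html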
The genome loader can be used to load and integrate data from related GFF and FASTA files. Currently, the loader requires a GFF file and two FASTA files (one for amino acid sequences, one for nucleic acid sequences) for each genome. The list of files to be processed should be specified in the genome paths file, which has the following format:
{
"FW305-3-2-15-C-TSA1.1": {
"fna": "tests/data/FW305-3-2-15-C-TSA1/FW305-3-2-15-C-TSA1_scaffolds.fna",
"gff": "tests/data/FW305-3-2-15-C-TSA1/FW305-3-2-15-C-TSA1_genes.gff",
"protein": "tests/data/FW305-3-2-15-C-TSA1/FW305-3-2-15-C-TSA1_genes.faa"
},
"FW305-C-112.1": {
"fna": "tests/data/FW305-C-112.1/FW305-C-112.1_scaffolds.fna",
"gff": "tests/data/FW305-C-112.1/FW305-C-112.1_genes.gff",
"protein": "tests/data/FW305-C-112.1/FW305-C-112.1_genes.faa"
}
}
run_tools.sh runs the stats script from bbmap and checkm2 on files with the suffix "fna". These tools can be installed using conda:
conda env create -f env.yml
conda activate genome_loader_env
# download the checkm2 database
checkm2 database --download
Run the stats and checkm2 tools with the following command:
bash scripts/run_tools.sh path/to/genome_paths_file.json output_dir
where path/to/genome_paths_file.json specifies the path to the genome paths file (format specified above) and output_dir is the directory for the results.
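Since the genome paths file is plain JSON, here is a minimal sketch (standard library only; the file name is illustrative) that loads it and checks that all referenced files exist:

import json
from pathlib import Path

# load the genome paths file described above
with open("genome_paths.json") as f:
    genomes = json.load(f)

# each genome entry must provide fna, gff, and protein file paths
for genome_id, paths in genomes.items():
    for key in ("fna", "gff", "protein"):
        if not Path(paths[key]).is_file():
            raise FileNotFoundError(f"{genome_id}: missing {key} file {paths[key]}")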