feat(imaging): SparsityAssessor for SNR-based severity classification (#183)#241

Merged
KedoKudo merged 4 commits into feature/172-2d-imaging from feature/183-sparsity-assessor
Feb 21, 2026
Conversation

KedoKudo (Collaborator) commented Feb 20, 2026

Summary

  • Adds SparsityAssessor and SparsityMetrics in src/pleiades/imaging/assessor.py
  • Noise estimated via MAD of the spatial-mean spectrum (normalised to Gaussian σ)
  • L0–L4 severity levels based on SNR and zero-fraction thresholds
  • Recommendations per level: direct_fitting, bin_size=2, bin_size=4, physics_recovery
  • 36 unit tests covering all levels, boundary conditions, noise estimation, edge cases
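The noise-estimation step described in the bullets above can be sketched roughly like this; the `(n_energy, height, width)` cube shape and the `estimate_noise` name are illustrative assumptions, not the actual `SparsityAssessor` API:

```python
import numpy as np


def estimate_noise(cube):
    """Gaussian-sigma-equivalent noise from the MAD of the spatial-mean spectrum.

    `cube` is assumed to be a (n_energy, height, width) transmission array;
    the real assessor's input type may differ.
    """
    # Collapse the spatial axes: one mean transmission value per energy bin.
    spatial_mean = cube.mean(axis=(1, 2), dtype=np.float64)
    # Median absolute deviation of that 1-D spectrum...
    mad = float(np.median(np.abs(spatial_mean - np.median(spatial_mean))))
    # ...normalised so that, for Gaussian noise, the estimate equals sigma.
    return mad / 0.6745
```

For a constant cube the MAD (and hence the estimate) is zero, which is the zero-noise edge case discussed in the follow-up commits below.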

Test plan

  • pixi run pytest tests/unit/pleiades/imaging/test_assessor.py -v → 36 passed
  • No regressions in existing tests

🤖 Generated with Claude Code

…tion (#183)

Adds SparsityAssessor and SparsityMetrics in src/pleiades/imaging/assessor.py.
Estimates noise via MAD of the spatial-mean spectrum and classifies datasets
into L0-L4 severity levels based on SNR and zero-fraction thresholds. Each
level carries processing recommendations (direct_fitting, bin_size=2/4,
physics_recovery). 36 unit tests cover all severity levels, boundary
conditions, noise estimation, and edge cases.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
KedoKudo force-pushed the feature/183-sparsity-assessor branch from 333fc2e to 4b0279e on February 21, 2026 at 01:59
KedoKudo and others added 3 commits February 20, 2026 21:06
…ve full-cube copy

- Compute min_transmission and resonance_depth from the spatial-mean
  spectrum (same summary used for MAD noise estimation) instead of the
  global voxel minimum.  This prevents a single outlier pixel from
  inflating SNR and shifting severity classification.
- Remove full-cube astype(np.float64) copy; use dtype= parameter in
  reductions to promote precision without duplicating the array.
- Export SparsityAssessor and SparsityMetrics from imaging __init__.py.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
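The memory and robustness changes in this commit can be sketched as follows; the array shape and values are illustrative, not taken from the project:

```python
import numpy as np

# Illustrative float32 transmission cube (n_energy, height, width).
cube = (np.random.default_rng(0).random((100, 64, 64)) * 0.5 + 0.5).astype(np.float32)

# Before: a full float64 copy of the cube just to reduce in higher precision.
# spatial_mean = cube.astype(np.float64).mean(axis=(1, 2))

# After: let the reduction accumulate in float64 without duplicating the array.
spatial_mean = cube.mean(axis=(1, 2), dtype=np.float64)

# Summary statistics come from the 1-D spatial-mean spectrum, not the raw
# voxel minimum, so a single outlier pixel cannot inflate the SNR.
min_transmission = float(spatial_mean.min())
resonance_depth = 1.0 - min_transmission
```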
Transmission overshoot above 1.0 (valid baseline offset) produced
negative resonance_depth, yielding either negative SNR (→ L4) or
inf (→ L0) depending on noise — both incorrect.  Floor depth at 0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When resonance_depth is clamped to 0 (no dip below 1.0), the zero-noise
branch was still returning SNR=inf → L0.  Now check resonance_depth
before noise: no signal means SNR=0 regardless of MAD.

Adds test for varying-overshoot path (nonzero MAD, zero depth).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
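Taken together, the two fixes above amount to an ordering of edge cases in the SNR computation; a minimal sketch (the `compute_snr` name is assumed, not the module's actual function):

```python
def compute_snr(min_transmission, noise_estimate):
    """SNR with the edge cases ordered as described in the commits above."""
    # Floor the depth at 0: baseline overshoot above 1.0 is a valid offset,
    # not negative signal.
    resonance_depth = max(0.0, 1.0 - min_transmission)
    # Check the signal before the noise: zero depth means no resonance dip,
    # so SNR is 0 even when the MAD-based noise estimate is also 0.
    if resonance_depth == 0.0:
        return 0.0
    if noise_estimate == 0.0:
        return float("inf")  # genuine signal with zero estimated noise
    return resonance_depth / noise_estimate
```

With the branches in the opposite order, a clamped depth of 0 combined with zero noise would still return inf and misclassify the dataset as L0.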
Copilot AI (Contributor) left a comment

Pull request overview

This PR adds a SparsityAssessor component for analyzing hyperspectral neutron imaging data quality through SNR-based severity classification. The assessor computes noise via the Median Absolute Deviation (MAD) of the spatial-mean spectrum and classifies datasets into five severity levels (L0-L4) based on SNR and zero-fraction thresholds, providing processing recommendations for each level.

Changes:

  • Introduces SparsityAssessor class and SparsityMetrics dataclass for quantitative sparsity characterization
  • Implements MAD-based noise estimation normalized to Gaussian standard deviation (MAD/0.6745)
  • Defines five severity levels (L0: Clean through L4: Extreme sparse) with corresponding processing recommendations
  • Adds comprehensive test suite with 36 unit tests covering all severity levels, boundary conditions, noise estimation, and edge cases

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

Reviewed files:

  • src/pleiades/imaging/assessor.py: New module implementing SparsityAssessor, SparsityMetrics, and severity classification logic with MAD-based noise estimation
  • tests/unit/pleiades/imaging/test_assessor.py: Comprehensive test suite covering all severity levels, metrics validation, boundary conditions, and edge cases
  • src/pleiades/imaging/__init__.py: Exports SparsityAssessor and SparsityMetrics to the public API
Comments suppressed due to low confidence (3)

src/pleiades/imaging/assessor.py:130

  • The MAD noise estimation could potentially fail or produce misleading results when the spatial-mean spectrum has very few energy bins. With a single energy bin (tested in line 293-299), MAD will be zero since there's only one value. This is handled correctly (SNR becomes inf at line 134), but consider documenting this behavior or adding a warning for datasets with very few energy bins (e.g., n_energy < 10), as MAD-based noise estimation becomes unreliable with sparse spectral sampling.
        mad = float(np.median(np.abs(spatial_mean - np.median(spatial_mean))))
        noise_estimate = mad / 0.6745  # normalise MAD → Gaussian std equivalent
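The reviewer's suggestion could look roughly like this; the `min_bins` threshold of 10 comes from the comment above and is an assumption, not project code:

```python
import warnings

import numpy as np


def mad_noise(spatial_mean, min_bins=10):
    """MAD-based noise estimate, warning when spectral sampling is too sparse."""
    if spatial_mean.size < min_bins:
        warnings.warn(
            f"Only {spatial_mean.size} energy bins; MAD noise estimate may be unreliable",
            stacklevel=2,
        )
    mad = float(np.median(np.abs(spatial_mean - np.median(spatial_mean))))
    return mad / 0.6745
```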

tests/unit/pleiades/imaging/test_assessor.py:138

  • While the _classify_severity function is thoroughly tested for all levels L0-L4, only L0 has an explicit integration test that verifies the level is reachable through the full assess() method with real data (test_l0_clean_high_snr). Consider adding integration tests for L1, L2, and L3 that create synthetic datasets which naturally result in those levels through the full assessment pipeline. This would provide additional confidence that the end-to-end flow works correctly for all severity levels.
    def test_l0_clean_high_snr(self):
        # Near-perfect data: background close to 1.0, deep clean resonance
        # → very high SNR, near-zero zero_fraction → L0
        assessor = SparsityAssessor()
        data = _clean_data(n_e=200, h=16, w=16, t_min=0.01, t_bg=0.999)
        hs = _make_hyperspectral(data)
        m = assessor.assess(hs)
        assert m.severity_level == 0
        assert "L0" in m.severity_label
        assert "direct_fitting" in m.recommendations

src/pleiades/imaging/assessor.py:105

  • The documentation table and description are misleading. The comment "Severity classification rules (evaluated in order; first match wins)" is incorrect - the implementation actually evaluates both SNR and zero-fraction axes independently and takes the maximum level (line 192: return max(snr_level, zf_level)). The table format also suggests that both SNR and zero-fraction conditions must be satisfied together for a given level, but this is not how it works. For example, data with SNR=50 and zero_fraction=0.30 would be classified as L3 (from zero_fraction), not L0 (from SNR). Consider restructuring the documentation to show two separate tables (one for SNR thresholds, one for zero-fraction thresholds) and clearly state that the final level is the maximum of the two independent classifications. This would match the actual implementation and the clarifying text on lines 107-108.
        Severity classification rules (evaluated in order; first match wins):

        ====== ===== ============== =====================================
        Level  SNR   Zero fraction  Label
        ====== ===== ============== =====================================
        L0     >10   <1%            Clean
        L1     5–10  <5%            Mild noise
        L2     2–5   5–15%          Moderate sparse
        L3     1–2   15–40%         Heavy sparse
        L4     <1    >40%           Extreme sparse
        ====== ===== ============== =====================================
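Restructured the way the review suggests, the two independent axes might be sketched like this; the thresholds are transcribed from the table above, the helper name and exact boundary handling are assumptions:

```python
def classify_severity(snr, zero_fraction):
    """Each axis classifies independently; the final level is the worse of the two."""
    # SNR axis (higher SNR -> lower severity).
    if snr > 10:
        snr_level = 0
    elif snr > 5:
        snr_level = 1
    elif snr > 2:
        snr_level = 2
    elif snr > 1:
        snr_level = 3
    else:
        snr_level = 4
    # Zero-fraction axis (more empty bins -> higher severity).
    if zero_fraction < 0.01:
        zf_level = 0
    elif zero_fraction < 0.05:
        zf_level = 1
    elif zero_fraction < 0.15:
        zf_level = 2
    elif zero_fraction < 0.40:
        zf_level = 3
    else:
        zf_level = 4
    return max(snr_level, zf_level)
```

With this structure the reviewer's example behaves as described: SNR=50 with zero_fraction=0.30 lands on L3 via the zero-fraction axis, not L0 via SNR.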


KedoKudo merged commit 83da31f into feature/172-2d-imaging on Feb 21, 2026
7 checks passed
KedoKudo deleted the feature/183-sparsity-assessor branch on February 21, 2026 at 02:30