This code is the official PyTorch implementation of our ICLR'25 paper: CATCH: Channel-Aware Multivariate Time Series Anomaly Detection via Frequency Patching.
If you find this project helpful, please don't forget to give it a ⭐ Star to show your support. Thank you!
🚩 News (2025.6) The evaluation framework TAB used in this study has been accepted by PVLDB 2025. Both the dataset and source code are available here.
🚩 News (2025.1) CATCH has been accepted by ICLR 2025.
CATCH is a framework based on frequency patching that flexibly utilizes channel correlations to reconstruct the full frequency spectrum in a fine-grained way, achieving remarkable detection accuracy. Technically, we propose a Channel Fusion Module (CFM), which features a patch-wise mask generator and a masked-attention mechanism. Driven by a bi-level multi-objective optimization algorithm, the CFM iteratively discovers appropriate patch-wise channel correlations, clustering similar channels in the hidden space while isolating the adverse effects of irrelevant channels, which provides both capacity and robustness for the attention mechanism.
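The sketch below illustrates one way such patch-wise masked channel attention can be realized in PyTorch. It is a minimal, hedged example: the module name `MaskedChannelAttention`, the mask-generator design, and all shapes are assumptions for illustration, not the official CATCH implementation (see the model code linked below for that).

```python
# Illustrative sketch of channel-masked attention in the spirit of the CFM described above.
# Names, shapes, and the mask design are assumptions, not the official CATCH code.
import torch
import torch.nn as nn


class MaskedChannelAttention(nn.Module):
    """Attend across channels within each frequency patch, gated by a learned soft mask."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Patch-wise mask generator: scores how related each pair of channels is.
        self.mask_generator = nn.Sequential(
            nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * n_patches, n_channels, d_model) -- one token per channel per patch
        q = self.mask_generator(x)
        # Soft patch-wise channel-correlation mask in (0, 1); values near 0 correspond
        # to weakly related channel pairs.
        soft_mask = torch.sigmoid(q @ x.transpose(1, 2) / x.size(-1) ** 0.5)
        # Convert to an additive attention mask: log of the soft mask penalizes
        # attention between low-correlation channels.
        attn_mask = torch.log(soft_mask.clamp(min=1e-6))
        # nn.MultiheadAttention expects a (batch * n_heads, L, L) float mask.
        attn_mask = attn_mask.repeat_interleave(self.attn.num_heads, dim=0)
        out, _ = self.attn(x, x, x, attn_mask=attn_mask)
        return out


# Example usage: 32 windows x 8 frequency patches, 25 channels, 64-dim patch embeddings.
cfm = MaskedChannelAttention(d_model=64)
tokens = torch.randn(32 * 8, 25, 64)
fused = cfm(tokens)  # (256, 25, 64)
```

A soft (log-sigmoid) mask is used here only to keep the sketch differentiable end to end; the paper's bi-level optimization of the mask generator is omitted.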
Given a Python environment (note: this project is fully tested under Python 3.8), install the dependencies with the following command:
pip install -r requirements.txt
Prepare Data. You can obtain the well pre-processed datasets from OneDrive or BaiduCloud. (This may take some time, please wait patiently.) Then place the downloaded data under the folder ./dataset.
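After downloading, the contents of ./dataset are expected to look roughly like the sketch below; the folder and file names are illustrative (MSL is one of the datasets used in the example scripts), and the exact layout follows the downloaded archive.

```
dataset/
├── MSL/
│   └── ...   # pre-processed files as provided in the archive
└── ...       # other downloaded datasets
```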
- To see the model structure of CATCH, click here.
- We provide the experiment scripts for CATCH and other baselines under the folder ./scripts/multivariate_detection. For example, you can reproduce an experiment result as follows:
sh ./scripts/multivariate_detection/detect_label/MSL_script/CATCH.sh
sh ./scripts/multivariate_detection/detect_score/MSL_script/CATCH.sh
Extensive experiments on 10 real-world datasets and 12 synthetic datasets demonstrate that CATCH achieves state-of-the-art performance. We show the main results on all 10 real-world datasets and report the mean results over the 6 types of synthetic datasets:
If you want to test all baseline models, please refer to the Time Series Anomaly Detection Benchmark TAB:
If you find this repo useful, please cite our paper.
@inproceedings{wu2024catch,
title = {{CATCH}: Channel-Aware Multivariate Time Series Anomaly Detection via Frequency Patching},
author = {Wu, Xingjian and Qiu, Xiangfei and Li, Zhengyu and Wang, Yihang and Hu, Jilin and Guo, Chenjuan and Xiong, Hui and Yang, Bin},
booktitle = {ICLR},
year = {2025}
}
@inproceedings{qiu2025tab,
title = {{TAB}: Unified Benchmarking of Time Series Anomaly Detection Methods},
author = {Xiangfei Qiu and Zhe Li and Wanghui Qiu and Shiyan Hu and Lekui Zhou and Xingjian Wu and Zhengyu Li and Chenjuan Guo and Aoying Zhou and Zhenli Sheng and Jilin Hu and Christian S. Jensen and Bin Yang},
booktitle = {Proc. {VLDB} Endow.},
year = {2025}
}
If you have any questions or suggestions, feel free to contact:
Or describe your question in an Issue.