```shell
git clone https://github.com/lishenghui/blades
cd blades
pip install -v -e .
# "-v": verbose, i.e., more output
# "-e": install the project in editable mode, so any local
#       modifications to the code take effect without reinstallation
```
```shell
cd blades/blades
python train.py file ./tuned_examples/fedsgd_cnn_fashion_mnist.yaml
```
Blades internally calls `ray.tune`; therefore, the experimental results are output to its default directory: `~/ray_results`.
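For instance, you can list the experiments recorded so far with a few lines of standard-library Python (a sketch; it only assumes `ray.tune`'s default output directory and falls back to an empty list if it does not exist):

```python
from pathlib import Path

# ray.tune writes one subdirectory per experiment under ~/ray_results
# by default; collect their names if the directory exists.
results_root = Path.home() / "ray_results"
if results_root.exists():
    experiments = sorted(p.name for p in results_root.iterdir() if p.is_dir())
else:
    experiments = []
print(experiments)
```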
To run Blades on a cluster, you only need to deploy a Ray cluster
according to the official guide.
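As a sketch, a minimal config for Ray's cluster launcher might look like the following (the field names follow Ray's autoscaler format; the provider details are placeholders you would adapt to your own environment):

```yaml
# cluster.yaml -- minimal Ray cluster-launcher sketch (placeholder values)
cluster_name: blades-cluster
max_workers: 2
provider:
  type: aws          # any provider supported by Ray's autoscaler
  region: us-west-2
auth:
  ssh_user: ubuntu
```

After bringing the cluster up with `ray up cluster.yaml`, jobs launched from the head node can use the cluster's resources.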
The following strategies are currently implemented:
Strategy | Description | Source |
---|---|---|
Noise | Adds random noise to the updates. | Source |
Labelflipping | Fang et al., *Local Model Poisoning Attacks to Byzantine-Robust Federated Learning*, USENIX Security '20 | Source |
Signflipping | Li et al., *RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets*, AAAI '19 | Source |
ALIE | Baruch et al., *A Little Is Enough: Circumventing Defenses for Distributed Learning*, NeurIPS '19 | Source |
IPM | Xie et al., *Fall of Empires: Breaking Byzantine-Tolerant SGD by Inner Product Manipulation*, UAI '20 | Source |
DistanceMaximization | Shejwalkar et al., *Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning*, NDSS '21 | Source |
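To give a flavor of what these strategies do, here is a minimal, dependency-free sketch of the core idea behind the IPM attack: malicious clients submit the negated, scaled coordinate-wise mean of the benign updates so that the aggregate has a negative inner product with the true gradient. This is only an illustration, not Blades' actual implementation.

```python
def ipm_attack(benign_updates, epsilon=0.5):
    """Sketch of Inner Product Manipulation (IPM): return a malicious
    update equal to -epsilon times the coordinate-wise mean of the
    benign updates, steering the aggregate against the true gradient."""
    dim = len(benign_updates[0])
    mean = [sum(u[i] for u in benign_updates) / len(benign_updates)
            for i in range(dim)]
    return [-epsilon * m for m in mean]

# Two benign clients' updates; the attacker submits the scaled negation
# of their mean.
benign = [[1.0, 2.0], [3.0, 4.0]]
print(ipm_attack(benign))
```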
Please cite our paper (and the respective papers of the methods used) if you use this code in your own work:
```bibtex
@inproceedings{li2024blades,
  title={Blades: A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning},
  author={Li, Shenghui and Ngai, Edith and Ye, Fanghua and Ju, Li and Zhang, Tianru and Voigt, Thiemo},
  booktitle={2024 IEEE/ACM Ninth International Conference on Internet-of-Things Design and Implementation (IoTDI)},
  year={2024}
}
```