FedPromo

FedPromo enables efficient adaptation of large foundation models to new domains via federated learning of lightweight proxy models on edge devices, transferring their knowledge back to the foundation model without accessing user data.

Federated Learning (FL) is an established paradigm for training deep learning models on decentralized data. However, as the size of the models grows, conventional FL approaches often require significant computational resources on client devices, which may not be feasible. We introduce FedPromo, a novel framework that enables efficient adaptation of large-scale foundation models stored on a central server to new domains encountered only by remote clients. Instead of directly training the large model on client devices, FedPromo optimizes lightweight proxy models via FL, significantly reducing computational overhead while maintaining privacy. Our method follows a two-stage process: first, server-side knowledge distillation aligns the representations of a large-scale foundation model (e.g., a transformer) with those of a compact counterpart (e.g., a CNN). Then, the compact model encoder is deployed to client devices, where trainable classifiers are learned locally. These classifiers are subsequently aggregated and seamlessly transferred back to the foundation model, facilitating personalized adaptation without requiring direct access to user data. Through novel regularization strategies, our framework enables decentralized multi-domain learning, balancing performance, privacy, and resource efficiency. Extensive experiments on five image classification benchmarks demonstrate that FedPromo outperforms existing methods while assuming limited-resource clients.
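
The official implementation has not been released yet (see the notice below), but the two-stage recipe from the abstract can be outlined in code. The following is a minimal PyTorch sketch under stated assumptions: all names (`foundation_enc`, `compact_enc`, `projector`), the plain MSE feature-alignment loss, and the FedAvg-style head averaging are hypothetical illustrations rather than the actual FedPromo API, and the paper's novel regularization strategies are omitted.

```python
# Minimal sketch of the two-stage FedPromo recipe described above.
# All module and function names are hypothetical; the official code is
# not yet released, and the paper's regularization terms are omitted.
import copy

import torch
import torch.nn.functional as F


# --- Stage 1 (server side): knowledge distillation that aligns the compact
# encoder's feature space with the frozen foundation encoder's. ---
def distill_step(foundation_enc, compact_enc, projector, images, optimizer):
    foundation_enc.eval()
    with torch.no_grad():
        teacher_feats = foundation_enc(images)        # frozen teacher features
    student_feats = projector(compact_enc(images))    # map student into teacher space
    loss = F.mse_loss(student_feats, teacher_feats)   # feature-alignment objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# --- Stage 2 (client side): each client trains only a lightweight classifier
# head on top of the frozen, distilled compact encoder. ---
def local_training(compact_enc, classifier, loader, epochs=1, lr=1e-3):
    compact_enc.eval()                                # encoder stays frozen on-device
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = compact_enc(x)
            loss = F.cross_entropy(classifier(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return classifier.state_dict()


# --- Server: FedAvg-style aggregation of the client classifier heads. ---
def aggregate(client_states, weights):
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(w * s[key] for w, s in zip(weights, client_states))
    return avg


# Usage (illustrative): after Stage 1, run rounds of local_training on each
# client, aggregate the returned state dicts, then attach the averaged head
# to the *foundation* encoder for server-side inference.
```

Because Stage 1 aligns the two feature spaces, a classifier head trained on the compact encoder's features stays (approximately) compatible with the foundation encoder, which is what allows the aggregated head to transfer back to the large model without touching user data.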

πŸ“Š Graphical Abstract

FedPromo Graphical Abstract

⚠️ Repository Notice

This repository is associated with our academic paper currently under peer review. It is being made temporarily available in connection with the corresponding arXiv preprint to allow readers and reviewers to reference the project.

At this stage, the repository does not yet contain the full source code. The codebase is undergoing final preparation and will be released shortly after the manuscript is accepted at the target venue. Our goal is to ensure full reproducibility and ease of use for the community upon release.

πŸ“Œ Stay Informed

If you are interested in FedPromo and would like to be notified when the code is released:

  • Watch this repository using the GitHub "Watch" feature
  • Star the repository to bookmark it for later

We appreciate your interest and your patience.

πŸ“„ Citation

If you wish to cite this work, please refer to the following BibTeX entry (temporary, subject to update upon publication):

@misc{caligiuri2025fedpromofederatedlightweightproxy,
      title={FedPromo: Federated Lightweight Proxy Models at the Edge Bring New Domains to Foundation Models}, 
      author={Matteo Caligiuri and Francesco Barbato and Donald Shenaj and Umberto Michieli and Pietro Zanuttigh},
      year={2025},
      eprint={2508.03356},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.03356},
}

Built with ❀️ by the MEDIALab Research Group
