This repository contains code for running quantile membership inference attacks on fine-tuned LLMs. It is based on the paper "Order of Magnitude Speedups for LLM Membership Inference".
See test.sh for a line-by-line walkthrough that fine-tunes an LLM, runs a quantile membership inference attack against it, and plots the results.
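As a rough illustration of the attack that test.sh exercises end to end, the sketch below shows the quantile-regression idea at the heart of the method. It is a minimal sketch under stated assumptions, not this repository's actual API: the feature construction, the synthetic Gamma-distributed losses, and the use of scikit-learn's GradientBoostingRegressor are all stand-ins. The idea: fit a quantile regressor on per-example losses of known non-members to predict a per-example threshold at a target false-positive rate, then flag a candidate as a member when its fine-tuned-model loss falls below that threshold.

```python
# Illustrative sketch of a quantile membership inference attack.
# All names and data here are hypothetical; see test.sh and the repository
# scripts for the actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical per-example features (e.g. derived from the text) and
# per-example losses of the fine-tuned model on known NON-members.
nonmember_features = rng.normal(size=(1000, 8))
nonmember_losses = rng.gamma(shape=2.0, scale=1.0, size=1000)

# Fit a quantile regressor that predicts, for each example, the loss value
# below which only a fraction `alpha` of non-members fall. This per-example
# threshold is what pins the attack to a target false-positive rate.
alpha = 0.05  # target false-positive rate
quantile_model = GradientBoostingRegressor(loss="quantile", alpha=alpha)
quantile_model.fit(nonmember_features, nonmember_losses)

# Score candidates: predict each example's threshold and flag it as a member
# if its observed fine-tuned-model loss is below that threshold (training
# members tend to have unusually low loss).
candidate_features = rng.normal(size=(100, 8))
candidate_losses = rng.gamma(shape=2.0, scale=1.0, size=100)
thresholds = quantile_model.predict(candidate_features)
predicted_member = candidate_losses < thresholds
print(f"Flagged {predicted_member.sum()} of {len(predicted_member)} candidates as members")
```

The actual scripts compute losses from the fine-tuned LLM rather than synthetic data; test.sh shows how the pieces fit together.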
If you use this code, please cite:

@inproceedings{zhang-etal-2024-order,
    title = "Order of Magnitude Speedups for {LLM} Membership Inference",
    author = "Zhang, Rongting and
      Bertran, Martin Andres and
      Roth, Aaron",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.253",
    pages = "4431--4443"
}
See CONTRIBUTING for more information.
This project is licensed under the Apache-2.0 License.