
LLM QMIA

This repository contains code for running quantile membership inference attacks (QMIA) on fine-tuned LLMs. It is based on the paper Order of Magnitude Speedups for LLM Membership Inference.

See test.sh for a line-by-line walkthrough that fine-tunes an LLM, runs a quantile membership inference attack against it, and plots the results.
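At its core, a quantile membership inference attack trains a quantile regression model on scores from known non-members, so each candidate sample gets its own per-example decision threshold instead of a single global one. The following is a minimal PyTorch sketch of that idea, not the repository's actual pipeline: the toy features, scores, network shape, and the 0.95 target quantile are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def pinball_loss(pred, target, q):
    # Pinball (quantile) loss: minimized when pred equals the
    # q-th conditional quantile of target.
    diff = target - pred
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))

# Toy stand-ins for (sample features, target-model score) pairs
# collected on known non-members. Higher score = more member-like.
torch.manual_seed(0)
X_nonmember = torch.randn(1000, 16)
s_nonmember = X_nonmember.sum(dim=1) + torch.randn(1000)

q = 0.95  # per-example threshold targeting a ~5% false-positive rate
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(500):
    opt.zero_grad()
    pred = model(X_nonmember).squeeze(-1)
    loss = pinball_loss(pred, s_nonmember, q)
    loss.backward()
    opt.step()

# Attack step: flag a candidate as a member if its score from the
# fine-tuned target model exceeds its predicted quantile threshold.
x_candidate = torch.randn(1, 16)
s_candidate = torch.tensor([4.2])  # hypothetical target-model score
threshold = model(x_candidate).squeeze(-1)
is_member = (s_candidate > threshold).item()
print(f"threshold={threshold.item():.3f}, predicted member={bool(is_member)}")
```

Because one quantile regression model supplies thresholds for every candidate, the attack avoids training per-sample shadow models, which is where the speedups referenced in the paper's title come from.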

Citing Our Work

@inproceedings{zhang-etal-2024-order,
    title = "Order of Magnitude Speedups for {LLM} Membership Inference",
    author = "Zhang, Rongting  and
      Bertran, Martin Andres  and
      Roth, Aaron",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.253",
    pages = "4431--4443"
}

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.
