Chair-for-Security-Engineering/Revisiting-Prime-Prune-Probe
Artifact for the paper "Revisiting Prime+Prune+Probe: Pitfalls and Remedies"
Requirements:
Ubuntu 22.04
Python 3
First Step:
Make sure to initialize and update the git submodules by running `git submodule init && git submodule update` before running the install script or building the Docker container.
Alternatively, you can clone the repository with the `--recursive` flag.
Docker Setup:
1. Have Docker installed
2. Run `docker build . -t acsac_artifact` to build the image
3. Run `docker run -it acsac_artifact` to start a container with our artifact and get a bash shell
Local Setup:
Simply execute `install.sh`. You may need to make it executable with `chmod +x install.sh` to launch it directly, or run it with `bash install.sh`.
Execution Guide:
In the artifact directory, there are two major Python 3 scripts:
1. eval.py
2. e2e_eval.py
The first generates the results for the confirmation in Section 3.3 and the evaluation in Section 5, i.e., victim detection rates for different caches using different remedies.
The second generates the results in Section 5.2 and Table 1.
eval.py:
Launch with `python3 eval.py`. All settings are configured inside the script by commenting values in or out in the "Parameter Sets" section. Results are printed and also stored as CSV files in the log/ directory.
The csv structure is as follows:
cache_size, ways, cache, remedy, noise level, number of rotating eviction sets, catches (iteration), catches (total), catches (victim), self evictions, misses (iteration), noise accesses, avg. eviction set size, cachefx return code, trace (hits per iteration)
The plot for Section 3.3 was generated by dividing catches (victim) by 10 (out of 1000 total iterations) to obtain the percentage of successful victim detections; no remedies were enabled.
The plots for Section 5 were generated the same way, dividing catches (victim) by 10 to get the percentage over 1000 iterations, with different remedies enabled.
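As a hedged illustration of processing these logs, a row can be mapped to the documented column names and the detection rate computed as described above. The sample line and its values are fabricated for demonstration; the assumption here is that the log files are headerless CSVs with exactly this column order.

```python
import csv
from io import StringIO

# Column names taken from the CSV structure documented above (order assumed).
COLUMNS = [
    "cache_size", "ways", "cache", "remedy", "noise level",
    "number of rotating eviction sets", "catches (iteration)",
    "catches (total)", "catches (victim)", "self evictions",
    "misses (iteration)", "noise accesses", "avg. eviction set size",
    "cachefx return code", "trace (hits per iteration)",
]

def victim_detection_rate(row):
    """Percentage of successful victim detections over 1000 iterations."""
    return int(row["catches (victim)"]) / 10

# Example with a fabricated log line (values are illustrative only):
sample = "4096,8,set-associative,none,0.1,1,900,950,850,5,10,100,12,0,..."
row = dict(zip(COLUMNS, next(csv.reader(StringIO(sample)))))
print(victim_detection_rate(row))  # 85.0
```

The same helper can be applied to every row of a log file to reproduce the percentages used for the plots.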
e2e_eval.py:
Launch with `python3 e2e_eval.py`. Use the "-h" or "--help" flag to list all available command-line parameters:
"-n" controls the noise levels to test, e.g., `python3 e2e_eval.py -n 0.1 0.2` tests the AES attack with 10% and 20% noise.
"-f" enables the additional flush of the eviciton set prior to accessing noise (Flush+Purge).
"-r" controls the number of repeated exeperiments for each noise level.
"-i" prioritize invalid cache lines
The output should be sufficiently detailed to be self-explanatory.
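As a sketch of how the options above combine into a single run (the specific noise levels and repetition count are illustrative, not recommendations):

```python
import shlex

# Illustrative invocation: test the AES attack at 0% and 10% noise,
# repeat each noise level 5 times, and enable the additional
# eviction-set flush (Flush+Purge).
cmd = shlex.split("python3 e2e_eval.py -n 0.0 0.1 -r 5 -f")
print(" ".join(cmd))
```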