
Don't remove docker layers (set forcerm = false) #9718

Open
joshka opened this issue Mar 23, 2025 · 3 comments · May be fixed by #9727

Comments

@joshka
Contributor

joshka commented Mar 23, 2025

Currently the test runner intentionally and unconditionally removes each intermediate layer when building a Dockerfile.

This prevents using those layers to cache dependency build output. For example, in a Rust Docker container you can do something like:

# Build only the dependencies first, against a stub main.rs, so this layer
# stays cached until Cargo.toml or Cargo.lock change:
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs && cargo build --release && rm -rf src

# Then copy the real sources; source edits invalidate only these later layers:
COPY . .
RUN RUSTFLAGS="-C target-cpu=native" cargo install --path . --locked

This builds the dependencies only once and then caches that layer for all future builds, which allows faster experimentation (cutting anywhere from 30 seconds to several minutes out of each build-test cycle, depending on the dependency tree and CPU speed). Cache busting is fairly simple (update the toml/lock file, or more generally just touch them so they're detected as newer), and the same technique applies to other languages, not just Rust.
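For context, here is a hedged sketch of the knob under discussion, assuming the toolset drives builds through docker-py; `rm` and `forcerm` are documented docker-py build options, while the path and tag are placeholders:

import docker

client = docker.from_env()

# rm=True removes intermediate containers only after a successful build,
# leaving the committed layers available as cache for the next build.
# forcerm=True always removes intermediate containers, even after failed
# builds; per this issue, that is what the test runner currently sets.
image, logs = client.images.build(
    path=".",                  # placeholder build context
    tag="tfb/rust-framework",  # placeholder tag
    rm=True,
    forcerm=False,
)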

There was an unanswered question about removing this back in 2018: #3574 (comment) by @msmith-techempower. I'm curious whether there are any known blockers to doing so?

@msmith-techempower
Member

The issue is that we were constantly running out of space mid-run on Citrine because of all the Docker images lying around. We eventually settled on "let's just forcerm since it's simple and it'll get the job done." It turns out that wasn't even true, and we still run out of space from time to time, so the continuous-run scripts now do a system prune before each new run to ensure space gets reclaimed.
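For reference, that pre-run cleanup amounts to roughly the following, sketched here with docker-py; the actual scripts may simply shell out to `docker system prune`:

import docker

client = docker.from_env()

# Reclaim disk space before a new run, approximating `docker system prune`:
client.containers.prune()  # remove stopped containers
client.images.prune()      # remove dangling images
client.networks.prune()    # remove unused networks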

There is probably something we can tweak to try and reuse some of those layers, but stepping back a bit: is this an issue when running locally, or in CI/CD?

If this is just an issue for contributors trying to get their framework running and passing locally, we should build in a flag to control forcerm and default it to false for contributors, and we'll just set it to true on Citrine.

@c-cesar
Contributor

c-cesar commented Mar 24, 2025

Maybe the best solution is simply to tell devs to tweak this option, or to add a flag to the tfb command to disable it. That way we can have fast local testing and not mess with the Citrine setup.
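A minimal sketch of what such a flag could look like, assuming the toolset parses arguments with Python's argparse and builds through docker-py; the flag name follows this thread and the linked PR, but the default shown and all other names are illustrative:

import argparse
import docker

# Hypothetical wiring for the proposed option; not the actual toolset code.
parser = argparse.ArgumentParser(prog="tfb")
parser.add_argument(
    "--force-rm",
    action=argparse.BooleanOptionalAction,  # Python 3.9+; also adds --no-force-rm
    default=False,  # proposal: contributors keep cached layers by default
    help="Always remove intermediate containers when building test images",
)
args = parser.parse_args()

client = docker.from_env()
client.images.build(
    path=".",                    # placeholder build context
    tag="techempower/tfb.test",  # placeholder tag
    rm=True,
    forcerm=args.force_rm,       # Citrine would run with --force-rm
)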

@joshka
Contributor Author

joshka commented Mar 24, 2025

> If this is just an issue for contributors trying to get their framework running and passing locally, we should build in a flag to control forcerm and default it to false for contributors, and we'll just set it to true on Citrine.

Thanks for the background. Yep, my concern and perspective was purely iteration speed. I'm locally tweaking a Rust benchmark, and each run has to re-download and recompile all the crates. Turning off forcerm lets me skip that, which cuts a chunk of time off each benchmark cycle.

> Maybe the best solution is simply to tell devs to tweak this option, or to add a flag to the tfb command to disable it. That way we can have fast local testing and not mess with the Citrine setup.

Makes sense - PR incoming

joshka added a commit to joshka/techempower-benchmarks that referenced this issue Mar 24, 2025
… layers

This commit makes it possible to turn off the removal of intermediate
Docker layers when building the tfb containers by adding a `--force-rm`
flag to the tfb script. This is useful when you want to inspect the
intermediate layers for debugging, or to cache dependency builds as a
Docker layer to speed up the build process.

Note that the default behavior is still to remove the intermediate layers,
to avoid filling up the disk with unused layers on the Citrine server.

Fixes: TechEmpower#9718
joshka linked a pull request Mar 24, 2025 that will close this issue