suggestion: parallel builds #8888
Great suggestion. We also previously patched the build process to enable parallel compilation. However, on Windows, signing would often fail due to file locks, causing the entire build to fail. We later switched to running multiple Docker containers for parallel compilation. If we can resolve the issue of file locks during signing, it should be possible to implement concurrent compilation.
Do you mind explaining this or referencing a relevant commit so I can take a look?
I will think about that to see if I can come up with something.
@beyondkmp do you think the builds could temporarily be placed in specific folders, with the lockfile cloned for each build, so that signing could then run against each build's own lockfile?
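A minimal sketch of that staging idea, with all names hypothetical (this is not how electron-builder currently lays out builds): each arch gets its own temp folder and a private copy of the signing lockfile, so concurrent signers never contend on one file.

```typescript
import * as fs from "node:fs/promises"
import * as os from "node:os"
import * as path from "node:path"

// Hypothetical: stage each arch's build in its own temp folder with a
// private copy of the signing lockfile, so signers never share one lock.
async function stageBuild(arch: string, sharedLockfile: string): Promise<string> {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), `build-${arch}-`))
  await fs.copyFile(sharedLockfile, path.join(dir, path.basename(sharedLockfile)))
  return dir // sign against the lockfile inside this folder
}
```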
@mmaietta would love it if you had any suggestions
I don't think this can be blanketly supported, as some programs lock usage during execution. I'm thinking …
Also @beyondkmp, this is incorrect. Please refer to electron-builder as a …
I also don't know what this is referring to haha, can you please elaborate?
I completely agree that it would be a significant refactor. Why can't we run the concurrent tasks first, wait for them all to finish, and then execute the tasks that need synchronization sequentially? I believe more than 95% of the packaging can be concurrent, but I am pretty sure it is going to be a really big refactor.
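A minimal TypeScript sketch of that two-phase idea, assuming a hypothetical `BuildTask` shape (none of these names come from electron-builder itself): parallel-safe packaging runs concurrently, then lock-sensitive steps like Windows signing run strictly one at a time.

```typescript
// Hypothetical task shape: each task resolves to an artifact path.
type BuildTask = () => Promise<string>

async function runBuild(concurrent: BuildTask[], sequential: BuildTask[]): Promise<string[]> {
  // Phase 1: per-arch/per-target packaging can run side by side.
  const artifacts = await Promise.all(concurrent.map(task => task()))

  // Phase 2: tasks that take exclusive file locks (e.g. signing) run one by one.
  for (const task of sequential) {
    artifacts.push(await task())
  }
  return artifacts
}
```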
Just to help me get started on brainstorming a refactor, what areas are you aware of that can be concurrent? My biggest fear here is edge cases / race conditions / file locks that will make it super difficult to debug end-user/dev issue reports.
I'll try to put together a list and experiment in a fork so I can tell which parts could be concurrent without introducing race conditions.
I don't think debugging will be difficult, but handling the race conditions would be pretty much the most challenging part, especially since the JavaScript/TypeScript ecosystem has near-zero atomics support. This would actually need refactors to the core architecture, but I'm pretty sure what we'd end up with is a much more optimized packager.
I think most of the concurrent logic needs to be written outside of js/ts itself, if that makes sense. Although if we do get the asynchronous stuff working correctly, we can do this in JS land, I suppose. See #8405 (relevant)
As in, migrate to Go?
Yes. But not migrate; just write some parts in Go that we can call from JS through IPC or something similar. Something like we had with app-builder-bin.
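A rough sketch of that pattern, assuming a hypothetical prebuilt Go helper binary that speaks newline-delimited JSON over stdio (the protocol and names are made up, not an existing electron-builder API):

```typescript
import { spawn } from "node:child_process"
import { once } from "node:events"
import * as readline from "node:readline"

// Hypothetical helper: send one JSON request to a Go binary over stdin and
// read one JSON reply from stdout.
async function callGoHelper(binaryPath: string, request: object): Promise<unknown> {
  const child = spawn(binaryPath) // stdio defaults to pipes
  const lines = readline.createInterface({ input: child.stdout })

  child.stdin.write(JSON.stringify(request) + "\n")
  child.stdin.end()

  const [reply] = await once(lines, "line") // first output line is the response
  child.kill()
  return JSON.parse(reply as string)
}
```

On the Go side, the helper would simply read a line from stdin, do the work, and print a JSON line back.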
Hmmmm. We're ironically trying to move away from app-builder-bin to JS so that it's easier to maintain the codebase and make modifications (example: the node module collector, to support additional package managers with hoisting) that we weren't able to make to app-builder-bin. The same thing goes for electron artifact downloading (migrating to the official …)
@mmaietta I get it, but you also need to understand that JS/TS is not designed for concurrent operations. Of course, you could always use web workers, but the overhead of spawning a web worker will make it significantly slower. So I am really skeptical of this working well in JS, but I'll give it a spin!
I know that and acknowledge that. As a test run, would you be willing to try this …
@staniel359 I'll try this with my fork of muffon. @mmaietta I'll give it a try soon enough, but I'm a little busy with some stuff right now :)
So I've spent some significant time researching this further, and overall, with minor modifications, it works really well and REALLY fast. The downside that currently has me blocked is that the artifact publishing logic and the update-info generator are keyed off the order of events they receive, which normally is the exact order of the arch(s) configured to build. When running in an async pool, that ordering is lost, as some artifacts unsurprisingly finish faster (such as arm64).

In the interim, to test progress, I did write a test suite for concurrent builds in #8920. But I can't merge that test suite currently, since it increases the duration of the CI suite by 2-3x. When running with my async pool implementation, the impact on CI duration is pretty much not noticeable in initial tests. I just need to fix the publishing-ordering logic first.
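One way to decouple build concurrency from event ordering, as a sketch (the `ArtifactEvent` shape here is invented, not electron-builder's real type): run the builds in parallel, but buffer the results and flush them in the originally configured arch order.

```typescript
// Hypothetical artifact event; electron-builder's real type differs.
interface ArtifactEvent {
  arch: string
  file: string
}

// Build all arches concurrently, then emit events in the configured order,
// regardless of which build finished first.
async function buildInConfiguredOrder(
  arches: string[],
  build: (arch: string) => Promise<ArtifactEvent>,
  publish: (event: ArtifactEvent) => Promise<void>
): Promise<void> {
  // Promise.all preserves input order in its result array even though
  // the underlying builds complete in arbitrary order.
  const events = await Promise.all(arches.map(arch => build(arch)))
  for (const event of events) {
    await publish(event) // publishing stays sequential and ordered
  }
}
```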
Great to hear that!
That's just reordering the publishing to happen at the end, after all builds have completed, am I right?
Looks great!
Understandable! I appreciate your efforts, Mike! Thank you for this!
Yes, but it's not just a return of the promises. It's a generic … (see electron-builder/packages/app-builder-lib/src/publish/PublishManager.ts, lines 131 to 141 at 9ffe333)
So recently, while working on CI, I had to configure electron-builder to build for both x64 and aarch64/arm64.
I was pretty frustrated by the build times (we have to build for a lot of Linux platforms plus two architectures), which were around 16 minutes, so I tried something to reduce them. I broke the two architectures down into two commands (for Linux only).
I then used bash to execute both simultaneously:
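The original commands were elided here; reconstructed as a sketch, something along these lines (the exact targets are an assumption, though `--linux`, `--x64`, and `--arm64` are real electron-builder CLI flags):

```bash
# Run the two per-arch builds as parallel background jobs.
npx electron-builder --linux --x64 &
npx electron-builder --linux --arm64 &
wait   # block until both background builds finish
```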
This executed both at the same time and the total time was now down to 9 minutes from 16.
What this experiment has shown is that electron-builder could explore building binaries as parallel jobs, which could speed up build times substantially depending on the number of CPU cores.
As for configuring the number of parallel jobs: by default I suggest setting it to NUM_CPU_CORES and, if required, allowing it to be modified through electron-builder.json via some property.
I've also thought it through and identified a possible race condition while publishing builds to GitHub (or a similar service). For this I suggest waiting for all builds to finish and then uploading all the assets at once in a single job.
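A minimal sketch of that default, assuming a hypothetical `parallelJobs` property in electron-builder.json (no such option exists today):

```typescript
import * as fs from "node:fs"
import * as os from "node:os"

// Hypothetical config shape; `parallelJobs` is not a real electron-builder option.
interface BuilderConfig {
  parallelJobs?: number
}

function resolveParallelJobs(configPath = "electron-builder.json"): number {
  const config: BuilderConfig = JSON.parse(fs.readFileSync(configPath, "utf8"))
  // Default to the number of CPU cores; allow the config to override it.
  return config.parallelJobs ?? os.cpus().length
}
```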