Build Error (Arch Linux - current aur version) #546
Hi,
Something doesn't look right to me. First, you have a typo in the title: this is evidently about Arch Linux, not "ARC linux" (which is something different). Second, the title says "current git version" but the log file you posted is from building the AUR recipe for the stable 1.3.0 release. Third, I just built both the

Since clean chroot builds with just the spec'd dependencies work properly, the issue with the checks is probably a conflict with something extra, or some unsupported version of something, on your host system. I highly recommend you switch to building AUR packages in chroots. There is tooling available to automate the process, so it is very easy. You'll run into lots of AUR builds that are invalid because they are missing dependencies, but you'll also get much more reliable builds from the ones that are correct, because the build is isolated to the correct environment. When they do work, you'll have higher confidence that they will work as expected, that the built packages can be installed on other systems, and so on.
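(For reference, a rough sketch of the clean-chroot workflow being recommended here, assuming the Arch `devtools` package is installed and that the AUR package is named `gittyup`; these exact commands are not from this thread:)

```sh
# One-time: create a pristine chroot containing only base-devel
mkdir -p ~/chroot
mkarchroot ~/chroot/root base-devel

# Fetch the AUR recipe (package name "gittyup" assumed)
git clone https://aur.archlinux.org/gittyup.git
cd gittyup

# Build inside the chroot; -c resets the working copy of the chroot first,
# so only the dependencies declared in the PKGBUILD are available
makechrootpkg -c -r ~/chroot
```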
Thank you. I understand, I will try with a clean chroot. But just so you know, I have many other AUR packages that I regularly update without any errors.
Looking more closely at your logs, you are using

This is arguably a bug in this project's test suite, as it should do its own cleanup in the tests directory and not rely on a clean starting position for test runs, but it is also very hard to cope with artifacts from previous releases. No project should be expected to do that; a clean build tree is expected when building tagged versions.

The suggestions about building in chroots still apply. Paru even has an option to do this automatically. Arch packages specifically are NOT supposed to clear the build tree as part of the packaging, because we want to be able to restart builds where they left off when working on the packaging. The expectation is that the user will clean up after old-version builds, but is allowed to rebuild the current version in a dirty tree if they want or need to.
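(A hedged sketch of what clearing the cached AUR tree and rebuilding might look like with paru; the cache path and the flag names are assumptions about a default paru setup, not taken from this thread:)

```sh
# Remove the cached clone so no artifacts from older releases survive
# (default paru cache location assumed)
rm -rf ~/.cache/paru/clone/gittyup

# Force a rebuild from scratch; --chroot builds in a clean container
# (flag names per paru's documentation; --chroot requires devtools)
paru -S gittyup --rebuild --chroot
```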
I have completely removed ~/.cache/paru and rebuilt, but got the same error.
I use Manjaro with yay. I deleted ~/.cache/yay/gittyup, tried again, and got the exact same build problem: an error in test 17.
Then we're back to this: something on your systems that is not a dependency is confusing the test suite. First, it would be interesting to know if the build works if you
I'm in the same boat as zimudec, but was able to install it with
I think the vendoring is indeed part of the problem here, but maybe not all of it. I'm seemingly facing the same problem on Fedora 38, and worse problems on Fedora 40. Both also have tests sporadically failing. Both use OpenSSL 3, which results in a linker warning, since OpenSSL is a dependency of Qt and I guess the linker sees there are clashing symbols. To fix that, Gittyup needs to be compiled and linked against the system version of OpenSSL, not a vendored copy of it (assuming the current instructions are followed).

Just out of curiosity, I tried updating the OpenSSL submodule to version 3 and compiling that, but that actually breaks tooling such as cmake when run in the build folder. It's probably down to different compile or link settings, but still, a clear demonstration that it's probably easier to rely on the system-provided copy than to go down the path of managing it yourself.

That said, there must be more to this. Compiling against the system libraries for the ones I do have installed still doesn't get rid of the problems. I've now cleaned both the clone and the build directory quite a few times across two machines, and unless the test data leaks outside the build structure, there's no getting rid of the problems on an unmodified master branch.

If I switch to the Qt6 branch, this particular issue disappears on Fedora 38 and all tests pass most of the time. On Fedora 40 I'm still left with sporadic failures of test 18 and consistent failures of test 23. I tried to see if the Qt6 branch has something that sticks out that could explain it, but aside from Qt6 itself, I don't spot anything. I'm of course not particularly familiar with the code, so that doesn't prove much.
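(One generic way to check the clashing-symbols theory is to inspect which OpenSSL the freshly built binary actually resolves to, and whether a vendored copy is still lying around; the binary and build-tree paths below are assumptions, not something stated in this thread:)

```sh
# Which libssl/libcrypto does the built binary pull in?
# (path to the binary inside the build tree is assumed)
ldd build/gittyup | grep -Ei 'libssl|libcrypto'

# Are there vendored OpenSSL artifacts left in the build tree
# that could shadow the system copy?
find build -name 'libssl*' -o -name 'libcrypto*'
```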
After quite some head scratching, I think this particular issue might be tied to some Qt fun. If I run

I would agree that this could also be a timing issue, but injecting a 2-second delay right before the failing line doesn't help things. So unless something is pulled away too quickly, it seems less likely.

@alerque I guess the chroot setup you ran was based on Qt 5.15.2, or was it some other version?
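(A generic CTest sketch for probing the timing hypothesis: re-run only the suspect test in a loop and see whether it flakes. The test index is just the one mentioned earlier in the thread, and the build directory is an assumption; `--repeat until-fail` needs CTest 3.17 or newer:)

```sh
cd build
# Run test 17 by index, repeating until it fails or 20 runs pass
ctest -I 17,17 --repeat until-fail:20 --output-on-failure
```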
It looks like the package I have around was built with Qt 5.15.9 with some Arch patches. You can always tell with an Arch package by extracting the .BUILDINFO file from the package and looking at the list of things that were installed on the builder. I think I re-tried the build a few times above when people commented, and there was probably a newer Qt at those times, but I don't have the final packages or build logs handy to check.

The current version of Qt in Arch is 5.15.13 plus some Arch patches. If one were to re-run a chroot build now, that's what it would build against. I don't have time to retest a build myself at the moment, but all you need is devtools and
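(Reading the builder's package list out of an Arch package looks roughly like this; the exact package filename is an assumption for illustration:)

```sh
# .BUILDINFO records every package installed in the build environment
bsdtar -xOf gittyup-1.3.0-1-x86_64.pkg.tar.zst .BUILDINFO \
  | grep '^installed' | grep qt5-base
```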
Thanks! I finally got around to spinning up an Arch VM, and it of course has GCC 14 now, so I had to run #759.

This is pretty interesting, because I can both reliably reproduce this and not, depending on what's installed. A chroot build seems to always succeed, although I must say that tests 1 and 2 seem to be quite sensitive to disk I/O performance; I had to move the VM from a spinning disk to an SSD to get the tests to reliably pass.

Anyway, what I notice is that the binaries in the chroot are unable to run with xcb as the Qt platform plugin, and by default run with the offscreen plugin. That's not surprising, since it's a limited environment lacking any GUI display capabilities. However, the moment I remove any display capabilities from the non-chroot environment, the test passes there as well. Install, for example, plasma-desktop, and the same binaries fail. No rebooting or rebuilding needed. This all happens over SSH while the VM itself is booted into multi-user text/console mode.

So with that, I'm pretty interested to hear if anyone is even able to get these tests to pass in a GUI-enabled Linux environment without actually rendering the GUI (i.e. running with the xcb platform plugin instead of offscreen).
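(To make that comparison explicit: the two runs would differ only in the Qt platform plugin selected via the standard QT_QPA_PLATFORM environment variable; the build directory name is an assumption:)

```sh
cd build

# Headless run, as inside the chroot: tests reportedly pass here
QT_QPA_PLATFORM=offscreen ctest --output-on-failure

# Same binaries, but using the real display stack when one is installed
QT_QPA_PLATFORM=xcb ctest --output-on-failure
```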