chatgpt insists that several smaller files are faster to download than one big archive: https://chatgpt.com/share/685e6234-5ef8-8013-97e9-ca9f503a1a82 (the big archive being the qrc packed into webassembly).
compilation time should also go down, since the wasm binary shrinks once the data is pulled out (but i've read that this usually doesn't matter). for some of the data we could defer usage and thereby speed up the startup (create the qml and opengl contexts while the data is still downloading).
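a minimal sketch of how that early, non-blocking fetch could look, assuming we ship the deferred data as plain files next to the wasm binary. QNetworkAccessManager works on webassembly (it is backed by the browser's fetch/xhr there); the class name, the signal names and the url are made up for this sketch:

```cpp
#include <QByteArray>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QObject>
#include <QString>
#include <QUrl>

// hypothetical loader: kicks off the download right at startup and hands the
// bytes out via a signal once they arrive, so qml/opengl setup can run in
// parallel with the transfer.
class DeferredBlobLoader : public QObject {
    Q_OBJECT
public:
    explicit DeferredBlobLoader(QObject *parent = nullptr) : QObject(parent) {}

    void start(const QUrl &url) {
        QNetworkReply *reply = m_nam.get(QNetworkRequest(url));
        connect(reply, &QNetworkReply::finished, this, [this, reply] {
            if (reply->error() == QNetworkReply::NoError)
                emit ready(reply->readAll());       // consumers hook in here
            else
                emit failed(reply->errorString());
            reply->deleteLater();
        });
    }

signals:
    void ready(const QByteArray &data);
    void failed(const QString &error);

private:
    QNetworkAccessManager m_nam;
};
```

usage would be to create the loader and call start() before spinning up the QQmlApplicationEngine, so the transfer overlaps with the qml/opengl setup, and to connect consumers to ready() whenever they exist.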
the label scheduler could wait until the font arrives (see the consumer sketch below).
the aabb decorator could wait until the height cache arrives (and decorate with a placeholder range of 0 - 9k until then).
for the qml context we'll still have to check whether it can be created before its data arrives.
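for the font case, the consumer side could look roughly like the sketch below. QFontDatabase::addApplicationFontFromData / applicationFontFamilies are the actual Qt calls for registering a font from raw bytes at runtime; the LabelScheduler class, its interface and the fallback family are invented here:

```cpp
#include <QByteArray>
#include <QFont>
#include <QFontDatabase>
#include <QObject>
#include <QString>
#include <QStringList>

// hypothetical consumer: keeps labelling with a fallback font (or skips labels)
// until the downloaded font bytes are registered, then re-runs with the real font.
class LabelScheduler : public QObject {
    Q_OBJECT
public slots:
    void onFontReady(const QByteArray &fontData) {
        const int id = QFontDatabase::addApplicationFontFromData(fontData);  // real Qt call
        if (id == -1)
            return;                                   // bytes were not a usable font, keep the fallback
        const QStringList families = QFontDatabase::applicationFontFamilies(id);
        if (!families.isEmpty())
            m_font = QFont(families.first());
        schedule();                                   // re-run labelling with the real font
    }
    void schedule() { /* place labels using m_font ... */ }

private:
    QFont m_font{QStringLiteral("sans-serif")};       // fallback until the real font arrives
};

// wiring, assuming the DeferredBlobLoader sketch above:
//   QObject::connect(&fontLoader, &DeferredBlobLoader::ready,
//                    &labelScheduler, &LabelScheduler::onFontReady);
```

the aabb decorator could follow the same pattern: connect to the height-cache loader's ready() signal, keep decorating with the 0 - 9k placeholder until it fires, then re-decorate with the real heights.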