Use `-Zpackage-workspace` for packing #2349
Conversation
Copilot reviewed 4 out of 7 changed files in this pull request and generated no comments.
Files not reviewed (3)
- eng/scripts/Pack-Crates.ps1: Language not supported
- eng/scripts/Update-PackageVersion.ps1: Language not supported
- eng/scripts/Yank-Crates.ps1: Language not supported
Comments suppressed due to low confidence (3)
eng/scripts/update-pathversions.rs:27
- The revised relative path for repo_root now goes up one fewer directory level than the previous version; please verify that this change aligns with the intended repository structure.
let repo_root = script_root.join("../..").canonicalize()?;
eng/scripts/update-pathversions.rs:41
- [nitpick] Consider using a formal logging framework instead of println for production-level logging to better manage log verbosity and output.
println!("Processing {}", toml_file.path.display());
eng/pipelines/templates/jobs/pack.yml:35
- [nitpick] The removal of the dedicated Rust toolchain template could affect environment setup; please ensure that the correct Rust version is still being applied in the build pipeline.
- template: /eng/pipelines/templates/steps/use-rust.yml@self
/azp run
You have several pipelines (over 10) configured to build pull requests in this repository. Specify which pipelines you would like to run by using /azp run [pipelines] command. You can specify multiple pipelines using a comma separated list.
/azp run rust - template
Azure Pipelines successfully started running 1 pipeline(s).
What's not clear from the PR description is why. Why are we doing all this work, often redoing what `cargo package` and/or `cargo publish` would do? I'm really not crazy about the cost of owning all this and keeping it in sync with any changes the cargo team might make. They don't really document what's in a `.crate` file, which means it could change and all our stuff breaks.
I'd really like to understand why we're doing all these changes.
@@ -20,6 +20,7 @@ members = [
 ]
 exclude = [
     "eng/scripts",
+    "target",
This is always ignored. Why did you add it? This is going to raise a lot of eyebrows.
It's not ignored 😬. The target folder will get package folders in it, and if you run any cargo commands on the packages in /target, cargo complains about the package thinking it's running in a workspace without being a member or excluded.
Why are you running commands against anything in target? That's not our space. For package analysis, though I maintain there's nothing to analyze (it's zipped source), that should probably be done outside the source repo in the temp working directory that AzDO, GHA, etc. give us. Let's not unnecessarily complicate things.
function New-ApiPutFile($crateMetadata, $crateFilePath) {
    $metadataBytes = [Text.Encoding]::Utf8.GetBytes($crateMetadata)
    $metadataLengthBytes = [BitConverter]::GetBytes([UInt32]$metadataBytes.Length)
    $crateBytes = [IO.File]::ReadAllBytes($crateFilePath)
    $crateLengthBytes = [BitConverter]::GetBytes([UInt32]$crateBytes.Length)

    # write an empty checksum file
    '{"files":{}}' | Out-File -FilePath "$LocalRegistryPath/$packageName-$packageVersion/.cargo-checksum.json" -Encoding utf8

    $bytes += $metadataLengthBytes + $metadataBytes + $crateLengthBytes + $crateBytes

    return $bytes
}
Why not just use `Get-FileHash`? It's been around forever. Best not to reinvent the wheel and own its maintenance.
This isn't computing a hash. It's constructing a binary file for posting to the crates.io API.
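For context, a rough sketch of how that body might be submitted, assuming hypothetical $registryUrl and $token values (the publish endpoint is PUT /api/v1/crates/new per the cargo registry web API docs):

# Sketch only: PUT the length-prefixed metadata + .crate body to the registry publish endpoint.
# $registryUrl, $token, $crateMetadata, and $crateFilePath are placeholders, not values from this PR.
$body = New-ApiPutFile $crateMetadata $crateFilePath
Invoke-RestMethod -Method Put `
    -Uri "$registryUrl/api/v1/crates/new" `
    -Headers @{ Authorization = $token } `
    -ContentType 'application/octet-stream' `
    -Body $body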
...which comes back to my original question: why are we doing that? I get why we crack open .nupkg files: there are binaries to check for signing, viruses, etc. .crate files are zipped source. We're spending so much time redoing what `cargo publish` already does, which not only prevents other work from getting done, but I worry it creates a dependency on undocumented implementation details we'll forever have to maintain ourselves.
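As a rough illustration, since a .crate is just a gzipped tarball of the source, it can be inspected with plain tar; the path below is hypothetical:

# Hypothetical crate path, for illustration only.
$crateFile = 'target/package/azure_core-1.0.0.crate'
tar -tzf $crateFile                  # list the packaged source files
tar -xzf $crateFile -C $env:TEMP     # extract outside the repo for inspection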
@epage said that we can only rely on the .crate file in target and not the package folder, so our release process needs to be .crate-centric. Also, by running …
Also, it feels like maintaining the API call will be less maintenance than all of the PowerShell we have around orchestrating the dependencies during build.
All the PowerShell around dependencies was necessary, though, because monorepos aren't well-supported by cargo. I still don't understand the point of us pulling apart a .crate. What are we even analyzing in a zipped source archive? Anything we could scan for, we should scan against the source itself during PRs, CI, etc. All of that can already be done, and not at the last minute when we're trying to publish. /cc @RickWinter
I think we should leave this PR as-is for now while we discuss. I know there are some other high-pri EngSys work items we need to address yet, e.g., caching: whether Azure Pipelines' caching is good enough or if we should use sccache.
cargo publish is only an option if we publish from an uncompressed folder. We only get an uncompressed folder by checking out the repo or by decompressing the .tgz file, and we can't check out the repo in the deployment job.
Because we use workspace references, our package folders can't be published in isolation. The contents of the tarball are publishable in isolation, and the only effect of "double publishing" the tarball is the mangling of the Cargo.toml.orig file. If we publish the original .tgz with a direct API call, we publish with the original Cargo.toml in Cargo.toml.orig.
Until we can get toolset changes, I think we need to seek an exemption.
That's not how …
I'm documenting all of the current dependency version and path behaviors in issue #2475 "Document dependency requirements".
Summary of the changes in this PR:
- Adds `target` to the workspace exclusions
- `Pack-Crates.ps1` packs the `.crate` file instead of the unpacked package folder in `/target` after a `cargo publish --dry-run`
- Uses `+nightly -Zpackage-workspace` instead of a vendor folder and PowerShell to order and stage unpublished dependencies (see the sketch after this list)
- Packs with `--no-verify` unless enabled with the `-Verify` switch
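A rough sketch of the packing step described above (the crate name and version are hypothetical; the exact invocation lives in Pack-Crates.ps1):

# Sketch only: pack on nightly so -Zpackage-workspace can resolve in-workspace
# path dependencies without a vendor folder or dependency-ordering scripts.
cargo +nightly publish --dry-run -Zpackage-workspace --no-verify --package azure_core
# The archive lands in target/package/, e.g. target/package/azure_core-1.0.0.crate,
# and that .crate file is what gets collected for release.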