diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 7cb818ae8e..742a189671 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -1,47 +1,44 @@ # Contributing -Thank you for considering to help out with the source code! We welcome -contributions from anyone on the internet, and are grateful for even the -smallest of fixes! - -If you'd like to contribute to subnet-evm, please fork, fix, commit and send a -pull request for the maintainers to review and merge into the main code base. If -you wish to submit more complex changes though, please check up with the core -devs first on [Discord](https://chat.avalabs.org) to -ensure those changes are in line with the general philosophy of the project -and/or get some early feedback which can make both your efforts much lighter as -well as our review and merge procedures quick and simple. +Thank you for considering to help out with the source code! We welcome contributions from anyone on the internet, and are grateful for even the smallest of fixes! + +If you'd like to contribute to subnet-evm, please fork, fix, commit and send a pull request for the maintainers to review and merge into the main code base. If you wish to submit more complex changes though, please check up with the core devs first on [Discord](https://chat.avalabs.org) to ensure those changes are in line with the general philosophy of the project and/or get some early feedback which can make both your efforts much lighter as well as our review and merge procedures quick and simple. ## Coding guidelines -Please make sure your contributions adhere to our coding and documentation -guidelines: +Please make sure your contributions adhere to our coding guidelines: - Code must adhere to the official Go [formatting](https://go.dev/doc/effective_go#formatting) guidelines (i.e. uses [gofmt](https://pkg.go.dev/cmd/gofmt)). +- Code must be documented adhering to the official Go + [commentary](https://go.dev/doc/effective_go#commentary) guidelines. - Pull requests need to be based on and opened against the `master` branch. - Pull reuqests should include a detailed description -- Commits are required to be signed. See [here](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits) +- Commits are required to be signed. See the [commit signature verification documentation](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits) for information on signing commits. - Commit messages should be prefixed with the package(s) they modify. - E.g. "eth, rpc: make trace configs optional" -### Mocks +## Can I have feature X + +Before you submit a feature request, please check and make sure that it isn't possible through some other means. + +## Mocks Mocks are auto-generated using [mockgen](https://pkg.go.dev/go.uber.org/mock/mockgen) and `//go:generate` commands in the code. -- To **re-generate all mocks**, use the command below from the root of the project: +- To **re-generate all mocks**, use the task below from the root of the project: - ```sh - go generate -run mockgen ./... 
- ``` + ```sh + task generate-mocks + ``` -* To **add** an interface that needs a corresponding mock generated: - * if the file `mocks_generate_test.go` exists in the package where the interface is located, either: - * modify its `//go:generate go tool -modfile=tools/go.mod mockgen` to generate a mock for your interface (preferred); or - * add another `//go:generate go tool -modfile=tools/go.mod mockgen` to generate a mock for your interface according to specific mock generation settings - * if the file `mocks_generate_test.go` does not exist in the package where the interface is located, create it with content (adapt as needed): +- To **add** an interface that needs a corresponding mock generated: + - if the file `mocks_generate_test.go` exists in the package where the interface is located, either: + - modify its `//go:generate go tool -modfile=tools/go.mod mockgen` to generate a mock for your interface (preferred); or + - add another `//go:generate go tool -modfile=tools/go.mod mockgen` to generate a mock for your interface according to specific mock generation settings + - if the file `mocks_generate_test.go` does not exist in the package where the interface is located, create it with content (adapt as needed): ```go // Copyright (C) 2019-2025, Ava Labs, Inc. All rights reserved. @@ -58,8 +55,8 @@ Mocks are auto-generated using [mockgen](https://pkg.go.dev/go.uber.org/mock/moc - To **remove** an interface from having a corresponding mock generated: 1. Edit the `mocks_generate_test.go` file in the directory where the interface is defined 1. If the `//go:generate` mockgen command line: - * generates a mock file for multiple interfaces, remove your interface from the line - * generates a mock file only for the interface, remove the entire line. If the file is empty, remove `mocks_generate_test.go` as well. + - generates a mock file for multiple interfaces, remove your interface from the line + - generates a mock file only for the interface, remove the entire line. If the file is empty, remove `mocks_generate_test.go` as well. ## Tool Dependencies @@ -67,17 +64,20 @@ This project uses `go tool` to manage development tool dependencies in `tools/go ### Managing Tools -* To **add a new tool**: +- To **add a new tool**: + ```sh go get -tool -modfile=tools/go.mod example.com/tool/cmd/toolname@version ``` -* To **upgrade a tool**: +- To **upgrade a tool**: + ```sh go get -tool -modfile=tools/go.mod example.com/tool/cmd/toolname@newversion ``` -* To **run a tool manually**: +- To **run a tool manually**: + ```sh go tool -modfile=tools/go.mod toolname [args...] ``` diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index a06a6658c8..66282cc3f0 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -31,4 +31,4 @@ Which OS you used to reveal the bug. **Additional context** Add any other context about the problem here. -Avalanche Bug Bounty program can be found [here](https://immunefi.com/bug-bounty/avalanche/information/). +You can submit a bug on the [Avalanche Bug Bounty program page](https://immunefi.com/bug-bounty/avalanche/information/). diff --git a/.github/ISSUE_TEMPLATE/feature_spec.md b/.github/ISSUE_TEMPLATE/feature_spec.md index a219c86753..660752cf6f 100644 --- a/.github/ISSUE_TEMPLATE/feature_spec.md +++ b/.github/ISSUE_TEMPLATE/feature_spec.md @@ -16,4 +16,4 @@ Include a description of the changes to be made to the code along with alternati that were considered, including pro/con analysis where relevant. 
**Open questions** -Questions that are still being discussed. \ No newline at end of file +Questions that are still being discussed diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 7751a8146f..d1b4cd3a7b 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -1,3 +1,4 @@ + ## Why this should be merged ## How this works diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index efb32391b0..6c747e7543 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -35,6 +35,14 @@ jobs: env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} run: ./scripts/run_task.sh check-avalanchego-version + links-lint: + name: Markdown Links Lint + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - uses: umbrelladocs/action-linkspector@de84085e0f51452a470558693d7d308fbb2fa261 #v1.2.5 + with: + fail_level: any unit_test: name: Golang Unit Tests (${{ matrix.os }}) diff --git a/README.md b/README.md index f8d058450f..ec0584ae9b 100644 --- a/README.md +++ b/README.md @@ -5,11 +5,11 @@ [![CodeQL](https://github.com/ava-labs/subnet-evm/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/ava-labs/subnet-evm/actions/workflows/codeql-analysis.yml) [![License](https://img.shields.io/github/license/ava-labs/subnet-evm)](https://github.com/ava-labs/subnet-evm/blob/master/LICENSE) -[Avalanche](https://docs.avax.network/avalanche-l1s) is a network composed of multiple blockchains. +[Avalanche](https://build.avax.network/docs/avalanche-l1s) is a network composed of multiple blockchains. Each blockchain is an instance of a Virtual Machine (VM), much like an object in an object-oriented language is an instance of a class. That is, the VM defines the behavior of the blockchain. -Subnet EVM is the [Virtual Machine (VM)](https://docs.avax.network/learn/virtual-machines) that defines the Subnet Contract Chains. Subnet EVM is a simplified version of [Coreth VM (C-Chain)](https://github.com/ava-labs/coreth). +Subnet EVM is the [Virtual Machine (VM)](https://build.avax.network/docs/quick-start/virtual-machines) that defines the Subnet Contract Chains. Subnet EVM is a simplified version of [Coreth VM (C-Chain)](https://github.com/ava-labs/coreth). This chain implements the Ethereum Virtual Machine and supports Solidity smart contracts as well as most other Ethereum client functionality. @@ -44,11 +44,11 @@ The Subnet EVM supports the following API namespaces: Only the `eth` namespace is enabled by default. Subnet EVM is a simplified version of [Coreth VM (C-Chain)](https://github.com/ava-labs/coreth). -Full documentation for the C-Chain's API can be found [here](https://build.avax.network/docs/api-reference/c-chain/api). +Full documentation for the C-Chain's API can be found in the [builder docs](https://build.avax.network/docs/rpcs/c-chain). ## Compatibility -The Subnet EVM is compatible with almost all Ethereum tooling, including [Remix](https://docs.avax.network/build/dapp/smart-contracts/remix-deploy), [Metamask](https://docs.avax.network/build/dapp/chain-settings), and [Foundry](https://docs.avax.network/build/dapp/smart-contracts/toolchains/foundry). +Subnet-EVM is compatible with almost all Ethereum tooling, including [Foundry](https://build.avax.network/academy/blockchain/solidity-foundry/03-smart-contracts/03-foundry-quickstart) and [Remix](https://build.avax.network/docs/avalanche-l1s/add-utility/deploy-smart-contract#using-remix). 
**Note:** Subnet-EVM and Avalanche C-Chain currently implement the Ethereum Cancun fork and do not yet support newer hardforks (such as Pectra). Since Solidity v0.8.30 switched its default target EVM version to Pectra, contracts compiled with default settings may emit bytecode using instructions/features that Avalanche does not support. To avoid this mismatch, explicitly set the Solidity compiler’s `evmVersion` to `cancun` when deploying to Subnet-EVM or the C-Chain. @@ -73,7 +73,7 @@ To support these changes, there have been a number of changes to the SubnetEVM b ### Clone Subnet-evm -First install Go 1.24.9 or later. Follow the instructions [here](https://go.dev/doc/install). You can verify by running `go version`. +First install Go 1.24.9 or later. Follow the instructions on the [go docs](https://go.dev/doc/install). You can verify by running `go version`. Set `$GOPATH` environment variable properly for Go to look for Go Workspaces. Please read [this](https://go.dev/doc/code) for details. You can verify by running `echo $GOPATH`. @@ -97,8 +97,8 @@ To run a local network, it is recommended to use the [avalanche-cli](https://git There are two options when using the Avalanche-CLI: -1. Use an official Subnet-EVM release: -2. Build and deploy a locally built (and optionally modified) version of Subnet-EVM: +1. Use an official Subnet-EVM release: +1. Build and deploy a locally built (and optionally modified) version of Subnet-EVM: ## Releasing diff --git a/RELEASES.md b/RELEASES.md index a9429b40c9..71371e75fa 100644 --- a/RELEASES.md +++ b/RELEASES.md @@ -1,3 +1,5 @@ + + # Release Notes ## [v0.8.1](https://github.com/ava-labs/subnet-evm/releases/tag/v0.8.1) @@ -201,7 +203,7 @@ The plugin version is unchanged at 37 and is compatible with AvalancheGo version - Added following new database options: - `"use-standalone-database"` (`bool`): If true it enables creation of standalone database. If false it uses the GRPC Database provided by AvalancheGo. Default is nil and creates the standalone database only if there is no accepted block in the AvalancheGo database (node has not accepted any blocks for this chain) - `"database-type"` (`string`): Specifies the type of database to use. Must be one of `pebbledb`, `leveldb` or `memdb`. memdb is an in-memory, non-persisted database. Default is `pebbledb` - - `"database-config-file"` (`string`): Path to the database config file. Config file is changed for every database type. See [docs](https://docs.avax.network/api-reference/avalanche-go-configs-flags#database-config) for available configs per database type. Ignored if --config-file-content is specified + - `"database-config-file"` (`string`): Path to the database config file. Config file is changed for every database type. See [docs](https://build.avax.network/docs/nodes/configure/configs-flags#database-config) for available configs per database type. Ignored if --config-file-content is specified - `"database-config-file-content"` (`string`): As an alternative to `database-config-file`, it allows specifying base64 encoded database config content - `"database-path"` (`string`): Specifies the directory to which the standalone database is persisted. Defaults to "`$HOME/.avalanchego/chainData/{chainID}`" - `"database-read-only"` (`bool`) : Specifies if the standalone database should be a read-only type. 
Defaults to false diff --git a/cmd/precompilegen/template-readme.md b/cmd/precompilegen/template-readme.md index 09aa152658..b5591b83c1 100644 --- a/cmd/precompilegen/template-readme.md +++ b/cmd/precompilegen/template-readme.md @@ -2,7 +2,7 @@ There are some must-be-done changes waiting in the generated file. Each area req Additionally there are other files you need to edit to activate your precompile. These areas are highlighted with comments "ADD YOUR PRECOMPILE HERE". For testing take a look at other precompile tests in contract_test.go and config_test.go in other precompile folders. -See the tutorial in for more information about precompile development. +See the tutorial in for more information about precompile development. General guidelines for precompile development: diff --git a/cmd/simulator/README.md b/cmd/simulator/README.md index b199106934..b64e286770 100644 --- a/cmd/simulator/README.md +++ b/cmd/simulator/README.md @@ -24,11 +24,11 @@ To confirm that you built successfully, run the simulator and print the version: This should give the following output: -``` +```bash v0.1.0 ``` -To run the load simulator, you must first start an EVM based network. The load simulator works on both the C-Chain and Subnet-EVM, so we will start a single node network and run the load simulator on the C-Chain. +To run the load simulator, you must first start an EVM based network. The load simulator works on both the C-Chain and Subnet-EVM, so we will start a single node network and run the load simulator on a Subnet-EVM blockchain. To start a single node network, follow the instructions from the AvalancheGo [README](https://github.com/ava-labs/avalanchego#building-avalanchego) to build from source. @@ -45,9 +45,9 @@ The `--sybil-protection-enabled=false` flag is only suitable for local testing. 1. Ignore stake weight on the P-Chain and count each connected peer as having a stake weight of 1 2. Automatically opts in to validate every Subnet -Once you have AvalancheGo running locally, it will be running an HTTP Server on the default port `9650`. This means that the RPC Endpoint for the C-Chain will be http://127.0.0.1:9650/ext/bc/C/rpc and ws://127.0.0.1:9650/ext/bc/C/ws for WebSocket connections. +Once you have AvalancheGo running locally, it will be running an HTTP Server on the default port `9650`. This means that the RPC Endpoint for your Subnet-EVM blockchain will be `http://127.0.0.1:9650/ext/bc/BLOCKCHAIN_ID/rpc` and `ws://127.0.0.1:9650/ext/bc/BLOCKCHAIN_ID/ws` for WebSocket connections, where `BLOCKCHAIN_ID` is the blockchain ID of your deployed Subnet-EVM instance. -Now, we can run the simulator command to simulate some load on the local C-Chain for 30s: +Now, we can run the simulator command to simulate some load on the local Subnet-EVM blockchain: ```bash ./simulator --timeout=1m --workers=1 --max-fee-cap=300 --max-tip-cap=10 --txs-per-worker=50 diff --git a/consensus/dummy/README.md b/consensus/dummy/README.md index f2269e3019..d49ff0c350 100644 --- a/consensus/dummy/README.md +++ b/consensus/dummy/README.md @@ -1,6 +1,6 @@ # Consensus -Disclaimer: the consensus package in subnet-evm is a complete misnomer. +Disclaimer: the consensus package in Subnet-EVM is a complete misnomer. The consensus package in go-ethereum handles block validation and specifically handles validating the PoW portion of consensus - thus the name. 
@@ -12,7 +12,7 @@ The dummy consensus engine is responsible for performing verification on the hea ## Dynamic Fees -Subnet-EVM includes a dynamic fee algorithm based off of (EIP-1559)[https://eips.ethereum.org/EIPS/eip-1559]. This introduces a field to the block type called `BaseFee`. The Base Fee sets a minimum gas price for any transaction to be included in the block. For example, a transaction with a gas price of 49 gwei, will be invalid to include in a block with a base fee of 50 gwei. +Subnet-EVM includes a dynamic fee algorithm based off of [EIP-1559](https://eips.ethereum.org/EIPS/eip-1559). This introduces a field to the block type called `BaseFee`. The Base Fee sets a minimum gas price for any transaction to be included in the block. For example, a transaction with a gas price of 49 gwei, will be invalid to include in a block with a base fee of 50 gwei. The dynamic fee algorithm aims to adjust the base fee to handle network congestion. Subnet-EVM sets a target utilization on the network, and the dynamic fee algorithm adjusts the base fee accordingly. If the network operates above the target utilization, the dynamic fee algorithm will increase the base fee to make utilizing the network more expensive and bring overall utilization down. If the network operates below the target utilization, the dynamic fee algorithm will decrease the base fee to make it cheaper to use the network. @@ -30,4 +30,4 @@ The FinalizeAndAssemble callback is used as the final step in building a block w ### Finalize -Finalize is called as the final step in processing a block [here](../../core/state_processor.go). Since either Finalize or FinalizeAndAssemble are called, but not both, when building or verifying/processing a block they need to perform the exact same processing/verification step to ensure that a block produced by the miner where FinalizeAndAssemble is called will be processed and verified in the same way when Finalize gets called. +Finalize is called as the final step in processing a block in [state_processor.go](../../core/state_processor.go). Since either Finalize or FinalizeAndAssemble are called, but not both, when building or verifying/processing a block they need to perform the exact same processing/verification step to ensure that a block produced by the miner where FinalizeAndAssemble is called will be processed and verified in the same way when Finalize gets called. diff --git a/contracts/README.md b/contracts/README.md index 3e37fea846..db0284bf83 100644 --- a/contracts/README.md +++ b/contracts/README.md @@ -21,11 +21,13 @@ This project requires Go 1.21 or later. Install from [golang.org](https://golang The Solidity compiler version 0.8.30 is required to compile contracts. In CI, this is installed automatically via the [setup-solc](https://github.com/ARR4N/setup-solc) GitHub Action. 
For local development, install solc 0.8.30: -- **macOS**: `brew install solidity` + +- **macOS**: `brew install solidity` - **Linux**: Follow instructions at [solidity docs](https://docs.soliditylang.org/en/latest/installing-solidity.html) - **CI**: Automatically installed via GitHub Actions After installation, create a version-specific alias or symlink: + ```bash # Option 1: Symlink (works in all contexts including go generate) sudo ln -sf $(which solc) /usr/local/bin/solc-v0.8.30 # Linux @@ -37,7 +39,7 @@ echo "alias solc-v0.8.30='solc'" >> ~/.bashrc # or ~/.zshrc ### Solidity and Avalanche -It is also helpful to have a basic understanding of [Solidity](https://docs.soliditylang.org) and [Avalanche](https://docs.avax.network). +It is also helpful to have a basic understanding of [Solidity](https://docs.soliditylang.org) and [Avalanche](https://build.avax.network/docs/quick-start). ## Dependencies @@ -62,8 +64,9 @@ From the repository root, run: ``` This will: + 1. Compile all Solidity contracts in `contracts/contracts/` to ABIs and bytecode -2. Generate Go bindings in `contracts/bindings/` +1. Generate Go bindings in `contracts/bindings/` The compilation artifacts (`.abi` and `.bin` files) are stored in `contracts/artifacts/` (gitignored). The generated Go bindings in `contracts/bindings/` are committed to the repository. @@ -79,8 +82,9 @@ go generate ./... # Compile contracts and generate bindings ``` All compilation and code generation is configured in `contracts/contracts/compile.go` using `go:generate` directives. The directives execute in order: + 1. First, `solc` compiles `.sol` files to `.abi` and `.bin` files in `artifacts/` -2. Then, `abigen` generates Go bindings from the artifacts to `bindings/*.go` +1. Then, `abigen` generates Go bindings from the artifacts to `bindings/*.go` ## Write Contracts @@ -98,7 +102,7 @@ For more information about precompiles see [subnet-evm precompiles](https://gith ## Hardhat Config -Hardhat uses `hardhat.config.js` as the configuration file. You can define tasks, networks, compilers and more in that file. For more information see [here](https://hardhat.org/config/). +Hardhat uses `hardhat.config.js` as the configuration file. You can define tasks, networks, compilers and more in that file. For more information see [the hardhat configuration docs](https://hardhat.org/config/). In Subnet-EVM, we provide a pre-configured file [hardhat.config.ts](https://github.com/ava-labs/subnet-evm/blob/master/contracts/hardhat.config.ts). diff --git a/core/README.md b/core/README.md index f8b71693f5..a094dab6d8 100644 --- a/core/README.md +++ b/core/README.md @@ -6,7 +6,7 @@ The core package maintains the backend for the blockchain, transaction pool, and The [BlockChain](./blockchain.go) struct handles the insertion of blocks into the maintained chain. It maintains a "canonical chain", which is essentially the preferred chain (the chain that ends with the block preferred by the AvalancheGo consensus engine). -When the consensus engine verifies blocks as they are ready to be issued into consensus, it calls `Verify()` on the ChainVM Block interface implemented [here](../plugin/evm/block.go). This calls `InsertBlockManual` on the BlockChain struct implemented in this package, which is the first entrypoint of a block into the blockchain. +When the consensus engine verifies blocks as they are ready to be issued into consensus, it calls `Verify()` on the ChainVM Block interface implemented in [wrapped_block.go](../plugin/evm/wrapped_block.go). 
This calls `InsertBlockManual` on the BlockChain struct implemented in this package, which is the first entrypoint of a block into the blockchain. InsertBlockManual verifies the block, inserts it into the state manager to track the merkle trie for the block, and adds it to the canonical chain if it extends the currently preferred chain. @@ -20,7 +20,7 @@ The transaction pool maintains the set of transactions that need to be issued in ## State Manager -The State Manager manages the [TrieDB](../trie/database.go). The TrieDB tracks a merkle forest of all of the merkle tries for the last accepted block and processing blocks. When a block is processed, the state transition results in a new merkle trie added to the merkle forest. The State Manager can operate in either archival or pruning mode. +The State Manager manages references to state roots in the TrieDB implementations (see [`triedb`](../triedb/) for hashdb, pathdb, and firewood implementations). The TrieDB stores trie nodes (the individual components of state tries) in memory and on disk. When a block is processed, the state transition results in a new state root, and the TrieDB updates or inserts the trie nodes that compose this state. The State Manager tracks which state roots are referenced by processing blocks and manages when to commit trie nodes to disk or dereference them. The State Manager can operate in either archival or pruning mode. ### Archival Mode diff --git a/docs/releasing/README.md b/docs/releasing/README.md index be480a2a93..3887dd1fb1 100644 --- a/docs/releasing/README.md +++ b/docs/releasing/README.md @@ -18,7 +18,7 @@ export VERSION_RC=v0.7.3-rc.0 export VERSION=v0.7.3 ``` -Remember to use the appropriate versioning for your release. +Remember to use the appropriate versioning for your release. 1. Create your branch, usually from the tip of the `master` branch: @@ -28,16 +28,18 @@ Remember to use the appropriate versioning for your release. git checkout -b "releases/$VERSION_RC" ``` -2. Update the [RELEASES.md](../../RELEASES.md) file with the new release version `$VERSION`. -3. Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to the desired `$VERSION`. -4. Ensure the AvalancheGo version used in [go.mod](../../go.mod) is [its last release](https://github.com/ava-labs/avalanchego/releases). If not, upgrade it with, for example: +1. Update the [RELEASES.md](../../RELEASES.md) file with the new release version `$VERSION`. +1. Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to the desired `$VERSION`. +1. Ensure the AvalancheGo version used in [go.mod](../../go.mod) is [its last release](https://github.com/ava-labs/avalanchego/releases). If not, upgrade it with, for example: + ```bash go get github.com/ava-labs/avalanchego@v1.13.0 go mod tidy ``` + And fix any errors that may arise from the upgrade. If it requires significant changes, you may want to create a separate PR for the upgrade and wait for it to be merged before continuing with this procedure. -5. Add an entry in the object in [compatibility.json](../../compatibility.json), adding the target release `$VERSION` as key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. For example, we would add: +1. 
Add an entry in the object in [compatibility.json](../../compatibility.json), adding the target release `$VERSION` as key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. For example, we would add: ```json "v0.7.3": 39, @@ -55,15 +57,15 @@ Remember to use the appropriate versioning for your release. compatibility.json has subnet-evm version v0.7.3 stated as compatible with RPC chain VM protocol version 0 but AvalancheGo protocol version is 39 ``` - This message can help you figure out what the correct RPC chain VM protocol version (here `39`) has to be in compatibility.json for your current release. Alternatively, you can refer to the [Avalanchego repository `version/compatibility.json` file](https://github.com/ava-labs/avalanchego/blob/main/version/compatibility.json) to find the RPC chain VM protocol version matching the AvalancheGo version we use here. -6. Specify the AvalancheGo compatibility in the [README.md relevant section](../../README.md#avalanchego-compatibility). For example we would add: + This message can help you figure out what the correct RPC chain VM protocol version (here `39`) has to be in compatibility.json for your current release. Alternatively, you can refer to the [Avalanchego repository `version/compatibility.json` file](https://github.com/ava-labs/avalanchego/blob/master/version/compatibility.json) to find the RPC chain VM protocol version matching the AvalancheGo version we use here. +1. Specify the AvalancheGo compatibility in the [README.md relevant section](../../README.md#avalanchego-compatibility). For example we would add: ```text ... [v0.7.3] AvalancheGo@v1.12.2/1.13.0-fuji/1.13.0 (Protocol Version: 39) ``` -7. Commit your changes and push the branch +1. Commit your changes and push the branch ```bash git add . @@ -71,26 +73,27 @@ Remember to use the appropriate versioning for your release. git push -u origin "releases/$VERSION_RC" ``` -8. Create a pull request (PR) from your branch targeting master, for example using [`gh`](https://cli.github.com/): +1. Create a pull request (PR) from your branch targeting master, for example using [`gh`](https://cli.github.com/): ```bash gh pr create --repo github.com/ava-labs/subnet-evm --base master --title "chore: release $VERSION_RC" ``` -9. Wait for the PR checks to pass with +1. Wait for the PR checks to pass with ```bash gh pr checks --watch ``` -10. Squash and merge your release branch into `master`, for example: +1. Squash and merge your release branch into `master`, for example: ```bash gh pr merge "releases/$VERSION_RC" --squash --subject "chore: release $VERSION_RC" --body "\n- Update AvalancheGo from v1.1X.X to v1.1X.X" ``` + Ensure you properly label the AvalancheGo version. -11. Create and push a tag from the `master` branch: +1. Create and push a tag from the `master` branch: ```bash git fetch origin master @@ -111,7 +114,7 @@ Once the tag is created, you need to test it on the Fuji testnet both locally an 1. Find the Dispatch and Echo L1s blockchain ID and subnet ID: - [Dispatch L1 details](https://subnets-test.avax.network/dispatch/details). Its subnet id is `7WtoAMPhrmh5KosDUsFL9yTcvw7YSxiKHPpdfs4JsgW47oZT5`. - [Echo L1 details](https://subnets-test.avax.network/echo/details). Its subnet id is `i9gFpZQHPLcGfZaQLiwFAStddQD7iTKBpFfurPFJsXm1CkTZK`. -2. Get the blockchain ID and VM ID of the Echo and Dispatch L1s with: +1. 
Get the blockchain ID and VM ID of the Echo and Dispatch L1s with: - Dispatch: ```bash @@ -152,13 +155,13 @@ Once the tag is created, you need to test it on the Fuji testnet both locally an VM id: meq3bv7qCMZZ69L8xZRLwyKnWp6chRwyscq8VPtHWignRQVVF ``` -3. In the subnet-evm directory, build the VM using +1. In the subnet-evm directory, build the VM using ```bash ./scripts/build.sh vm.bin ``` -4. Copy the VM binary to the plugins directory, naming it with the VM ID: +1. Copy the VM binary to the plugins directory, naming it with the VM ID: ```bash mkdir -p ~/.avalanchego/plugins @@ -167,20 +170,20 @@ Once the tag is created, you need to test it on the Fuji testnet both locally an rm vm.bin ``` -5. Clone [AvalancheGo](https://github.com/ava-labs/avalanchego): +1. Clone [AvalancheGo](https://github.com/ava-labs/avalanchego): ```bash git clone git@github.com:ava-labs/avalanchego.git ``` -6. Checkout correct AvalancheGo version, the version should match the one used in Subnet-EVM `go.mod` file +1. Checkout correct AvalancheGo version, the version should match the one used in Subnet-EVM `go.mod` file ```bash cd avalanchego git checkout v1.13.0 ``` -7. Get upgrades for each L1 and write them out to `~/.avalanchego/configs/chains//upgrade.json`: +1. Get upgrades for each L1 and write them out to `~/.avalanchego/configs/chains//upgrade.json`: ```bash mkdir -p ~/.avalanchego/configs/chains/2D8RG4UpSXbPbvPCAWppNJyqTG2i2CAXSkTgmTBBvs7GKNZjsY @@ -206,32 +209,32 @@ Once the tag is created, you need to test it on the Fuji testnet both locally an jq -r '.result.upgrades' > ~/.avalanchego/configs/chains/98qnjenm7MBd8G2cPZoRvZrgJC33JGSAAKghsQ6eojbLCeRNp/upgrade.json ``` -8. (Optional) You can tweak the `config.json` for each L1 if you want to test a particular feature for example. +1. (Optional) You can tweak the `config.json` for each L1 if you want to test a particular feature for example. - Dispatch: `~/.avalanchego/configs/chains/2D8RG4UpSXbPbvPCAWppNJyqTG2i2CAXSkTgmTBBvs7GKNZjsY/config.json` - Echo: `~/.avalanchego/configs/chains/98qnjenm7MBd8G2cPZoRvZrgJC33JGSAAKghsQ6eojbLCeRNp/config.json` -9. (Optional) If you want to reboostrap completely the chain, you can remove `~/.avalanchego/chainData//db/pebbledb`, for example: +1. (Optional) If you want to reboostrap completely the chain, you can remove `~/.avalanchego/chainData//db/pebbledb`, for example: - Dispatch: `rm -r ~/.avalanchego/chainData/2D8RG4UpSXbPbvPCAWppNJyqTG2i2CAXSkTgmTBBvs7GKNZjsY/db/pebbledb` - Echo: `rm -r ~/.avalanchego/chainData/98qnjenm7MBd8G2cPZoRvZrgJC33JGSAAKghsQ6eojbLCeRNp/db/pebbledb` AvalancheGo keeps its database in `~/.avalanchego/db/fuji/v1.4.5/*.ldb` which you should not delete. -10. Build AvalancheGo: +1. Build AvalancheGo: ```bash ./scripts/build.sh ``` -11. Run AvalancheGo tracking the Dispatch and Echo Subnet IDs: +1. Run AvalancheGo tracking the Dispatch and Echo Subnet IDs: ```bash ./build/avalanchego --network-id=fuji --partial-sync-primary-network --public-ip=127.0.0.1 \ --track-subnets=7WtoAMPhrmh5KosDUsFL9yTcvw7YSxiKHPpdfs4JsgW47oZT5,i9gFpZQHPLcGfZaQLiwFAStddQD7iTKBpFfurPFJsXm1CkTZK ``` -12. Follow the logs and wait until you see the following lines: +1. Follow the logs and wait until you see the following lines: - line stating the health `check started passing` - line containing `consensus started` - line containing `bootstrapped healthy nodes` -13. In another terminal, check you can obtain the current block number for both chains: +1. 
In another terminal, check you can obtain the current block number for both chains: - Dispatch: @@ -329,7 +332,7 @@ Following the previous example in the [Release candidate section](#release-candi git push origin "$VERSION" ``` -2. Create a new release on Github, either using: +1. Create a new release on Github, either using: - the [Github web interface](https://github.com/ava-labs/subnet-evm/releases/new) 1. In the "Choose a tag" box, select the tag previously created `$VERSION` (`v0.7.3`) 2. Pick the previous tag, for example as `v0.7.2`. @@ -373,29 +376,34 @@ Following the previous example in the [Release candidate section](#release-candi gh release create "$VERSION" --notes-start-tag "$PREVIOUS_VERSION" --notes-from-tag "$VERSION" --title "$VERSION" --notes "$NOTES" --verify-tag ``` -3. Monitor the [release Github workflow](https://github.com/ava-labs/subnet-evm/actions/workflows/release.yml) to ensure the GoReleaser step succeeds and check the binaries are then published to [the releases page](https://github.com/ava-labs/subnet-evm/releases). In case this fails, you can trigger the workflow manually: +1. Monitor the [release Github workflow](https://github.com/ava-labs/subnet-evm/actions/workflows/release.yml) to ensure the GoReleaser step succeeds and check the binaries are then published to [the releases page](https://github.com/ava-labs/subnet-evm/releases). In case this fails, you can trigger the workflow manually: 1. Go to [github.com/ava-labs/subnet-evm/actions/workflows/release.yml](https://github.com/ava-labs/subnet-evm/actions/workflows/release.yml) 1. Click on the "Run workflow" button 1. Enter the branch name, usually with goreleaser related fixes 1. Enter the tag name `$VERSION` (i.e. `v0.7.3`) -4. Monitor the [Publish Docker image workflow](https://github.com/ava-labs/subnet-evm/actions/workflows/publish_docker.yml) succeeds. Note this workflow is triggered when pushing the tag, unlike Goreleaser which triggers when publishing the release. -5. Finally, [create a release for precompile-evm](https://github.com/ava-labs/precompile-evm/blob/main/docs/releasing/README.md) +1. Monitor the [Publish Docker image workflow](https://github.com/ava-labs/subnet-evm/actions/workflows/publish_docker.yml) succeeds. Note this workflow is triggered when pushing the tag, unlike Goreleaser which triggers when publishing the release. +1. Finally, [create a release for precompile-evm](https://github.com/ava-labs/precompile-evm/blob/main/docs/releasing/README.md) ### Post-release + After you have successfully released a new subnet-evm version, you need to bump all of the versions again in preperation for the next release. Note that the release here is not final, and will be reassessed, and possibly changer prior to release. Some releases require a major version update, but this will usually be `$VERSION` + `0.0.1`. For example: + ```bash export P_VERSION=v0.7.4 ``` + 1. Create a branch, from the tip of the `master` branch after the release PR has been merged: + ```bash git fetch origin master git checkout master git checkout -b "prep-$P_VERSION-release" ``` + 1. Bump the version number to the next pending release version, `$P_VERSION` - - Update the [RELEASES.md](../../RELEASES.md) file with `$P_VERSION`, creating a space for maintainers to place their changes as they make them. - - Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to `$P_VERSION`. -1. 
Add an entry in the object in [compatibility.json](../../compatibility.json), adding the next pending release versionas key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. For example, we would add: + - Update the [RELEASES.md](../../RELEASES.md) file with `$P_VERSION`, creating a space for maintainers to place their changes as they make them. + - Modify the [plugin/evm/version.go](../../plugin/evm/version.go) `Version` global string variable and set it to `$P_VERSION`. + - Add an entry in the object in [compatibility.json](../../compatibility.json), adding the next pending release versionas key and the AvalancheGo RPC chain VM protocol version as value, to the `"rpcChainVMProtocolVersion"` JSON object. For example, we would add: ```json "v0.7.4": 39, @@ -413,7 +421,7 @@ export P_VERSION=v0.7.4 compatibility.json has subnet-evm version v0.7.4 stated as compatible with RPC chain VM protocol version 0 but AvalancheGo protocol version is 39 ``` - This message can help you figure out what the correct RPC chain VM protocol version (here `39`) has to be in compatibility.json for your current release. Alternatively, you can refer to the [Avalanchego repository `version/compatibility.json` file](https://github.com/ava-labs/avalanchego/blob/main/version/compatibility.json) to find the RPC chain VM protocol version matching the AvalancheGo version we use here. + This message can help you figure out what the correct RPC chain VM protocol version (here `39`) has to be in compatibility.json for your current release. Alternatively, you can refer to the [Avalanchego repository `version/compatibility.json` file](https://github.com/ava-labs/avalanchego/blob/master/version/compatibility.json) to find the RPC chain VM protocol version matching the AvalancheGo version we use here. 1. Commit your changes and push the branch ```bash @@ -421,16 +429,23 @@ export P_VERSION=v0.7.4 git commit -S -m "chore: prep release $P_VERSION" git push -u origin "prep-$P_VERSION-release" ``` + 1. Create a pull request (PR) from your branch targeting master, for example using [`gh`](https://cli.github.com/): + ```bash gh pr create --repo github.com/ava-labs/subnet-evm --base master --title "chore: prep next release $P_VERSION" ``` + 1. Wait for the PR checks to pass with + ```bash gh pr checks --watch ``` + 1. Squash and merge your branch into `master`, for example: + ```bash gh pr merge "prep-$P_VERSION-release" --squash --subject "chore: prep next release $P_VERSION" ``` + 1. Pat yourself on the back for a job well done. diff --git a/plugin/evm/README.md b/plugin/evm/README.md index cb180bafc7..688cfbcd5f 100644 --- a/plugin/evm/README.md +++ b/plugin/evm/README.md @@ -8,16 +8,16 @@ The VM creates the Ethereum backend and provides basic block building, parsing, ## APIs -The VM creates APIs for the node through the function `CreateHandlers()`. CreateHandlers returns the `Service` struct to serve Subnet-EVM specific APIs. Additionally, the Ethereum backend APIs are also returned at the `/rpc` extension. +The VM creates APIs for the node through the function `CreateHandlers()`. CreateHandlers returns the `Service` struct to serve subnet-evm specific APIs. Additionally, the Ethereum backend APIs are also returned at the `/rpc` extension. 
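As a concrete illustration of consuming these handlers (not part of the VM code itself): once a node is running, the standard `eth` namespace served at the `/rpc` extension can be reached with any Ethereum JSON-RPC client. The sketch below assumes a local node on the default port `9650`; `BLOCKCHAIN_ID` is a placeholder for your chain's blockchain ID, and go-ethereum's `ethclient` is used purely for illustration.

```go
// Minimal sketch (assumptions: local node on :9650, placeholder blockchain ID).
// Queries the eth namespace exposed at the /ext/bc/<blockchainID>/rpc extension.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Replace BLOCKCHAIN_ID with the blockchain ID of the deployed Subnet-EVM instance.
	client, err := ethclient.Dial("http://127.0.0.1:9650/ext/bc/BLOCKCHAIN_ID/rpc")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := context.Background()
	chainID, err := client.ChainID(ctx)
	if err != nil {
		log.Fatal(err)
	}
	height, err := client.BlockNumber(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("chain ID %s, current height %d\n", chainID, height)
}
```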
## Block Handling -The VM implements `buildBlock`, `parseBlock`, and `getBlock` and uses the `chain` package from AvalancheGo to construct a metered state, which uses these functions to implement an efficient caching layer and maintain the required invariants for blocks that get returned to the consensus engine. +The VM implements `buildBlock`, `parseBlock`, and `getBlock` which are used by the `chain` package from AvalancheGo to construct a metered state. The metered state wraps blocks returned by these functions with an efficient caching layer and maintains the required invariants for blocks that get returned to the consensus engine. -To do this, the VM uses a modified version of the Ethereum RLP block type [here](../../core/types/block.go) and uses the core package's BlockChain type [here](../../core/blockchain.go) to handle the insertion and storage of blocks into the chain. +The VM uses the block type from [`libevm/core/types`](https://github.com/ava-labs/libevm/tree/master/core/types) and extends it with Avalanche-specific fields (such as `ExtDataHash`, `BlockGasCost`, and `Version`) using libevm's extensibility mechanism (defined in [`customtypes`](customtypes/)), then wraps it with [`wrappedBlock`](wrapped_block.go) to implement the AvalancheGo Block interface. The core package's BlockChain type in [blockchain.go](../../core/blockchain.go) handles the insertion and storage of blocks into the chain. ## Block The Block type implements the AvalancheGo ChainVM Block interface. The key functions for this interface are `Verify()`, `Accept()`, `Reject()`, and `Status()`. -The Block type wraps the stateless block type [here](../../core/types/block.go) and implements these functions to allow the consensus engine to verify blocks as valid, perform consensus, and mark them as accepted or rejected. See the documentation in AvalancheGo for the more detailed VM invariants that are maintained here. +The Block type (implemented as [`wrappedBlock`](wrapped_block.go)) wraps the block type from [`libevm/core/types`](https://github.com/ava-labs/libevm/tree/master/core/types) and implements these functions to allow the consensus engine to verify blocks as valid, perform consensus, and mark them as accepted or rejected. Blocks contain standard Ethereum transactions that enable cross-chain asset transfers. Blocks may also include optional block extensions for extensible VM functionality. See the documentation in AvalancheGo for the more detailed VM invariants that are maintained here. diff --git a/plugin/evm/config/config.md b/plugin/evm/config/config.md index 3db78b0fba..1bc7377403 100644 --- a/plugin/evm/config/config.md +++ b/plugin/evm/config/config.md @@ -55,7 +55,7 @@ Configuration is provided as a JSON object. All fields are optional unless other | `api-max-duration` | duration | Maximum duration for API calls (0 = no limit) | `0` | | `api-max-blocks-per-request` | int64 | Maximum number of blocks per getLogs request (0 = no limit) | `0` | | `http-body-limit` | uint64 | Maximum size of HTTP request bodies | - | -| `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. For no limit, set either this or `batch-response-max-size` to 0 | `1000` | +| `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. For no limit, set either this or `batch-response-max-size` to 0 | `1000` | | `batch-response-max-size` | uint64 | Maximum size (in bytes) of response that can be returned from a batched RPC call. 
For no limit, set either this or `batch-request-limit` to 0. Defaults to `25 MB`| `1000` | ### WebSocket Settings @@ -105,6 +105,8 @@ Configuration is provided as a JSON object. All fields are optional unless other ## Pruning and State Management + > **Note**: If a node is ever run with `pruning-enabled` as `false` (archival mode), setting `pruning-enabled` to `true` will result in a warning and the node will shut down. This is to protect against unintentional misconfigurations of an archival node. To override this and switch to pruning mode, in addition to `pruning-enabled: true`, `allow-missing-tries` should be set to `true` as well. + ### Basic Pruning | Option | Type | Description | Default | @@ -123,6 +125,8 @@ Configuration is provided as a JSON object. All fields are optional unless other ### Offline Pruning +> **Note**: If offline pruning is enabled it will run on startup and block until it completes (approximately one hour on Mainnet). This will reduce the size of the database by deleting old trie nodes. **While performing offline pruning, your node will not be able to process blocks and will be considered offline.** While ongoing, the pruning process consumes a small amount of additional disk space (for deletion markers and the bloom filter). For more information see the [disk space considerations documentation](https://build.avax.network/docs/nodes/maintain/reduce-disk-usage#disk-space-considerations). Since offline pruning deletes old state data, this should not be run on nodes that need to support archival API requests. This is meant to be run manually, so after running with this flag once, it must be toggled back to false before running the node again. Therefore, you should run with this flag set to true and then set it to false on the subsequent run. + | Option | Type | Description | Default | |--------|------|-------------|---------| | `offline-pruning-enabled` | bool | Enable offline pruning | `false` | @@ -223,6 +227,8 @@ Configuration is provided as a JSON object. All fields are optional unless other ### State Sync +> **Note:** If state-sync is enabled, the peer will download chain state from peers up to a recent block near tip, then proceed with normal bootstrapping. Please note that if you need historical data, state sync isn't the right option. However, it is sufficient if you are just running a validator. + | Option | Type | Description | Default | |--------|------|-------------|---------| | `state-sync-enabled` | bool | Enable state sync | `false` | @@ -251,7 +257,7 @@ Failing to set these options will result in errors on VM initialization. 
Additio | `database-config-file` | string | Path to database configuration file | - | | `use-standalone-database` | bool | Use standalone database instead of shared one | - | | `inspect-database` | bool | Inspect database on startup | `false` | -| `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash` or `firewood` | `hash` | +| `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash` or `firewood` | `hash` | ## Transaction Indexing diff --git a/precompile/contracts/warp/README.md b/precompile/contracts/warp/README.md index ea270e7bec..c0d40ab1aa 100644 --- a/precompile/contracts/warp/README.md +++ b/precompile/contracts/warp/README.md @@ -25,7 +25,7 @@ The Avalanche Warp Precompile enables this flow to send a message from blockchai ### Warp Precompile -The Warp Precompile is broken down into three functions defined in the Solidity interface file [here](../../../contracts/contracts/interfaces/IWarpMessenger.sol). +The Warp Precompile is broken down into three functions defined in the Solidity interface file [IWarpMessenger.sol](../../../contracts/contracts/interfaces/IWarpMessenger.sol). #### sendWarpMessage @@ -59,7 +59,7 @@ This leads to the following advantages: 1. The EVM execution does not need to verify the Warp Message at runtime (no signature verification or external calls to the P-Chain) 2. The EVM can deterministically re-execute and re-verify blocks assuming the predicate was verified by the network (e.g., in bootstrapping) -This pre-verification is performed using the ProposerVM Block header during [block verification](../../../plugin/evm/block.go#L355) & [block building](../../../miner/worker.go#L200). +This pre-verification is performed using the ProposerVM Block header during [block verification](../../../plugin/evm/wrapped_block.go) & [block building](../../../miner/worker.go). #### getBlockchainID @@ -67,7 +67,7 @@ This pre-verification is performed using the ProposerVM Block header during [blo This is different from the conventional Ethereum ChainID registered to [ChainList](https://chainlist.org/). -The `blockchainID` in Avalanche refers to the txID that created the blockchain on the Avalanche P-Chain ([docs](https://docs.avax.network/specs/platform-transaction-serialization#unsigned-create-chain-tx)). +The `sourceChainID` in Avalanche refers to the txID that created the blockchain on the Avalanche P-Chain ([docs](https://build.avax.network/docs/cross-chain/avalanche-warp-messaging/deep-dive#icm-serialization)). ### Predicate Encoding @@ -75,7 +75,7 @@ Avalanche Warp Messages are encoded as a signed Avalanche [Warp Message](https:/ Since the predicate is encoded into the [Transaction Access List](https://eips.ethereum.org/EIPS/eip-2930), it is packed into 32 byte hashes intended to declare storage slots that should be pre-warmed into the cache prior to transaction execution. -Therefore, we use the [Predicate Utils](https://github.com/ava-labs/subnet-evm/blob/master/predicate/Predicate.md) package to encode the actual byte slice of size N into the access list. +Therefore, we use the [`predicate`](https://github.com/ava-labs/avalanchego/tree/master/vms/evm/predicate) package to encode the actual byte slice of size N into the access list. 
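To make the packing step concrete, the sketch below shows one way an arbitrary byte slice can be chunked into the 32-byte, storage-key-sized words that an EIP-2930 access list carries. This is an illustration only, not the `predicate` package's exact wire format (the real encoding also delimits and pads the payload so its original length can be recovered); `common.Hash` from go-ethereum is used here simply as a convenient 32-byte type.

```go
// Illustrative only: chunk a byte slice into 32-byte words for an access list.
// This is not the predicate package's exact encoding, which also appends a
// delimiter and padding so the original payload length can be recovered.
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

// packToWords splits msg into 32-byte words, zero-padding the final word.
func packToWords(msg []byte) []common.Hash {
	numWords := (len(msg) + common.HashLength - 1) / common.HashLength
	words := make([]common.Hash, numWords)
	for i := range words {
		start := i * common.HashLength
		end := start + common.HashLength
		if end > len(msg) {
			end = len(msg)
		}
		copy(words[i][:], msg[start:end]) // any remaining bytes stay zero
	}
	return words
}

func main() {
	payload := []byte("signed Avalanche Warp Message bytes would go here")
	for i, w := range packToWords(payload) {
		fmt.Printf("access list entry %d: %s\n", i, w.Hex())
	}
}
```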
### Performance Optimization: Primary Network to Avalanche L1 @@ -85,7 +85,7 @@ The Primary Network has a large validator set compared to most Subnets and L1s, Recall that Avalanche Subnet validators must also validate the Primary Network, so it tracks all of the blockchains in the Primary Network (X, C, and P-Chains). -When an Avalanche Subnet receives a message from a blockchain on the Primary Network, we use the validator set of the receiving Subnet instead of the entire network when validating the message. +When an Avalanche Subnet receives a message from a blockchain on the Primary Network, we use the validator set of the receiving Subnet instead of the entire network when validating the message. Sending messages from the X, C, or P-Chain remains unchanged. However, when the Subnet receives the message, it changes the semantics to the following: diff --git a/sync/README.md b/sync/README.md index 97efdaa305..1ee1473208 100644 --- a/sync/README.md +++ b/sync/README.md @@ -1,6 +1,7 @@ # State sync ## Overview + Normally, a node joins the network through bootstrapping: First it fetches all blocks from genesis to the chain's last accepted block from peers, then it applies the state transition specified in each block to reach the state necessary to join consensus. State sync is an alternative in which a node downloads the state of the chain from its peers at a specific _syncable_ block height. Then, the node processes the rest of the chain's blocks (from syncable block to tip) via normal bootstrapping. @@ -8,18 +9,22 @@ Blocks at heights divisible by `defaultSyncableInterval` (= 16,384 or 2**14) are _Note: `defaultSyncableInterval` must be divisible by `CommitInterval` (= 4096). This is so the state corresponding to syncable blocks is available on nodes with pruning enabled._ State sync is faster than bootstrapping and uses less bandwidth and computation: + - Nodes joining the network do not process all the state transitions. - The amount of data sent over the network is proportionate to the amount of state not the chain's length _Note: nodes joining the network through state sync will not have historical state prior to the syncable block._ ## What is the chain state? + The node needs the following data from its peers to continue processing blocks from a syncable block: -- Accounts trie & storage tries for all accounts (at the state root corresponding to the syncable block), -- Contract code referenced in the account trie, + +- Accounts trie & storage tries for all accounts (at the state root corresponding to the syncable block) +- Contract code referenced in the account trie - 256 parents of the syncable block (required for the BLOCKHASH opcode) ## Code structure + State sync code is structured as follows: - `sync/handlers`: Nodes that have joined the network are expected to respond to valid requests for the chain state: @@ -35,8 +40,8 @@ State sync code is structured as follows: - `peer`: Contains abstractions used by `sync/statesync` to send requests to peers (`AppRequest`) and receive responses from peers (`AppResponse`). - `message`: Contains structs that are serialized and sent over the network during state sync. 
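As a small illustration of the height arithmetic described in the overview above (constant values are taken from the prose; the names in the actual code may differ), the snippet below derives the most recent syncable height at or below a given chain height and checks the divisibility invariant:

```go
// Illustration of the syncable-height arithmetic described in this README.
// Constant values come from the prose; actual code names may differ.
package main

import "fmt"

const (
	syncableInterval = 16_384 // 2**14: heights divisible by this are syncable
	commitInterval   = 4_096  // syncableInterval must be a multiple of this
)

// lastSyncableHeight returns the greatest height <= height divisible by
// syncableInterval, i.e. the most recent block eligible as a sync target.
func lastSyncableHeight(height uint64) uint64 {
	return (height / syncableInterval) * syncableInterval
}

func main() {
	fmt.Println(syncableInterval%commitInterval == 0) // true: required invariant
	fmt.Println(lastSyncableHeight(5_000_000))        // 4997120
}
```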
- ## Sync summaries & engine involvement + When a new node wants to join the network via state sync, it will need a few pieces of information as a starting point so it can make valid requests to its peers: - Number (height) and hash of the latest available syncable block, @@ -44,22 +49,24 @@ When a new node wants to join the network via state sync, it will need a few pie The above information is called a _state summary_, and each syncable block corresponds to one such summary (see `message.Summary`). The engine and VM interact as follows to find a syncable state summary: - 1. The engine calls `StateSyncEnabled`. The VM returns `true` to initiate state sync, or `false` to start bootstrapping. In `subnet-evm`, this is controlled by the `state-sync-enabled` flag. 1. The engine calls `GetOngoingSyncStateSummary`. If the VM has a previously interrupted sync to resume it returns that summary. Otherwise, it returns `ErrNotFound`. By default, `subnet-evm` will resume an interrupted sync. -1. The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. The messaging flow is documented [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). +1. The engine samples peers for their latest available summaries, then verifies the correctness and availability of each sampled summary with validators. The messaging flow is documented in the [block engine README](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/README.md). 1. The engine calls `Accept` on the chosen summary. The VM may return `false` to skip syncing to this summary (`subnet-evm` skips state sync for less than `defaultStateSyncMinBlocks = 300_000` blocks). If the VM decides to perform the sync, it must return `true` without blocking and fetch the state from its peers asynchronously. 1. The VM sends `common.StateSyncDone` on the `toEngine` channel on completion. 1. The engine calls `VM.SetState(Bootstrapping)`. Then, blocks after the syncable block are processed one by one. ## Syncing state + The following steps are executed by the VM to sync its state from peers (see `stateSyncClient.StateSync`): + 1. Wipe snapshot data 1. Sync 256 parents of the syncable block (see `BlockRequest`), 1. Sync the EVM state: account trie, code, and storage tries, 1. Update in-memory and on-disk pointers. Steps 3 and 4 involve syncing tries. To sync trie data, the VM will send a series of `LeafRequests` to its peers. Each request specifies: + - Type of trie (`NodeType`): - `statesync.StateTrieNode` (account trie and storage tries share the same database) - `Root` of the trie to sync, @@ -68,17 +75,20 @@ Steps 3 and 4 involve syncing tries. To sync trie data, the VM will send a serie Peers responding to these requests send back trie leafs (key/value pairs) beginning at `Start` and up to `End` (or a maximum number of leafs). The response must also contain include a merkle proof for the range of leafs it contains. Nodes serving state sync data are responsible for constructing these proofs (see `sync/handlers/leafs_request.go`) `client.GetLeafs` handles sending a single request and validating the response. This method will retry the request from a different peer up to `maxRetryAttempts` (= 32) times if the peer's response is: + - malformed, - does not contain a valid merkle proof, -- or is not received in time. - +- not received in time. 
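To make the retry policy above concrete, here is a hypothetical, simplified sketch of a leaf request and its retry loop. The field and function names mirror the prose, not the actual code (the real definitions live in the `message` and sync client packages and include more fields, such as the trie `NodeType`), and the `send`/`verify` callbacks stand in for the actual networking and range-proof verification.

```go
// Hypothetical sketch of the leaf-request retry policy described above.
// Field and function names mirror the prose, not the actual code.
package sketch

import (
	"context"
	"fmt"
)

type LeafRequest struct {
	Root  [32]byte // root of the trie to sync
	Start []byte   // first key to return
	End   []byte   // last key to return
}

type LeafResponse struct {
	Keys, Vals [][]byte
	Proof      [][]byte // merkle range proof covering the returned leafs
}

const maxRetryAttempts = 32

// getLeafs re-sends the request (to a different peer on each attempt) until a
// well-formed response with a valid range proof arrives, giving up after
// maxRetryAttempts.
func getLeafs(
	ctx context.Context,
	req LeafRequest,
	send func(context.Context, LeafRequest) (LeafResponse, error),
	verify func(LeafRequest, LeafResponse) error,
) (LeafResponse, error) {
	var lastErr error
	for attempt := 0; attempt < maxRetryAttempts; attempt++ {
		resp, err := send(ctx, req)
		if err == nil {
			if err = verify(req, resp); err == nil {
				return resp, nil
			}
		}
		lastErr = err // malformed, invalid proof, or timed out: try another peer
	}
	return LeafResponse{}, fmt.Errorf("leaf request failed after %d attempts: %w", maxRetryAttempts, lastErr)
}
```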
If there are more leafs in a trie than can be returned in a single response, the client will make successive requests to continue fetching data (with `Start` set to the last key received) until the trie is complete. `CallbackLeafSyncer` manages this process and does a callback on each batch of received leafs. ### EVM state: Account trie, code, and storage tries + `sync/statesync.stateSyncer` uses `CallbackLeafSyncer` to sync the account trie. When the leaf callback is invoked, each leaf represents an account: + - If the account has contract code, it is requested from peers using `client.GetCode` - If the account has a storage root, it is added to the list of trie roots returned from the callback. `CallbackLeafSyncer` has `defaultNumThreads` (= 4) goroutines to fetch these tries concurrently. + If the account trie encounters a new storage trie task and there are already 4 in-progress trie tasks (1 for the account trie and 3 for in-progress storage trie tasks), then the account trie worker will block until one of the storage trie tasks finishes and it can create a new task. When an account leaf is received, it is converted to `SlimRLP` format and written to the snapshot. @@ -88,16 +98,17 @@ When the trie is complete, an `OnFinish` callback is called and we hash any rema When a storage trie leaf is received, it is stored in the account's storage snapshot. A `StackTrie` is used here to reconstruct intermediary trie nodes & root as well. ### Updating in-memory and on-disk pointers + `plugin/evm.stateSyncClient.StateSyncSetLastSummaryBlock` is the last step in state sync. Once the tries have been synced, this method: -- Verifies the block the engine has received matches the expected block hash and block number in the summary, -- Adds a checkpoint to the `core.ChainIndexer` (to avoid indexing missing blocks) -- Resets in-memory and on disk pointers on the `core.BlockChain` struct. -- Updates VM's last accepted block. - +1. Verifies the block the engine has received matches the expected block hash and block number in the summary, +1. Adds a checkpoint to the `core.ChainIndexer` (to avoid indexing missing blocks) +1. Resets in-memory and on disk pointers on the `core.BlockChain` struct +1. Updates VM's last accepted block ## Resuming a partial sync operation + While state sync is faster than normal bootstrapping, the process may take several hours to complete. In case the node is shut down in the middle of a state sync, progress on syncing the account trie and storage tries is preserved: - When starting a sync, `stateSyncClient` persists the state summary to disk. This is so if the node is shut down while the sync is ongoing, this summary can be found and returned to the engine from `GetOngoingSyncStateSummary` upon node restart. @@ -114,4 +125,4 @@ While state sync is faster than normal bootstrapping, the process may take sever | `state-sync-skip-resume` | `bool` | set to true to avoid resuming an ongoing sync | `false` | | `state-sync-min-blocks` | `uint64` | Minimum number of blocks the chain must be ahead of local state to prefer state sync over bootstrapping | `300,000` | | `state-sync-server-trie-cache` | `int` | Size of trie cache to serve state sync data in MB. Should be set to multiples of `64`. | `64` | -| `state-sync-ids` | `string` | a comma separated list of `NodeID-` prefixed node IDs to sync data from. If not provided, peers are randomly selected. | | \ No newline at end of file +| `state-sync-ids` | `string` | a comma separated list of `NodeID-` prefixed node IDs to sync data from. 
If not provided, peers are randomly selected. | | diff --git a/tests/README.md b/tests/README.md index 54cc0497d0..04350921d4 100644 --- a/tests/README.md +++ b/tests/README.md @@ -43,11 +43,9 @@ test run, require binary dependencies. One way of making these dependencies avai to use a nix shell which will give access to the dependencies expected by the test tooling: - - Install [nix](https://nixos.org/). The [determinate systems - installer](https://github.com/DeterminateSystems/nix-installer?tab=readme-ov-file#install-nix) - is recommended. - - Use ./scripts/dev_shell.sh to start a nix shell - - Execute the dependency-requiring command (e.g. `ginkgo -v ./tests/warp -- --start-collectors`) +- Install [nix](https://nixos.org/). The [determinate systems installer](https://github.com/DeterminateSystems/nix-installer?tab=readme-ov-file#install-nix) is recommended. +- Use ./scripts/dev_shell.sh to start a nix shell +- Execute the dependency-requiring command (e.g. `ginkgo -v ./tests/warp -- --start-collectors`) This repo also defines a `.envrc` file to configure [devenv](https://direnv.net/). With `devenv` and `nix` installed, a shell at the root of the repo will automatically start a nix dev diff --git a/tests/antithesis/README.md b/tests/antithesis/README.md index 0924b9f63d..9f0c39b689 100644 --- a/tests/antithesis/README.md +++ b/tests/antithesis/README.md @@ -1,7 +1,7 @@ # Antithesis Testing This package supports testing with -[Antithesis](https://antithesis.com/docs/introduction/introduction.html), +[Antithesis](https://antithesis.com/docs/), a SaaS offering that enables deployment of distributed systems (such as Avalanche) to a deterministic and simulated environment that enables discovery and reproduction of anomalous behavior.