A Rust CLI that fetches WASM components (typically from IPFS) and executes them
under WASI Preview 2. The ww run command wires host stdio directly to the
guest, so operators can interact with components exactly as if they were local.
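Because host stdio is inherited by the guest, a component can use ordinary `std::io` with no host-specific glue. Below is a hypothetical minimal echo guest, a sketch only (the real entry point lives in `examples/default-kernel`, and the `guest:` prefix is illustrative):

```rust
// Hypothetical echo guest: build with `cargo build --target wasm32-wasip2`.
// Because ww run wires host stdio straight through, plain std::io works.
use std::io::{self, BufRead, Write};

// Factored out so the line-handling logic is easy to test natively.
fn respond(line: &str) -> String {
    format!("guest: {line}")
}

fn main() {
    let stdin = io::stdin();
    let mut stdout = io::stdout();
    for line in stdin.lock().lines() {
        let line = line.expect("stdin read failed");
        writeln!(stdout, "{}", respond(&line)).expect("stdout write failed");
    }
}
```

The same binary runs natively or under `ww run`; only the compilation target changes.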
- IPFS DHT Bootstrap: Automatically discovers and connects to IPFS peers from local Kubo node
- Protocol Compatibility: Uses standard IPFS protocols (`/ipfs/kad/1.0.0`, `/ipfs/id/1.0.0`) for full network compatibility
- RSA Key Support: Includes RSA support for connecting to legacy IPFS peers
- libp2p Host: Generates an Ed25519 identity and listens on TCP with IPFS-compatible protocols
- DHT Operations: Participates in IPFS DHT operations (provide/query) after bootstrap
- Structured Logging: Comprehensive logging with configurable levels and performance metrics
- Kubo (IPFS) daemon running locally:

  ```sh
  kubo daemon
  ```

- Rust toolchain:

  ```sh
  rustup install stable
  rustup default stable
  ```
The application uses a subcommand structure. The main command is `ww`, with a `run` subcommand for starting a wetware node.
```
ww <COMMAND>

Commands:
  run   Run a wetware node
  help  Print this message or the help of the given subcommand(s)
```

- Start the Kubo daemon (in a separate terminal):

  ```sh
  kubo daemon
  ```

- Run the application using the `run` subcommand:

  ```sh
  # Use defaults (http://localhost:5001, info log level)
  cargo run -- run

  # Custom IPFS endpoint
  cargo run -- run --ipfs http://127.0.0.1:5001
  cargo run -- run --ipfs http://192.168.1.100:5001

  # Custom log level
  cargo run -- run --loglvl debug
  cargo run -- run --loglvl trace

  # Combine both
  cargo run -- run --ipfs http://192.168.1.100:5001 --loglvl debug

  # Or use environment variables
  export WW_IPFS=http://192.168.1.100:5001
  export WW_LOGLVL=debug
  cargo run -- run
  ```
The run subcommand supports the following options:
- `--ipfs <IPFS>`: IPFS node HTTP API endpoint (e.g., `http://127.0.0.1:5001`)
- `--loglvl <LEVEL>`: Log level (`trace`, `debug`, `info`, `warn`, `error`)
- `--preset <PRESET>`: Use a preset configuration (`minimal`, `development`, `production`)
- `--env-config`: Use configuration from environment variables
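Since the endpoint can come from a flag, an environment variable, or a built-in default, the resolution order matters. The sketch below assumes the conventional precedence (flag over `WW_IPFS` over default); the actual order used by `ww` is not spelled out here, so treat this as illustrative:

```rust
use std::env;

// Assumed precedence (not confirmed by the CLI help): an explicit --ipfs flag
// beats the WW_IPFS environment variable, which beats the built-in default.
fn resolve_ipfs_endpoint(cli_flag: Option<&str>) -> String {
    cli_flag
        .map(str::to_string)
        .or_else(|| env::var("WW_IPFS").ok())
        .unwrap_or_else(|| "http://localhost:5001".to_string())
}

fn main() {
    // A flag on the command line wins over everything else.
    println!("{}", resolve_ipfs_endpoint(Some("http://127.0.0.1:5001")));
}
```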
WASM examples are built automatically when you run `make` or `make examples`. The `default-kernel` example is included by default.
Note: The `wasm32-wasip2` target is currently nightly-only. Inside `examples/default-kernel/`, run `rustup override set nightly` (once) and `rustup target add wasm32-wasip2` before building the guest.
Quick start:

```sh
# Build everything (including examples)
make

# Run the default-kernel example
./target/release/ww run /boot --volume examples/default-kernel/target/wasm32-wasip2/release:/boot
```

To test `ww run` with the default-kernel example:
- Build the WASM file (already done if you ran `make`):

  ```sh
  make example-default-kernel
  ```

  Or run directly from the example directory:

  ```sh
  make -C examples/default-kernel build
  ```

  The example Makefile handles building and ensures the output is named `main.wasm`.

- Run with a local filesystem mount:

  ```sh
  cargo run -- run /app \
    --volume examples/default-kernel/target/wasm32-wasip2/release:/app
  ```

- Export to IPFS and run from IPFS (optional):

  ```sh
  # Build and export to IPFS
  make example-default-kernel-ipfs
  # Or run directly:
  make -C examples/default-kernel ipfs

  # This will output an IPFS hash like: /ipfs/QmHash...
  # Then run with:
  cargo run -- run /ipfs/QmHash...
  ```
Note: The `ww run` command expects `{path}/main.wasm`, so when using volume mounts, ensure `main.wasm` exists in the mounted directory. The example Makefiles handle this automatically.
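The lookup rule above amounts to joining `main.wasm` onto whatever path the positional argument names. A std-only sketch of that assumed resolution (the function name is hypothetical, not part of the `ww` codebase):

```rust
use std::path::{Path, PathBuf};

// Sketch of the assumed rule: the positional argument names the mount (or
// IPFS path) and the component is always resolved as {path}/main.wasm.
fn component_path(root: &str) -> PathBuf {
    Path::new(root).join("main.wasm")
}

fn main() {
    println!("{}", component_path("/app").display()); // /app/main.wasm
}
```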
The project includes a multi-stage Docker build for containerized deployment and distribution.
```sh
# Build the container image
make podman-build
# or
podman build -t wetware:latest .

# Run the container
make podman-run
# or
podman run --rm -it wetware:latest

# Clean up container images
make podman-clean
```

- Multi-stage build: Optimizes image size by separating build and runtime stages
- Security: Runs as a non-root user (`wetware`)
- Efficient caching: Leverages container layer caching for faster builds
- Minimal runtime: Based on Debian Bookworm slim for a smaller footprint
Note: When running the container, you'll need to use the `run` subcommand:

```sh
# Run the container with the run subcommand
podman run --rm -it wetware:latest run

# With custom options
podman run --rm -it wetware:latest run --ipfs http://host.docker.internal:5001 --loglvl debug
```

Create a `docker-compose.yml` for easy development (works with both Docker and Podman):
```yaml
version: '3.8'
services:
  wetware:
    build: .
    ports:
      - "8080:8080"
    environment:
      - WW_IPFS=http://host.docker.internal:5001
      - WW_LOGLVL=info
    volumes:
      - ./config:/app/config
    command: ["run"]  # Use the run subcommand
```

The project includes GitHub Actions workflows for automated testing, building, and publishing.
- Automated Testing: Runs on every push and pull request
- Code Quality: Includes formatting checks and clippy linting
- Release Automation: Automatically builds and publishes artifacts on releases
- Docker Integration: Builds and pushes Docker images to registry
- Artifact Publishing: Creates distributable binaries and archives
- Create a GitHub release with a semantic version tag (e.g., `v1.0.0`)
- The workflow automatically:
  - Builds the Rust application
  - Creates release artifacts (binary + tarball)
  - Builds and pushes Docker images
  - Uploads artifacts to GitHub releases
For Docker publishing, set these repository secrets:
- `DOCKER_USERNAME`: Your Docker Hub username
- `DOCKER_PASSWORD`: Your Docker Hub access token
```sh
# Test only
gh workflow run rust.yml --ref main

# Build Docker image (on main branch)
gh workflow run rust.yml --ref main
```

The application uses structured logging with the `tracing` crate. You can configure log levels using environment variables:
- `WW_IPFS`: IPFS node HTTP API endpoint (defaults to `http://localhost:5001`)

  ```sh
  # Use default localhost endpoint
  export WW_IPFS=http://localhost:5001

  # Use custom IPFS node
  export WW_IPFS=http://192.168.1.100:5001

  # Use remote IPFS node
  export WW_IPFS=https://ipfs.example.com:5001
  ```

- `WW_LOGLVL`: Controls the log level (`trace`, `debug`, `info`, `warn`, `error`)

  ```sh
  # Set log level for all components
  export WW_LOGLVL=info

  # More verbose logging
  export WW_LOGLVL=debug
  export WW_LOGLVL=trace

  # Only show warnings and errors
  export WW_LOGLVL=warn
  export WW_LOGLVL=error
  ```
- `error`: Errors that need immediate attention
- `warn`: Warnings about potential issues
- `info`: General information about application flow
- `debug`: Detailed debugging information
- `trace`: Very detailed tracing (very verbose)
- Configuration: Determines IPFS endpoint from command line, environment variable, or default
- Peer Discovery: Queries the configured IPFS node's HTTP API to discover connected peers
- Host Creation: Generates Ed25519 keypair and creates libp2p swarm with IPFS-compatible protocols
- DHT Bootstrap: Adds discovered peers to Kademlia routing table and establishes connections
- Network Integration: Joins the IPFS DHT network and participates in DHT operations
- DHT Operations: Can provide content and query for providers in the IPFS network
The application implements a sophisticated DHT bootstrap process:
- Peer Discovery: Queries the local Kubo node's `/api/v0/swarm/peers` endpoint to discover connected peers
- Routing Table Population: Adds discovered peers to the Kademlia routing table before establishing connections
- Connection Establishment: Dials discovered peers to establish TCP connections
- Protocol Handshake: Performs identify and Kademlia protocol handshakes using standard IPFS protocols
- Bootstrap Trigger: Triggers the Kademlia bootstrap process to populate the routing table
- Network Participation: Begins participating in DHT operations (provide/query)
This approach ensures rapid integration into the IPFS network by leveraging the local Kubo node's peer knowledge.
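The peer-discovery step can be sketched without any dependencies. The extractor below hand-parses an assumed `/api/v0/swarm/peers` response shape for illustration only; the real code deserializes the JSON properly (e.g., with `serde`):

```rust
// Pull every value for `key` out of a flat JSON string; std-only for the sketch.
fn extract_field(json: &str, key: &str) -> Vec<String> {
    let needle = format!("\"{key}\":\"");
    let mut out = Vec::new();
    let mut rest = json;
    while let Some(i) = rest.find(&needle) {
        let start = i + needle.len();
        match rest[start..].find('"') {
            Some(end) => {
                out.push(rest[start..start + end].to_string());
                rest = &rest[start + end..];
            }
            None => break,
        }
    }
    out
}

fn main() {
    // Assumed response shape from Kubo's /api/v0/swarm/peers endpoint.
    let resp = r#"{"Peers":[{"Addr":"/ip4/203.0.113.7/tcp/4001","Peer":"12D3KooWExample"}]}"#;
    let addrs = extract_field(resp, "Addr");
    let peers = extract_field(resp, "Peer");
    for (addr, peer) in addrs.iter().zip(&peers) {
        // A dialable multiaddr to seed the Kademlia routing table.
        println!("{addr}/p2p/{peer}");
    }
}
```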
The application is designed for full IPFS network compatibility:
- Kademlia DHT: Uses the `/ipfs/kad/1.0.0` protocol for DHT operations
- Identify: Uses the `/ipfs/id/1.0.0` protocol for peer identification
- Transport: Supports TCP with Noise encryption and Yamux multiplexing
- Key Types: Supports both Ed25519 (modern) and RSA (legacy) key types
- Multiaddr: Handles standard IPFS multiaddresses with peer IDs
This ensures the application can communicate with any IPFS node in the network, regardless of its specific configuration.
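As a small illustration of the multiaddr handling mentioned above, the peer ID sits after the `/p2p/` component of a standard IPFS multiaddr. The split below is a std-only sketch; the real code uses libp2p's `Multiaddr` type for this parsing:

```rust
// Std-only sketch: extract the trailing peer ID from an IPFS multiaddr.
fn peer_id(addr: &str) -> Option<&str> {
    addr.rsplit_once("/p2p/").map(|(_, id)| id)
}

fn main() {
    let addr = "/ip4/203.0.113.7/tcp/4001/p2p/12D3KooWExample";
    println!("{:?}", peer_id(addr)); // Some("12D3KooWExample")
}
```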
- "IPFS API file not found": Make sure Kubo is running (`kubo daemon`)
- Connection errors: Check if Kubo is listening on the expected port and endpoint
- DHT bootstrap failures: Ensure Kubo has peers and the API endpoint is correct
- Protocol compatibility: The application uses standard IPFS protocols for full compatibility
- RSA connection errors: RSA support is included for legacy IPFS peers
- Configuration issues: Check the `WW_IPFS` environment variable for the correct IPFS endpoint
- Logging issues: Check the `WW_LOGLVL` environment variable and ensure tracing is properly initialized
- `libp2p`: P2P networking stack with IPFS protocol support
- `libp2p-kad`: Kademlia DHT implementation for IPFS compatibility
- `libp2p-identify`: Peer identification protocol for IPFS compatibility
- `reqwest`: HTTP client for Kubo API integration
- `tokio`: Async runtime for concurrent operations
- `anyhow`: Error handling and propagation
- `serde`: JSON serialization/deserialization for API responses
- `tracing`: Structured logging framework with performance metrics
- `tracing-subscriber`: Logging subscriber with environment-based configuration