
Releases: ipfs/kubo

v0.38.1 (08 Oct 21:34, commit 6bf52ae)

Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.38 simplifies content announcement configuration, introduces an experimental sweeping DHT provider for efficient large-scale operations, and includes various performance improvements.

v0.38.1 includes fixes for repo migrations on Windows and for the Pebble datastore; if you are using either, make sure to use the .1 release.

🔦 Highlights

🚀 Repository migration: simplified provide configuration

This release migrates the repository from version 17 to version 18, simplifying how you configure content announcements.

The old Provider and Reprovider sections are now combined into a single Provide section. Your existing settings are automatically migrated - no manual changes needed.

Migration happens automatically when you run ipfs daemon --migrate. For manual migration: ipfs repo migrate --to=18.
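For illustration, a minimal sketch of the manual path (the grep filter and its context size are illustrative):

ipfs repo migrate --to=18
ipfs config show | grep -A 10 '"Provide"'   # the merged Provide section replaces Provider and Reprovider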

Read more about the new system below.

🧹 Experimental Sweeping DHT Provider

A new experimental DHT provider is available as an alternative to both the default provider and the resource-intensive accelerated DHT client. Enable it via Provide.DHT.SweepEnabled.
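A minimal sketch of opting in (assumes the usual daemon restart for config changes to take effect):

ipfs config --json Provide.DHT.SweepEnabled true
ipfs daemon   # restart so the sweep provider replaces the legacy one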

How it works: Instead of providing keys one-by-one, the sweep provider systematically explores DHT keyspace regions in batches.

[Diagram: Reprovide Cycle Comparison]

The diagram shows how sweep mode avoids the hourly traffic spikes of Accelerated DHT while maintaining similar effectiveness. By grouping CIDs into keyspace regions and processing them in batches, sweep mode reduces memory overhead and creates predictable network patterns.

Benefits for large-scale operations: Handles hundreds of thousands of CIDs with reduced memory and network connections, spreads operations evenly to eliminate resource spikes, maintains state across restarts through persistent keystore, and provides better metrics visibility.

Monitoring and debugging: Legacy mode (SweepEnabled=false) tracks provider_reprovider_provide_count and provider_reprovider_reprovide_count, while sweep mode (SweepEnabled=true) tracks total_provide_count_total. Enable debug logging with GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug to see detailed logs from either system.
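For example, to start the daemon with the suggested log levels:

GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug ipfs daemon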

Note

This feature is experimental and opt-in. In the future, it will become the default and replace the legacy system. Some commands like ipfs stats provide and ipfs routing provide are not yet available with sweep mode. Run ipfs provide --help for alternatives.

For configuration details, see Provide.DHT. For metrics documentation, see Provide metrics.

📊 Exposed DHT metrics

Kubo now exposes DHT metrics from go-libp2p-kad-dht, including total_provide_count_total for sweep provider operations and RPC metrics prefixed with rpc_inbound_ and rpc_outbound_ for DHT message traffic. See Kubo metrics documentation for details.
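A quick way to inspect them, assuming the standard Kubo Prometheus endpoint on the RPC port:

curl -s http://127.0.0.1:5001/debug/metrics/prometheus | grep -E 'total_provide_count_total|rpc_(inbound|outbound)_'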

🚨 Improved gateway error pages with diagnostic tools

Gateway error pages now provide more actionable information during content retrieval failures. When a 504 Gateway Timeout occurs, users see detailed retrieval state information including which phase failed and a sample of providers that were attempted:

[Screenshot: improved gateway error page showing retrieval diagnostics]

  • Gateway.DiagnosticServiceURL (default: https://check.ipfs.network): Configures the diagnostic service URL. When set, 504 errors show a "Check CID retrievability" button that links to this service with ?cid=<failed-cid> for external diagnostics. Set to empty string to disable.
  • Enhanced error details: Timeout errors now display the retrieval phase where failure occurred (e.g., "connecting to providers", "fetching data") and up to 3 peer IDs that were attempted but couldn't deliver the content, making it easier to diagnose network or provider issues.
  • Retry button on all error pages: Every gateway error page now includes a retry button for quick page refresh without manual URL re-entry.
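A minimal sketch of configuring the diagnostic service (the custom URL is illustrative):

ipfs config Gateway.DiagnosticServiceURL "https://check.example.com"   # point the button at your own checker
ipfs config Gateway.DiagnosticServiceURL ""                            # disable the button entirely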

🎨 Updated WebUI

The Web UI has been updated to v4.9 with a new Diagnostics screen for troubleshooting and system monitoring. Access it at http://127.0.0.1:5001/webui when running your local IPFS node.

  • Diagnostics: Logs - debug issues in real time by adjusting the log level without a restart (globally or per-subsystem, e.g. bitswap)
  • Files: Check Retrieval - check whether content is available to other peers directly from the Files screen
  • Diagnostics: Retrieval Results - find out why content won't load, or who is providing it to the network
  • Peers: Agent Versions - know what software peers run
  • Files: Custom Sorting - find files faster with new sorting options

Additional improvements include a close button in the file viewer, better error handling, and fixed navigation highlighting.

📌 Pin name improvements

ipfs pin ls <cid> --names now correctly returns pin names for specific CIDs (#10649, boxo#1035), RPC no longer incorrectly returns names from other pins (#10966), and pin names are now limited to 255 bytes for better cross-platform compatibility (#10981).

🛠️ Identity CID size enforcement and ipfs files write fixes

Identity CID size limits are now enforced

Identity CIDs use multihash 0x00 to embed data directly in the CID without hashing. This experimental optimization was designed for tiny data where a CID reference would be larger than the data itself, but without size limits it was easy to misuse and could turn into an anti-pattern that wastes resources and enables abuse. This release enforces a maximum of 128 bytes for identity CIDs - attempting to exceed this limit will return a clear error message.

  • ipfs add --inline-limit and --hash=identity now enforce the 128-byte maximum (error when exceeded)
  • ipfs files write prevents creation of oversized identity CIDs
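A sketch of where the new cap bites, under the reading that requests above 128 bytes are rejected (file names illustrative):

ipfs add --inline --inline-limit=128 tiny.txt   # ok: within the 128-byte maximum
ipfs add --hash=identity large.bin              # errors when the data exceeds 128 bytes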

Multiple ipfs files write bugs have been fixed

This release resolves several long-standing MFS issues: raw nodes now preserve their codec instead of being forced to dag-pb, append operations on raw nodes work correctly by converting to UnixFS when needed, and identity CIDs properly inherit the full CID prefix from parent directories.

📤 Provide Filestore and Urlstore blocks on write

Improvements to the providing system in the last release (provide blocks according to the configured Strategy) left out Filestore and Urlstore...

Read more

v0.38.0 (02 Oct 01:46, commit 34debcb)

Warning

  • ⚠️ Windows users should update to 0.38.1 due to #11009
  • ⚠️ Pebble users should update to 0.38.1 due to #11011
  • 🟢 macOS and Linux are free to upgrade

Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.38.0 simplifies content announcement configuration, introduces an experimental sweeping DHT provider for efficient large-scale operations, and includes various performance improvements.


Read more

v0.38.0-rc2 (27 Sep 02:49, commit 070177b), pre-release

This release was brought to you by the Shipyard team.

v0.38.0-rc1 (19 Sep 21:17, commit d4b446b), pre-release

This release was brought to you by the Shipyard team.

v0.37.0 (27 Aug 20:03, commit 6898472)

Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.37.0 introduces embedded repository migrations, gateway resource protection, complete AutoConf control, improved reprovider strategies, and anonymous telemetry for better feature prioritization. This release significantly improves memory efficiency, network configuration flexibility, and operational reliability while maintaining full backward compatibility.

🔦 Highlights

🚀 Repository migration from v16 to v17 with embedded tooling

This release migrates the Kubo repository from version 16 to version 17. Migrations are now built directly into the binary - completing in milliseconds without internet access or external downloads.

ipfs daemon --migrate performs migrations automatically. Manual migration: ipfs repo migrate --to=17 (or --to=16 --allow-downgrade for compatibility). Embedded migrations apply to v17+; older versions still require external tools.
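For example:

ipfs repo migrate --to=17                     # upgrade to the new layout
ipfs repo migrate --to=16 --allow-downgrade   # roll back for older Kubo versions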

Legacy migration deprecation: Support for legacy migrations that download binaries from the internet will be removed in a future version. Only embedded migrations for the last 3 releases will be supported. Users with very old repositories should update in stages rather than skipping multiple versions.

🚦 Gateway concurrent request limits and retrieval timeouts

New configurable limits protect gateway resources during high load:

  • Gateway.RetrievalTimeout (default: 30s): Maximum duration for content retrieval. Returns 504 Gateway Timeout when exceeded - applies to both initial retrieval (time to first byte) and between subsequent writes.
  • Gateway.MaxConcurrentRequests (default: 4096): Limits concurrent HTTP requests. Returns 429 Too Many Requests when exceeded. Protects nodes from traffic spikes and resource exhaustion, especially useful behind reverse proxies without rate-limiting.
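A minimal sketch of tuning both limits (values illustrative, not recommendations):

ipfs config Gateway.RetrievalTimeout 10s
ipfs config --json Gateway.MaxConcurrentRequests 8192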

New Prometheus metrics for monitoring:

  • ipfs_http_gw_concurrent_requests: Current requests being processed
  • ipfs_http_gw_responses_total: HTTP responses by status code
  • ipfs_http_gw_retrieval_timeouts_total: Timeouts by status code and truncation status

Tuning tips:

  • Monitor metrics to understand gateway behavior and adjust based on observations
  • Watch ipfs_http_gw_concurrent_requests for saturation
  • Track ipfs_http_gw_retrieval_timeouts_total vs success rates to identify timeout patterns indicating routing or storage provider issues
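One way to keep an eye on these, assuming the standard Kubo metrics endpoint on the RPC port:

watch -n 5 'curl -s http://127.0.0.1:5001/debug/metrics/prometheus | grep ipfs_http_gw_'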

🔧 AutoConf: Complete control over network defaults

Configuration fields now support ["auto"] placeholders that resolve to network defaults from AutoConf.URL. These defaults can be inspected, replaced with custom values, or disabled entirely. Previously, empty configuration fields like Routing.DelegatedRouters: [] would use hardcoded defaults - this system makes those defaults explicit through "auto" values. When upgrading to Kubo 0.37, custom configurations remain unchanged.

New --expand-auto flag shows resolved values for any config field:

ipfs config show --expand-auto                      # View all resolved endpoints
ipfs config Bootstrap --expand-auto                 # Check specific values
ipfs config Routing.DelegatedRouters --expand-auto
ipfs config DNS.Resolvers --expand-auto

Configuration can be managed via:

  • Replace "auto" with custom endpoints or set [] to disable features
  • Switch modes with --profile=autoconf-on|autoconf-off
  • Configure via AutoConf.Enabled and custom manifests via AutoConf.URL

# Enable automatic configuration
ipfs config profiles apply autoconf-on

# Or manually set specific fields
ipfs config Bootstrap '["auto"]'
ipfs config --json DNS.Resolvers '{".": ["https://dns.example.com/dns-query"], "eth.": ["auto"]}'

Organizations can host custom AutoConf manifests for private networks. See the AutoConf documentation and format spec at https://conf.ipfs-mainnet.org/.

🗑️ Clear provide queue when reprovide strategy changes

Changing Reprovider.Strategy and restarting Kubo now automatically clears the provide queue. Only content matching the new strategy will be announced.

Manual queue clearing is also available:

  • ipfs provide clear - clear all queued content announcements

Note

Upgrading to Kubo 0.37 will automatically clear any preexisting provide queue. The next time Reprovider.Interval hits, Reprovider.Strategy will be executed on a clean slate, ensuring consistent behavior with your current configuration.

🪵 Revamped ipfs log level command

The ipfs log level command has been completely revamped to support both getting and setting log levels with a unified interface.

New: Getting log levels

  • ipfs log level - Shows default level only
  • ipfs log level all - Shows log level for every subsystem, including default level
  • ipfs log level foo - Shows log level for a specific subsystem only
  • Kubo RPC API: POST /api/v0/log/level?arg=<subsystem>

Enhanced: Setting log levels

  • ipfs log level foo debug - Sets "foo" subsystem to "debug" level
  • ipfs log level all info - Sets all subsystems to "info" level (convenient, no escaping)
  • ipfs log level '*' info - Equivalent to above but requires shell escaping
  • ipfs log level foo default - Sets "foo" subsystem to current default level

The command now provides full visibility into your current logging configuration while maintaining full backward compatibility. Both all and * work for specifying all subsystems, with all being more convenient since it doesn't require shell escaping.

🧷 Named pins in ipfs add command

Added --pin-name flag to ipfs add for assigning names to pins.

$ ipfs add --pin-name=testname cat.jpg
added bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi cat.jpg

$ ipfs pin ls --names
bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi recursive testname

📝 New IPNS publishing options

Added support for controlling IPNS record publishing strategies with new command flags and configuration.

New command flags:

# Publish without network connectivity (local datastore only)
ipfs name publish --allow-offline /ipfs/QmHash

# Publish without DHT connectivity (uses local datastore and HTTP delegated publishers)
ipfs name publish --allow-delegated /ipfs/QmHash

Delegated publishers configuration:

Ipns.DelegatedPublishers configures HTTP endpoints for IPNS publishing. Supports "auto" for network defaults or custom HTTP endpoints. The --allow-delegated flag enables publishing through these endpoints without requiring DHT connectivity, useful for nodes behind restrictive networks or during testing.
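A minimal sketch of the configuration side (endpoint list illustrative, using the supported "auto" placeholder):

ipfs config --json Ipns.DelegatedPublishers '["auto"]'
ipfs name publish --allow-delegated /ipfs/QmHash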

🔢 Custom sequence numbers in ipfs name publish

Added --sequence flag to ipfs name publish for setting custom sequence numbers in IPNS records. This enables advanced use cases like manually coordinating updates across multiple nodes. See ipfs name publish --help for details.
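For example (the sequence number is illustrative):

ipfs name publish --sequence=42 /ipfs/QmHash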

⚙️ Reprovider.Strategy is now consistently respected

Prior to this version, files added, blocks received, etc. were "provided" to the network (announced on the DHT) regardless of the "reproviding strategy" setting. For example:

  • Strategy set to "pi...
Read more

v0.37.0-rc1 (21 Aug 21:02, commit 255bc88), pre-release

This release was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.37.md
Release status: #10867

v0.36.0 (14 Jul 18:59, commit 37b8411)

Note

This release was brought to you by the Shipyard team.

Overview

🔦 Highlights

HTTP Retrieval Client Now Enabled by Default

This release promotes the HTTP Retrieval client from an experimental feature to a standard feature that is enabled by default. When possible, Kubo will retrieve blocks over plain HTTPS (HTTP/2) without any extra user configuration.

See HTTPRetrieval for more details.
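To opt back out for comparison testing, a sketch (restart the daemon afterwards):

ipfs config --json HTTPRetrieval.Enabled false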

Bitswap Broadcast Reduction

The Bitswap client now supports broadcast reduction logic, which is enabled by default. This feature significantly reduces the number of broadcast messages sent to peers, resulting in lower bandwidth usage during load spikes.

The overall logic works by broadcasting to non-local peers only if those peers have previously replied that they want data blocks. To minimize impact on existing workloads, broadcasts are by default still always sent to peers on the local network and to those defined in Peering.Peers.

At Shipyard, we conducted A/B testing on our internal Kubo staging gateway with organic CID requests to ipfs.io. While these results may not exactly match your specific workload, the benefits proved significant enough to make this feature default. Here are the key findings:

  • Dramatic Resource Usage Reduction: Internal testing demonstrated a reduction in Bitswap broadcast messages by 80-98% and network bandwidth savings of 50-95%, with the greatest improvements occurring during high traffic and peer spikes. These efficiency gains lower operational costs of running Kubo under high load and improve the IPFS Mainnet (which is >80% Kubo-based) by reducing ambient traffic for all connected peers.
  • Improved Memory Stability: Memory stays stable even during major CID request spikes that increase peer count, preventing the out-of-memory (OOM) issues found in earlier Kubo versions.
  • Data Retrieval Performance Remains Strong: Our tests suggest that Kubo gateway hosts with broadcast reduction enabled achieve similar or better HTTP 200 success rates compared to version 0.35, while maintaining equivalent or higher want-have responses and unique blocks received.

For more information about our A/B tests, see kubo#10825.

To revert to the previous behavior for your own A/B testing, set Internal.Bitswap.BroadcastControl.Enable to false and monitor relevant metrics (ipfs_bitswap_bcast_skips_total, ipfs_bitswap_haves_received, ipfs_bitswap_unique_blocks_received, ipfs_bitswap_wanthaves_broadcast, HTTP 200 success rate).
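For example:

ipfs config --json Internal.Bitswap.BroadcastControl.Enable false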

For a description of the configuration items, see the documentation of Internal.Bitswap.BroadcastControl.

Update go-log to v2

go-log v2 has been out for quite a while now and it's time to deprecate v1.

  • Replace all use of go-log with go-log/v2
  • Makes /api/v0/log/tail useful over HTTP
  • Fixes ipfs log tail
  • Removes support for ContextWithLoggable as this is not needed for tracing-like functionality

Kubo now uses AutoNATv2 as a client

This Kubo release starts utilizing AutoNATv2 client functionality. go-libp2p v0.42 supports and depends on both AutoNATv1 and v2, and the AutoRelay feature continues to use v1. go-libp2p v0.43+ will discontinue internal use of AutoNATv1. We will maintain support for both v1 and v2 until then, though v1 will gradually be deprecated and ultimately removed.

Smarter AutoTLS registration

This update to libp2p and AutoTLS incorporates AutoNATv2 changes. It aims to reduce false-positive scenarios where AutoTLS certificate registration occurred before a publicly dialable multiaddr was available. This should result in fewer error logs during node start, especially when IPv6 and/or IPv4 NATs with UPnP/PCP/NAT-PMP are at play.

Overwrite option for files cp command

The ipfs files cp command has a --force option to allow it to overwrite existing files. Attempting to overwrite an existing directory results in an error.
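A minimal sketch (MFS paths illustrative):

ipfs files cp --force /notes/draft.txt /notes/final.txt   # overwrites the existing file at /notes/final.txt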

Gateway now supports negative HTTP Range requests

The latest update to boxo/gateway adds support for negative HTTP Range requests, achieving [email protected] compatibility.
This provides greater interoperability with generic HTTP-based tools. For example, WebRecorder's https://replayweb.page/ can now directly load website snapshots from Kubo-backed URLs.
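For example, fetching only the last kilobyte of a file from the local gateway (CID placeholder illustrative):

curl -sH "Range: bytes=-1024" "http://127.0.0.1:8080/ipfs/<CID>" -o tail.bin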

Option for filestore command to remove bad blocks

The experimental filestore command has a new option, --remove-bad-blocks, to verify objects in the filestore and remove those that fail verification.
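A sketch, assuming the new flag attaches to the existing verify subcommand:

ipfs filestore verify --remove-bad-blocks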

ConnMgr.SilencePeriod configuration setting exposed

This connection manager option controls how often connections are swept and potentially terminated. See the ConnMgr documentation.
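A minimal sketch, assuming the setting lives under the usual Swarm.ConnMgr section (value illustrative):

ipfs config Swarm.ConnMgr.SilencePeriod 30s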

Fix handling of EDITOR env var

The ipfs config edit command did not correctly handle the EDITOR environment variable when its value contains flags and arguments, e.g. EDITOR="emacs -nw". The command was treating the entire value of $EDITOR as the name of the editor executable. This has been fixed to parse the value of $EDITOR into separate arguments, respecting shell quoting.
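For example, the value from the report now works as expected:

EDITOR="emacs -nw" ipfs config edit   # opens the config in emacs with the -nw flag honored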

📦️ Important dependency updates

  • update go-libp2p to v0.42.0
  • update go-libp2p-kad-dht to v0.33.0
  • update boxo to v0.33.0 (incl. v0.32.0)
  • update gateway-conformance to v0.8
  • update p2p-forge/client to v0.6.0
  • update github.com/cockroachdb/pebble/v2 to v2.0.6 for Go 1.25 support

📝 Changelog

Full Changelog
Read more

v0.36.0-rc2 (08 Jul 23:25, commit ee9a76f), pre-release

This release was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.36.md
Release status: #10816

v0.36.0-rc1 (18 Jun 20:22, commit 127da7c), pre-release

This release was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.36.md
Release status: #10816

v0.35.0 (21 May 18:00, commit a78d155)

Note

This release was brought to you by the Shipyard team.

Overview

This release brings significant UX and performance improvements to data onboarding, provisioning, and retrieval systems.

New configuration options let you customize the shape of UnixFS DAGs generated during the data import, control the scope of DAGs announced on the Amino DHT, select which delegated routing endpoints are queried, and choose whether to enable HTTP retrieval alongside Bitswap over Libp2p.

Continue reading for more details.

🗣 Discuss

If you have comments, questions, or feedback on this release, please post here.

If you experienced any bugs with the release, please post an issue.

🔦 Highlights

Opt-in HTTP Retrieval client

This release adds experimental support for retrieving blocks directly over HTTPS (HTTP/2), complementing the existing Bitswap over Libp2p.

The opt-in client enables Kubo to use delegated routing results with /tls/http multiaddrs, connecting to HTTPS servers that support Trustless HTTP Gateway's Block Responses (?format=raw, application/vnd.ipld.raw). Fetching blocks via HTTPS (HTTP/2) simplifies infrastructure and reduces costs for storage providers by leveraging HTTP caching and CDNs.

To enable this feature for testing and feedback, set:

$ ipfs config --json HTTPRetrieval.Enabled true

See HTTPRetrieval for more details.

Dedicated Reprovider.Strategy for MFS

The Mutable File System (MFS) in Kubo is a UnixFS filesystem managed with ipfs files commands. It supports familiar file operations like cp and mv within a folder-tree structure, automatically updating a MerkleDAG and a "root CID" that reflects the current MFS state. Files in MFS are protected from garbage collection, offering a simpler alternative to ipfs pin. This makes it a popular choice for tools like IPFS Desktop and the WebUI.

Previously, the pinned reprovider strategy required manual pin management: each dataset update meant pinning the new version and unpinning the old one. Now, new strategies—mfs and pinned+mfs—let users limit announcements to data explicitly placed in MFS. This simplifies updating datasets and announcing only the latest version to the Amino DHT.

Users relying on the pinned strategy can switch to pinned+mfs and use MFS alone to manage updates and announcements, eliminating the need for manual pinning and unpinning. We hope this makes it easier to publish just the data that matters to you.
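For example:

ipfs config Reprovider.Strategy pinned+mfs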

See Reprovider.Strategy for more details.

Experimental support for MFS as a FUSE mount point

The MFS root (filesystem behind the ipfs files API) is now available as a read/write FUSE mount point at Mounts.MFS. This filesystem is mounted in the same way as Mounts.IPFS and Mounts.IPNS when running ipfs mount or ipfs daemon --mount.

Note that the operations supported by the MFS FUSE mountpoint are limited, since MFS doesn't store file attributes.
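A minimal sketch (assumes FUSE is installed and the Mounts.* paths are configured):

ipfs daemon --mount   # mounts Mounts.IPFS, Mounts.IPNS, and now Mounts.MFS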

See Mounts and docs/fuse.md for more details.

Grid view in WebUI

The WebUI, accessible at http://127.0.0.1:5001/webui/, now includes support for the grid view on the Files screen:

[Screenshot: grid view on the Files screen]

Enhanced DAG-Shaping Controls

This release advances CIDv1 support by introducing fine-grained control over UnixFS DAG shaping during data ingestion with the ipfs add command.

Wider DAG trees (more links per node, higher fanout, larger thresholds) are beneficial for large files and directories with many files, reducing tree depth and lookup latency in high-latency networks, but they increase node size, straining memory and CPU on resource-constrained devices. Narrower trees (lower link count, lower fanout, smaller thresholds) are preferable for smaller directories, frequent updates, or low-power clients, minimizing overhead and ensuring compatibility, though they may increase traversal steps for very large datasets.

Kubo now allows users to act on these tradeoffs and customize the width of the DAG created by ipfs add command.

New DAG-Shaping ipfs add Options

Three new options allow you to override default settings for specific import operations:

  • --max-file-links: Sets the maximum number of child links for a single file chunk.
  • --max-directory-links: Defines the maximum number of child entries in a "basic" (single-chunk) directory.
    • Note: Directories exceeding this limit or the Import.UnixFSHAMTDirectorySizeThreshold are converted to HAMT-based (sharded across multiple blocks) structures.
  • --max-hamt-fanout: Specifies the maximum number of child nodes for HAMT internal structures.
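A sketch combining the new flags on a directory import (values mirror the test-cid-v1-wide profile described below):

ipfs add -r --max-file-links=1024 --max-hamt-fanout=1024 ./dataset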

Persistent DAG-Shaping Import.* Configuration

You can set persistent defaults for these options via the corresponding Import.* configuration settings: Import.UnixFSFileMaxLinks, Import.UnixFSDirectoryMaxLinks, Import.UnixFSHAMTDirectoryMaxFanout, and Import.UnixFSHAMTDirectorySizeThreshold.
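A sketch of persisting wider defaults (values illustrative):

ipfs config --json Import.UnixFSFileMaxLinks 1024
ipfs config Import.UnixFSHAMTDirectorySizeThreshold 1MiB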

Updated DAG-Shaping Import Profiles

The release updated configuration profiles to incorporate these new Import.* settings:

  • Updated Profile: test-cid-v1 now includes current defaults as explicit Import.UnixFSFileMaxLinks=174, Import.UnixFSDirectoryMaxLinks=0, Import.UnixFSHAMTDirectoryMaxFanout=256 and Import.UnixFSHAMTDirectorySizeThreshold=256KiB
  • New Profile: test-cid-v1-wide adopts experimental directory DAG-shaping defaults, increasing the maximum file DAG width from 174 to 1024, HAMT fanout from 256 to 1024, and raising the HAMT directory sharding threshold from 256KiB to 1MiB, aligning with 1MiB file chunks.

Tip

Apply one of the CIDv1 test profiles with ipfs config profile apply test-cid-v1[-wide].

Datastore Metrics Now Opt-In

To reduce overhead in the default configuration, datastore metrics are no longer enabled by default when initializing a Kubo repository with ipfs init.
Metrics prefixed with <dsname>_datastore (e.g., flatfs_datastore_..., leveldb_datastore_...) are not exposed unless explicitly enabled. For a complete list of affected default metrics, refer to prometheus_metrics_added_by_measure_profile.

Convenience opt-in profiles can be enabled at initi...

Read more