
Feature Request: Provide progress callback for loading model from cache #710

@kentcdodds

Description

Currently, initProgressCallback only reports progress when the model is being downloaded from the network. However, when the model is already cached in the browser (via CacheStorage), the user receives no feedback, even though loading and initializing the model from cache can still take multiple seconds.

I’d like to request a new callback, e.g., cacheLoadProgressCallback, that reports progress as each cached shard is read and processed.

This would allow developers to show meaningful progress indicators during both cold and warm starts.
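
For context, here is roughly how I imagine consuming this from an app, alongside today's initProgressCallback. Everything about the new option (its name, its report shape, and the updateProgressBar helper) is hypothetical; the model ID is just an example:

```ts
import { CreateMLCEngine, InitProgressReport } from "@mlc-ai/web-llm";

// Placeholder UI helper, assumed to exist in the app.
declare function updateProgressBar(fraction: number, label: string): void;

const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
  // Existing: fires while shards are downloaded from the network.
  initProgressCallback: (report: InitProgressReport) => {
    updateProgressBar(report.progress, report.text);
  },
  // Proposed (hypothetical): fires while cached shards are read and processed,
  // so warm starts also get a progress indicator.
  cacheLoadProgressCallback: (report: { loadedBytes: number; totalBytes: number }) => {
    updateProgressBar(report.loadedBytes / report.totalBytes, "Loading model from cache…");
  },
} as any); // cast only because cacheLoadProgressCallback does not exist yet
```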

Possible implementation ideas:

  • Trigger callback as each shard is read from CacheStorage and passed into arrayBuffer()
  • Report total bytes loaded from cache vs. the total expected (see the sketch after this list)
  • Or simply provide a boolean flag indicating whether loading is from cache or network

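To make the second idea concrete, here is a rough sketch of what the loader side could look like, assuming the shard URLs and the total expected byte count are already known from the model manifest. None of these names are existing WebLLM internals; they are only meant to illustrate the proposal:

```ts
// Hypothetical report shape and callback type for the proposed feature.
interface CacheLoadProgressReport {
  loadedBytes: number; // bytes read from CacheStorage so far
  totalBytes: number;  // total expected size of all cached shards
  shardIndex: number;  // index of the shard that was just read
  fromCache: true;     // distinguishes warm starts from network downloads
}

type CacheLoadProgressCallback = (report: CacheLoadProgressReport) => void;

async function loadShardsFromCache(
  cacheName: string,
  shardUrls: string[],
  totalBytes: number,
  onProgress?: CacheLoadProgressCallback,
): Promise<ArrayBuffer[]> {
  const cache = await caches.open(cacheName);
  const buffers: ArrayBuffer[] = [];
  let loadedBytes = 0;

  for (let i = 0; i < shardUrls.length; i++) {
    const response = await cache.match(shardUrls[i]);
    if (!response) {
      throw new Error(`Shard missing from cache: ${shardUrls[i]}`);
    }
    // Reading the body into memory is where most of the warm-start time goes,
    // so report after each shard rather than once at the end.
    const buffer = await response.arrayBuffer();
    buffers.push(buffer);
    loadedBytes += buffer.byteLength;
    onProgress?.({ loadedBytes, totalBytes, shardIndex: i, fromCache: true });
  }
  return buffers;
}
```
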
Thanks for the awesome work on WebLLM!
