## Description
<!-- Provide a concise and descriptive summary of the changes
implemented in this PR. -->
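Updates the running-llms guide to use the renamed `useLLM` fields: `isModelReady` → `isReady` and `isModelGenerating` → `isGenerating`.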
### Type of change
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] Documentation update (improves or adds clarity to existing
documentation)
### Tested on
- [ ] iOS
- [x] Android
### Testing instructions
<!-- Provide step-by-step instructions on how to test your changes.
Include setup details if necessary. -->
### Screenshots
<!-- Add screenshots here, if applicable -->
### Related issues
<!-- Link related issues here using #issue-number -->
### Checklist
- [x] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings
### Additional notes
<!-- Include any additional information, assumptions, or context that
reviewers might need to understand this PR. -->
---------
Co-authored-by: Mateusz Kopcinski <[email protected]>
Co-authored-by: Mateusz Kopciński <[email protected]>
**`docs/docs/guides/running-llms.md`** (+3 −3)

````diff
@@ -23,7 +23,7 @@ const llama = useLLM({
 });
 ```
 
-The code snippet above fetches the model from the specified URL, loads it into memory, and returns an object with various methods and properties for controlling the model. You can monitor the loading progress by checking the `llama.downloadProgress` and `llama.isModelReady` property, and if anything goes wrong, the `llama.error` property will contain the error message.
+The code snippet above fetches the model from the specified URL, loads it into memory, and returns an object with various methods and properties for controlling the model. You can monitor the loading progress by checking the `llama.downloadProgress` and `llama.isReady` property, and if anything goes wrong, the `llama.error` property will contain the error message.
 
 :::danger[Danger]
 Lower-end devices might not be able to fit LLMs into memory. We recommend using quantized models to reduce the memory footprint.
@@ -50,9 +50,9 @@ Given computational constraints, our architecture is designed to support only on
 |`generate`|`(input: string) => Promise<void>`| Function to start generating a response with the given input string. |
 |`response`|`string`| State of the generated response. This field is updated with each token generated by the model |
 |`error`| <code>string | null</code> | Contains the error message if the model failed to load |
-|`isModelGenerating`|`boolean`| Indicates whether the model is currently generating a response |
+|`isGenerating`|`boolean`| Indicates whether the model is currently generating a response |
 |`interrupt`|`() => void`| Function to interrupt the current inference |
-|`isModelReady`|`boolean`| Indicates whether the model is ready |
+|`isReady`|`boolean`| Indicates whether the model is ready |
 |`downloadProgress`|`number`| Represents the download progress as a value between 0 and 1, indicating the extent of the model file retrieval. |
````
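For reviewers, a minimal sketch of how the renamed fields fit together after this change. The field names (`isReady`, `isGenerating`, `generate`, `response`, `interrupt`, `error`, `downloadProgress`) come from the updated table above; the import path and the `modelSource` option are assumptions, since this hunk only shows the closing of the `useLLM({ ... })` call, not its parameters.

```tsx
// Sketch only: field names match the updated docs table; the import path
// and hook options below are assumptions, not part of this diff.
import React from 'react';
import { Button, Text, View } from 'react-native';
import { useLLM } from 'react-native-executorch'; // assumed import path

export function LlamaScreen() {
  const llama = useLLM({
    modelSource: 'https://example.com/llama3_2.pte', // hypothetical model URL
  });

  // `isReady` (renamed from `isModelReady`) gates the UI until the model is
  // downloaded and loaded; `downloadProgress` is a value between 0 and 1.
  if (!llama.isReady) {
    return (
      <Text>Downloading model… {Math.round(llama.downloadProgress * 100)}%</Text>
    );
  }

  return (
    <View>
      {/* `generate` returns Promise<void>; tokens stream into `response`. */}
      <Button
        title="Ask"
        disabled={llama.isGenerating} // renamed from `isModelGenerating`
        onPress={() => llama.generate('What is ExecuTorch?')}
      />
      <Text>{llama.response}</Text>
      {llama.isGenerating && <Button title="Stop" onPress={llama.interrupt} />}
      {llama.error && <Text>Error: {llama.error}</Text>}
    </View>
  );
}
```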