
Commit 7d391c8

Authored by mkopcins (Mateusz Kopciński) and Mateusz Kopciński

fix typo tokenizer -> tokenizerSource (#89)
## Description

Typo in docs

### Type of change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Documentation update (improves or adds clarity to existing documentation)

### Tested on

- [ ] iOS
- [ ] Android

### Checklist

- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [ ] My changes generate no new warnings

Co-authored-by: Mateusz Kopciński <[email protected]>
1 parent 1981c8e commit 7d391c8

File tree

2 files changed: +4 −4 lines

docs/docs/fundamentals/loading-models.md (+1 −1)

````diff
@@ -40,6 +40,6 @@ import { useLLM } from 'react-native-executorch';
 
 const llama = useLLM({
   modelSource: 'https://.../llama3_2.pte',
-  tokenizer: require('../assets/tokenizer.bin'),
+  tokenizerSource: require('../assets/tokenizer.bin'),
 });
 ```
````

docs/docs/llms/running-llms.md (+3 −3)

````diff
@@ -18,7 +18,7 @@ import { useLLM, LLAMA3_2_1B } from 'react-native-executorch';
 
 const llama = useLLM({
   modelSource: LLAMA3_2_1B,
-  tokenizer: require('../assets/tokenizer.bin'),
+  tokenizerSource: require('../assets/tokenizer.bin'),
   contextWindowLength: 3,
 });
 ```
@@ -37,7 +37,7 @@ Given computational constraints, our architecture is designed to support only on
 
 **`modelSource`** - A string that specifies the location of the model binary. For more information, take a look at [loading models](../fundamentals/loading-models.md) section.
 
-**`tokenizer`** - URL to the binary file which contains the tokenizer
+**`tokenizerSource`** - URL to the binary file which contains the tokenizer
 
 **`contextWindowLength`** - The number of messages from the current conversation that the model will use to generate a response. The higher the number, the more context the model will have. Keep in mind that using larger context windows will result in longer inference time and higher memory usage.
 
@@ -62,7 +62,7 @@ In order to send a message to the model, one can use the following code:
 ```typescript
 const llama = useLLM(
   modelSource: LLAMA3_2_1B,
-  tokenizer: require('../assets/tokenizer.bin'),
+  tokenizerSource: require('../assets/tokenizer.bin'),
 );
 
 ...
````
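The commit only renames a field in the docs, but the corrected shape of the `useLLM` options can be sketched in plain TypeScript. The interface below is an illustration inferred from the diff, not the library's actual type definitions: only the field names `modelSource`, `tokenizerSource`, and `contextWindowLength` come from the changed docs, and the `string | number` typing is an assumption (in a React Native app, `require('../assets/...')` resolves to a numeric asset id, while remote models are given as URL strings).

```typescript
// Hypothetical config shape inferred from the docs touched by this commit.
// The field names come from the diff; the interface itself is NOT the
// library's real type, just a sketch of the corrected API surface.
interface UseLLMConfig {
  // URL string, or a numeric asset id from require(...) in React Native.
  modelSource: string | number;
  tokenizerSource: string | number; // previously misnamed `tokenizer` in the docs
  contextWindowLength?: number;
}

// Using the corrected field name from this commit:
const config: UseLLMConfig = {
  modelSource: 'https://.../llama3_2.pte',
  tokenizerSource: '../assets/tokenizer.bin',
  contextWindowLength: 3,
};

console.log(Object.keys(config).join(','));
// → modelSource,tokenizerSource,contextWindowLength
```

With the rename applied everywhere, a config still using the old `tokenizer` key would now fail the type check, which is exactly what the docs fix prevents readers from writing.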

0 comments