
Commit 1b3ef07

Add Tiktokenizer link in "How to count tokens" (#604)
This adds a link to the Tiktokenizer webapp as another tool, in addition to the OpenAI Tokenizer.
1 parent a4913d3

File tree: 1 file changed (1 addition, 1 deletion)


examples/How_to_count_tokens_with_tiktoken.ipynb (+1, -1)
@@ -54,7 +54,7 @@
     "\n",
     "## How strings are typically tokenized\n",
     "\n",
-    "In English, tokens commonly range in length from one character to one word (e.g., `\"t\"` or `\" great\"`), though in some languages tokens can be shorter than one character or longer than one word. Spaces are usually grouped with the starts of words (e.g., `\" is\"` instead of `\"is \"` or `\" \"`+`\"is\"`). You can quickly check how a string is tokenized at the [OpenAI Tokenizer](https://beta.openai.com/tokenizer)."
+    "In English, tokens commonly range in length from one character to one word (e.g., `\"t\"` or `\" great\"`), though in some languages tokens can be shorter than one character or longer than one word. Spaces are usually grouped with the starts of words (e.g., `\" is\"` instead of `\"is \"` or `\" \"`+`\"is\"`). You can quickly check how a string is tokenized at the [OpenAI Tokenizer](https://beta.openai.com/tokenizer), or the third-party [Tiktokenizer](https://tiktokenizer.vercel.app/) webapp."
     ]
    },
    {
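
For quick reference, the same check the webapps perform can also be done locally with the tiktoken library this notebook covers. A minimal sketch, assuming the cl100k_base encoding (the example string and the printed outputs are illustrative):

```python
import tiktoken  # pip install tiktoken

# Inspect how a string splits into tokens, mirroring what the
# OpenAI Tokenizer and Tiktokenizer webapps display.
# "cl100k_base" is an assumption; choose the encoding for your model.
encoding = tiktoken.get_encoding("cl100k_base")

text = "tiktoken is great!"
token_ids = encoding.encode(text)

# Decode each token id individually to see the token boundaries,
# e.g. how " is" keeps its leading space.
tokens = [encoding.decode_single_token_bytes(t) for t in token_ids]

print(token_ids)  # e.g. [83, 1609, 5963, 374, 2294, 0]
print(tokens)     # e.g. [b't', b'ik', b'token', b' is', b' great', b'!']
```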
