From c9484e81263cc69c4d9acd8a5ba15a42758df0c4 Mon Sep 17 00:00:00 2001
From: myry96 <137733379+myry96@users.noreply.github.com>
Date: Sun, 25 Jun 2023 17:49:45 -0700
Subject: [PATCH] [Doc] Fix typo in gpt_guide.md

---
 docs/gpt_guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/gpt_guide.md b/docs/gpt_guide.md
index 4a10c1d46..e3563a3e6 100644
--- a/docs/gpt_guide.md
+++ b/docs/gpt_guide.md
@@ -116,7 +116,7 @@ In summary, the workflow to run the GPT model is:
 
 1. Initializing the NCCL comm and setting ranks of tensor parallel and pipeline parallel by MPI or threading
 2. Load weights by the ranks of tensor parallel, pipeline parallel and other model hyper-parameters.
-3. Create the instance of `ParalelGpt` by the ranks of tensor parallel, pipeline parallel and other model hyper-parameters.
+3. Create the instance of `ParallelGpt` by the ranks of tensor parallel, pipeline parallel and other model hyper-parameters.
 4. Receive the request from client and convert the request to the format of input tensors for ParallelGpt.
 5. Run forward
 6. Convert the output tensors of ParallelGpt to response of client and return the response.