Commit ad4a8b8 (parent 9cb19d9)

Readme fixes for grammar/wording and expansions with 4. Use the Results section

1 file changed: README.md (+28, -20 lines)
Scale Efficiently: Evaluate and Optimize Your LLM Deployments for Real-World Inference

**GuideLLM** is a powerful tool for evaluating and optimizing the deployment of large language models (LLMs). By simulating real-world inference workloads, GuideLLM helps users gauge the performance, resource needs, and cost implications of deploying LLMs on various hardware configurations. This approach ensures efficient, scalable, and cost-effective LLM inference serving while maintaining high service quality.

### Key Features

Before installing, ensure you have the following prerequisites:

- OS: Linux or macOS
- Python: 3.8 – 3.12

GuideLLM is available on PyPI and can be installed with `pip`:

```bash
pip install guidellm
```

For detailed installation instructions and requirements, see the [Installation Guide](https://github.com/neuralmagic/guidellm/tree/main/docs/install.md).

#### 1. Start an OpenAI Compatible Server (vLLM)

GuideLLM requires an OpenAI-compatible server to run evaluations. [vLLM](https://github.com/vllm-project/vllm) is recommended for this purpose. To start a vLLM server with a Llama 3.1 8B quantized model, run the following command:

```bash
vllm serve "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16"
```

```bash
guidellm \
  --data "prompt_tokens=512,generated_tokens=128"
```
7272

73-
The above command will begin the evaluation and output progress updates similar to the following (if running on a different server, be sure to update the target!): <img src="https://raw.githubusercontent.com/neuralmagic/guidellm/main/docs/assets/sample-benchmarks.gif" />
73+
The above command will begin the evaluation and output progress updates similar to the following (if running on a different server, be sure to update the target!): <img src= "https://raw.githubusercontent.com/neuralmagic/guidellm/main/docs/assets/sample-benchmarks.gif"/>

Notes:

- The `--target` flag specifies the server hosting the model. In this case, it is a local vLLM server.
- The `--model` flag specifies the model to evaluate. The model name should match the name of the model deployed on the server.
- By default, GuideLLM will run a `sweep` of performance evaluations across different request rates, each lasting 120 seconds, with the results printed to the terminal.
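As a hedged illustration of the notes above, the same evaluation can point at a remote deployment by changing the `--target` flag. The host, port, and `/v1` path below are placeholders assuming a typical vLLM OpenAI-compatible setup, not values from this README:

```bash
# Hypothetical example: replace the host and port with your own deployment;
# the model name must match the model actually served there.
guidellm \
  --target "http://your-server:8000/v1" \
  --model "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16" \
  --data "prompt_tokens=512,generated_tokens=128"
```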

#### 3. Analyze the Results

After the evaluation completes, GuideLLM will summarize the results, including various performance metrics.

The output results will start with a summary of the evaluation, followed by the request data for each benchmark run. For example, the start of the output will look like the following:

The end of the output will include important performance summary metrics such as:

<img alt="Sample GuideLLM benchmark end output" src="https://github.com/neuralmagic/guidellm/blob/main/docs/assets/sample-output-end.png" />

#### 4. Use the Results

The results from GuideLLM can be used to optimize your LLM deployment for performance, resource efficiency, and cost. By analyzing the performance metrics, you can identify bottlenecks, determine the optimal request rate, and select the most cost-effective hardware configuration for your deployment.

For example, if we deploy a latency-sensitive chat application, we likely want to optimize for low time to first token (TTFT) and inter-token latency (ITL). Reasonable thresholds depend on the application's requirements, but we may want to ensure TTFT stays under 200 ms and ITL under 50 ms (20 tokens per second). From the example results above, we can see that the model meets these requirements on average at a request rate of 2.37 requests per second per server. To target a higher percentage of requests meeting these thresholds, use the **Performance Stats by Benchmark** section to determine the rate at which 90% or 95% of requests comply.

If we deploy a throughput-sensitive summarization application, we likely want to optimize for the maximum number of requests the server can handle per second. In this case, the throughput benchmark shows that the server maxes out at 4.06 requests per second. If we need to handle more requests, we would need to add more servers or upgrade the hardware configuration.
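To make the sizing arithmetic above concrete, here is a minimal sketch. The thresholds and per-server rate come from the example numbers in this section; the function names are illustrative, not a GuideLLM API:

```python
# Back-of-the-envelope sizing using the example numbers above (50 ms ITL
# budget, 4.06 req/s max per server). Helper names are ours, not GuideLLM's.
import math


def tokens_per_second(itl_ms):
    """Convert an inter-token latency budget in milliseconds into a token rate."""
    return 1000.0 / itl_ms


def servers_needed(target_rps, max_rps_per_server):
    """Identical servers required to sustain a target aggregate request rate."""
    return math.ceil(target_rps / max_rps_per_server)


print(tokens_per_second(50))     # 50 ms ITL -> 20.0 tokens/s
print(servers_needed(10, 4.06))  # 10 req/s total at 4.06 req/s/server -> 3
```

Swap in the rates measured in your own benchmark output to size a real deployment.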

### Configurations

GuideLLM provides various CLI and environment options to customize evaluations, including setting the duration of each benchmark run, the number of concurrent requests, and the request rate.


Some typical configurations for the CLI include:

- `--rate-type`: The rate to use for benchmarking. Options include `sweep`, `synchronous`, `throughput`, `constant`, and `poisson`.
  - `--rate-type sweep`: (default) Sweep runs through the full range of the server's performance, starting with a `synchronous` rate, then `throughput`, and finally 10 `constant` rates between the min and max request rate found.
  - `--rate-type synchronous`: Synchronous runs requests one after the other, waiting for each to finish before sending the next.
  - `--rate-type throughput`: Throughput sends requests as fast as possible to measure the maximum load the server can handle.
  - `--rate-type constant`: Constant runs requests at a fixed rate. Specify the request rate per second with the `--rate` argument. For example, `--rate 10`, or multiple rates with `--rate 10 --rate 20 --rate 30`.
  - `--rate-type poisson`: Poisson draws from a Poisson distribution with the mean at the specified rate, adding some real-world variance to the runs. Specify the request rate per second with the `--rate` argument. For example, `--rate 10`, or multiple rates with `--rate 10 --rate 20 --rate 30`.
- `--data-type`: The data to use for the benchmark. Options include `emulated`, `transformers`, and `file`.
  - `--data-type emulated`: Emulated supports an EmulationConfig in string or file format for the `--data` argument to generate fake data. Specify the number of prompt tokens at a minimum, and optionally the number of output tokens and other parameters for variance in the length. For example, `--data "prompt_tokens=128"`, `--data "prompt_tokens=128,generated_tokens=128"`, or `--data "prompt_tokens=128,prompt_tokens_variance=10"`.
  - `--data-type file`: File supports a file path or URL to a file for the `--data` argument. The file should contain data encoded as CSV, JSONL, TXT, JSON, or YAML, with one prompt per line for CSV, JSONL, and TXT, or a list of prompts for JSON/YAML. For example, `--data "data.txt"` where the contents of data.txt are `"prompt1\nprompt2\nprompt3"`.
  - `--data-type transformers`: Transformers supports a dataset name or file path for the `--data` argument. For example, `--data "neuralmagic/LLM_compression_calibration"`.
- `--max-seconds`: The maximum number of seconds to run each benchmark. The default is 120 seconds.
- `--max-requests`: The maximum number of requests to run in each benchmark.

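The `constant` and `poisson` rate types above differ in how gaps between requests are spaced. A minimal sketch of the two arrival processes, for intuition only (this is not GuideLLM's internal scheduler):

```python
# Constant vs. Poisson request schedules, expressed as inter-request gaps.
# Illustrative only; GuideLLM's actual scheduler may differ.
import random


def constant_intervals(rate, n):
    """Fixed gap between requests: `rate` req/s -> 1/rate seconds apart."""
    return [1.0 / rate] * n


def poisson_intervals(rate, n, seed=0):
    """Exponentially distributed gaps yield a Poisson arrival process with
    the same mean rate, adding real-world variance between requests."""
    rng = random.Random(seed)
    return [rng.expovariate(rate) for _ in range(n)]


print(constant_intervals(10, 3))  # -> [0.1, 0.1, 0.1]
gaps = poisson_intervals(10, 10_000)
print(round(sum(gaps) / len(gaps), 2))  # mean gap is close to 1/10 s
```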
For a complete list of supported CLI arguments, run the following command:

```bash
guidellm --help
```

For a full list of configuration options, run the following command:

```bash
guidellm-config
```

See the [GuideLLM Documentation](#Documentation) for further information.

## Resources

Our comprehensive documentation provides detailed guides and resources to help you.

### Core Docs


- [**Installation Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/install.md) - Step-by-step instructions for installing GuideLLM, including prerequisites and setup tips.
- [**Architecture Overview**](https://github.com/neuralmagic/guidellm/tree/main/docs/architecture.md) - A detailed look at GuideLLM's design, components, and how they interact.
- [**CLI Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/guides/cli.md) - Comprehensive usage information for running GuideLLM via the command line, including available commands and options.
- [**Configuration Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/guides/configuration.md) - Instructions on configuring GuideLLM to suit various deployment needs and performance goals.
142150

143151
### Releases
144152

Visit our [GitHub Releases page](https://github.com/neuralmagic/guidellm/releases) and review the release notes to stay updated with the latest releases.
146154

147155
### License
148156

@@ -160,12 +168,12 @@ We appreciate contributions to the code, examples, integrations, documentation,
### Join

We invite you to join our growing community of developers, researchers, and enthusiasts passionate about LLMs and optimization. Whether you're looking for help, want to share your own experiences, or stay up to date with the latest developments, there are plenty of ways to get involved:

- [**Neural Magic Community Slack**](https://neuralmagic.com/community/) - Join our Slack channel to connect with other GuideLLM users and developers. Ask questions, share your work, and get real-time support.
- [**GitHub Issues**](https://github.com/neuralmagic/guidellm/issues) - Report bugs, request features, or browse existing issues. Your feedback helps us improve GuideLLM.
- [**Subscribe to Updates**](https://neuralmagic.com/subscribe/) - Sign up for the latest news, announcements, and updates about GuideLLM, webinars, events, and more.
- [**Contact Us**](http://neuralmagic.com/contact/) - Use our contact form for general questions about Neural Magic or GuideLLM.
169177

170178
### Cite
171179
