README.md: 28 additions & 20 deletions
@@ -20,7 +20,7 @@ Scale Efficiently: Evaluate and Optimize Your LLM Deployments for Real-World Inf
</picture>
</p>
- **GuideLLM** is a powerful tool for evaluating and optimizing the deployment of large language models (LLMs). By simulating real-world inference workloads, GuideLLM helps users gauge the performance, resource needs, and cost implications of deploying LLMs on various hardware configurations. This ensures efficient, scalable, and cost-effective LLM inference serving while maintaining high service quality.
+ **GuideLLM** is a powerful tool for evaluating and optimizing the deployment of large language models (LLMs). By simulating real-world inference workloads, GuideLLM helps users gauge the performance, resource needs, and cost implications of deploying LLMs on various hardware configurations. This approach ensures efficient, scalable, and cost-effective LLM inference serving while maintaining high service quality.

### Key Features

@@ -38,7 +38,7 @@ Before installing, ensure you have the following prerequisites:
- OS: Linux or MacOS
- Python: 3.8 – 3.12
- GuideLLM is available on PyPI and can be installed using `pip`:
+ GuideLLM is available on PyPI and is installed using `pip`:

```bash
pip install guidellm
```
@@ -50,7 +50,7 @@ For detailed installation instructions and requirements, see the [Installation G

#### 1. Start an OpenAI Compatible Server (vLLM)

- GuideLLM requires an OpenAI-compatible server to run evaluations. It's recommended that [vLLM](https://github.com/vllm-project/vllm) be used for this purpose. To start a vLLM server with a Llama 3.1 8B quantized model, run the following command:
+ GuideLLM requires an OpenAI-compatible server to run evaluations. [vLLM](https://github.com/vllm-project/vllm) is recommended for this purpose. To start a vLLM server with a Llama 3.1 8B quantized model, run the following command:
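The quick-start command itself falls outside this diff hunk; as a rough sketch only, starting such a server with vLLM's OpenAI-compatible endpoint could look like the following (the model identifier is an illustrative quantized Llama 3.1 8B checkpoint, swap in the model you intend to benchmark):

```bash
# Sketch only: launch vLLM's OpenAI-compatible server (defaults to
# http://localhost:8000/v1) with an example quantized Llama 3.1 8B model.
vllm serve "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16"
```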

- The above command will begin the evaluation and output progress updates similar to the following (if running on a different server, be sure to update the target!): <img src="https://raw.githubusercontent.com/neuralmagic/guidellm/main/docs/assets/sample-benchmarks.gif" />
+ The above command will begin the evaluation and output progress updates similar to the following (if running on a different server, be sure to update the target!): <img src="https://raw.githubusercontent.com/neuralmagic/guidellm/main/docs/assets/sample-benchmarks.gif" />

Notes:

- The `--target` flag specifies the server hosting the model. In this case, it is a local vLLM server.
- The `--model` flag specifies the model to evaluate. The model name should match the name of the model deployed on the server.
- - By default, GuideLLM will run a `sweep` of performance evaluations across different request rates, each lasting 120 seconds. The results will be saved to a local directory.
+ - By default, GuideLLM will run a `sweep` of performance evaluations across different request rates, each lasting 120 seconds, and the results are printed to the terminal.
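The evaluation command referenced above is not included in this hunk; a minimal sketch of an invocation using the flags described in these notes might look like the following (the target URL and model name are placeholders for a local vLLM server and its deployed model):

```bash
# Sketch only: benchmark a locally served model with the default sweep strategy.
# Replace the target and model with the values for your own deployment.
guidellm \
  --target "http://localhost:8000/v1" \
  --model "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16"
```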
#### 3. Analyze the Results

- After the evaluation is completed, GuideLLM will output a summary of the results, including various performance metrics. The results will also be saved to a local directory for further analysis.
+ After the evaluation is completed, GuideLLM will summarize the results, including various performance metrics.

The output results will start with a summary of the evaluation, followed by the requests data for each benchmark run. For example, the start of the output will look like the following:

@@ -90,26 +90,34 @@ The end of the output will include important performance summary metrics such as

<img alt="Sample GuideLLM benchmark end output" src="https://github.com/neuralmagic/guidellm/blob/main/docs/assets/sample-output-end.png" />

+ #### 4. Use the Results
+
+ The results from GuideLLM can be used to optimize your LLM deployment for performance, resource efficiency, and cost. By analyzing the performance metrics, you can identify bottlenecks, determine the optimal request rate, and select the most cost-effective hardware configuration for your deployment.
+
+ For example, if we deploy a latency-sensitive chat application, we likely want to optimize for low time to first token (TTFT) and inter-token latency (ITL). A reasonable threshold will depend on the application requirements. Still, we may want to ensure TTFT is under 200ms and ITL is under 50ms (20 updates per second). From the example results above, we can see that the model can meet these requirements on average at a request rate of 2.37 requests per second for each server. If you'd like to target a higher percentage of requests meeting these requirements, you can use the **Performance Stats by Benchmark** section to determine the rate at which 90% or 95% of requests meet these requirements.
+
+ If we deploy a throughput-sensitive summarization application, we likely want to optimize for the maximum requests the server can handle per second. In this case, the throughput benchmark shows that the server maxes out at 4.06 requests per second. If we need to handle more requests, consider adding more servers or upgrading the hardware configuration.
### Configurations

GuideLLM provides various CLI and environment options to customize evaluations, including setting the duration of each benchmark run, the number of concurrent requests, and the request rate.

- Some common configurations for the CLI include:
+ Some typical configurations for the CLI include:

- `--rate-type`: The rate to use for benchmarking. Options include `sweep`, `synchronous`, `throughput`, `constant`, and `poisson`.
- - `--rate-type sweep`: (default) Sweep runs through the full range of performance for the server. Starting with a `synchronous` rate first, then `throughput`, and finally 10 `constant` rates between the min and max request rate found.
- - `--rate-type synchronous`: Synchronous runs requests in a synchronous manner, one after the other.
+ - `--rate-type sweep`: (default) Sweep runs through the full range of the server's performance, starting with a `synchronous` rate, then `throughput`, and finally 10 `constant` rates between the min and max request rate found.
+ - `--rate-type synchronous`: Synchronous runs requests synchronously, one after the other.
- `--rate-type throughput`: Throughput runs requests in a throughput manner, sending requests as fast as possible.
- - `--rate-type constant`: Constant runs requests at a constant rate. Specify the rate in requests per second with the `--rate` argument. For example, `--rate 10` or multiple rates with `--rate 10 --rate 20 --rate 30`.
- - `--rate-type poisson`: Poisson draws from a poisson distribution with the mean at the specified rate, adding some real-world variance to the runs. Specify the rate in requests per second with the `--rate` argument. For example, `--rate 10` or multiple rates with `--rate 10 --rate 20 --rate 30`.
+ - `--rate-type constant`: Constant runs requests at a constant rate. Specify the request rate per second with the `--rate` argument. For example, `--rate 10` or multiple rates with `--rate 10 --rate 20 --rate 30`.
+ - `--rate-type poisson`: Poisson draws from a Poisson distribution with the mean at the specified rate, adding some real-world variance to the runs. Specify the request rate per second with the `--rate` argument. For example, `--rate 10` or multiple rates with `--rate 10 --rate 20 --rate 30`.
- `--data-type`: The data to use for the benchmark. Options include `emulated`, `transformers`, and `file`.
- - `--data-type emulated`: Emulated supports an EmulationConfig in string or file format for the `--data` argument to generate fake data. Specify the number of prompt tokens at a minimum and optionally the number of output tokens and other params for variance in the length. For example, `--data "prompt_tokens=128"`, `--data "prompt_tokens=128,generated_tokens=128"`, or `--data "prompt_tokens=128,prompt_tokens_variance=10"`.
+ - `--data-type emulated`: Emulated supports an EmulationConfig in string or file format for the `--data` argument to generate fake data. Specify the number of prompt tokens at a minimum and optionally the number of output tokens and other parameters for variance in the length. For example, `--data "prompt_tokens=128"`, `--data "prompt_tokens=128,generated_tokens=128"`, or `--data "prompt_tokens=128,prompt_tokens_variance=10"`.
- `--data-type file`: File supports a file path or URL to a file for the `--data` argument. The file should contain data encoded as a CSV, JSONL, TXT, or JSON/YAML file with a single prompt per line for CSV, JSONL, and TXT or a list of prompts for JSON/YAML. For example, `--data "data.txt"` where data.txt contents are `"prompt1\nprompt2\nprompt3"`.
- - `--data-type transformers`: Transformers supports a dataset name or dataset file path for the `--data` argument. For example, `--data "neuralmagic/LLM_compression_calibration"`.
+ - `--data-type transformers`: Transformers supports a dataset name or file path for the `--data` argument. For example, `--data "neuralmagic/LLM_compression_calibration"`.
- `--max-seconds`: The maximum number of seconds to run each benchmark. The default is 120 seconds.
- `--max-requests`: The maximum number of requests to run in each benchmark.
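Putting several of these options together, a hypothetical invocation might look like the following (the flag names come from the list above, while the target URL, model name, and token counts are illustrative placeholders):

```bash
# Sketch only: a 120-second constant-rate benchmark at 10 requests per second
# using emulated data with 128 prompt tokens and 128 generated tokens.
guidellm \
  --target "http://localhost:8000/v1" \
  --model "<your-deployed-model>" \
  --data-type emulated \
  --data "prompt_tokens=128,generated_tokens=128" \
  --rate-type constant \
  --rate 10 \
  --max-seconds 120
```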

- For a full list of supported CLI arguments, run the following command:
+ For a complete list of supported CLI arguments, run the following command:

```bash
guidellm --help
```
@@ -121,7 +129,7 @@ For a full list of configuration options, run the following command:

```bash
guidellm-config
```

- For further information, see the [GuideLLM Documentation](#Documentation).
+ See the [GuideLLM Documentation](#Documentation) for further information.

## Resources

@@ -131,7 +139,7 @@ Our comprehensive documentation provides detailed guides and resources to help y

### Core Docs

- - [**Installation Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/install.md) - Step-by-step instructions to install GuideLLM, including prerequisites and setup tips.
+ - [**Installation Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/install.md) - This guide provides step-by-step instructions for installing GuideLLM, including prerequisites and setup tips.
- [**Architecture Overview**](https://github.com/neuralmagic/guidellm/tree/main/docs/architecture.md) - A detailed look at GuideLLM's design, components, and how they interact.
- [**CLI Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/guides/cli.md) - Comprehensive usage information for running GuideLLM via the command line, including available commands and options.
- [**Configuration Guide**](https://github.com/neuralmagic/guidellm/tree/main/docs/guides/configuration.md) - Instructions on configuring GuideLLM to suit various deployment needs and performance goals.
@@ -142,7 +150,7 @@ Our comprehensive documentation provides detailed guides and resources to help y

### Releases

- Stay updated with the latest releases by visiting our [GitHub Releases page](https://github.com/neuralmagic/guidellm/releases) and reviewing the release notes.
+ Visit our [GitHub Releases page](https://github.com/neuralmagic/guidellm/releases) and review the release notes to stay updated with the latest releases.

### License

@@ -160,12 +168,12 @@ We appreciate contributions to the code, examples, integrations, documentation,

### Join

- We invite you to join our growing community of developers, researchers, and enthusiasts passionate about LLMs and optimization. Whether you’re looking for help, want to share your own experiences, or stay up to date with the latest developments, there are plenty of ways to get involved:
+ We invite you to join our growing community of developers, researchers, and enthusiasts passionate about LLMs and optimization. Whether you're looking for help, want to share your own experiences, or stay up to date with the latest developments, there are plenty of ways to get involved:

- [**Neural Magic Community Slack**](https://neuralmagic.com/community/) - Join our Slack channel to connect with other GuideLLM users and developers. Ask questions, share your work, and get real-time support.
- [**GitHub Issues**](https://github.com/neuralmagic/guidellm/issues) - Report bugs, request features, or browse existing issues. Your feedback helps us improve GuideLLM.
- - [**Subscribe to Updates**](https://neuralmagic.com/subscribe/) - Sign up to receive the latest news, announcements, and updates about GuideLLM, webinars, events, and more.
- - [**Contact Us**](http://neuralmagic.com/contact/) - Use our contact form for more general questions about Neural Magic or GuideLLM.
+ - [**Subscribe to Updates**](https://neuralmagic.com/subscribe/) - Sign up for the latest news, announcements, and updates about GuideLLM, webinars, events, and more.
+ - [**Contact Us**](http://neuralmagic.com/contact/) - Use our contact form for general questions about Neural Magic or GuideLLM.