# Over-Saturation Stopping

GuideLLM provides over-saturation detection (OSD) to automatically stop benchmarks when a model becomes over-saturated. This feature helps prevent wasted compute resources and ensures that benchmark results remain valid by detecting when the response rate can no longer keep up with the request rate.

## What is Over-Saturation?

Over-saturation occurs when an LLM inference server receives requests faster than it can process them, causing a queue to build up. As the queue grows, the server takes progressively longer to start handling each request, leading to degraded performance metrics. When a performance benchmarking tool over-saturates an LLM inference server, the metrics it measures become significantly skewed, rendering them useless.

Think of it like a cashier getting flustered during a sudden rush. As customers arrive faster than the cashier can serve them (the load), the line keeps growing until there is no room for additional customers. Letting a benchmark run in this state only wastes costly machine time, which is why GuideLLM can automatically detect over-saturation and stop the run.

## How It Works

GuideLLM's OSD algorithm uses statistical slope detection to identify when a model becomes over-saturated. The algorithm tracks two key metrics over time:

1. **Concurrent Requests**: The number of requests being processed simultaneously
2. **Time-to-First-Token (TTFT)**: The latency for the first token of each response

For each metric, the algorithm:

- Maintains a sliding window of recent data points
- Calculates the linear regression slope using online statistics
- Computes the margin of error (MOE) using t-distribution confidence intervals
- Detects positive slopes with low MOE, indicating degradation

Over-saturation is detected when:

- Both concurrent requests and TTFT show statistically significant positive slopes
- The minimum duration threshold has been met
- Sufficient data points are available for reliable slope estimation

When over-saturation is detected, the constraint automatically stops request queuing and optionally stops processing of existing requests, preventing further resource waste.
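
To make the mechanics concrete, the following sketch shows a simplified, self-contained version of this kind of check for a single metric. It is an illustration of the approach rather than GuideLLM's actual implementation: one such tracker would be kept for concurrent requests and one for TTFT, and over-saturation would only be flagged when both report a reliable positive slope (subject to the minimum-duration and minimum-window checks above). The `scipy.stats` call supplies the t-distribution critical value; GuideLLM's exact statistics and threshold handling may differ.

```python
# Simplified sketch of slope-based degradation detection for one metric
# (e.g. concurrent requests or TTFT). Illustrative only, not GuideLLM's code.
from dataclasses import dataclass, field

from scipy import stats


@dataclass
class SlopeTracker:
    max_window_seconds: float = 120.0  # prune samples older than this
    minimum_window_size: int = 5       # samples needed before estimating a slope
    moe_threshold: float = 2.0         # maximum margin of error accepted for the slope
    confidence: float = 0.95           # confidence level for the t-interval
    samples: list = field(default_factory=list)  # (timestamp, value) pairs

    def add(self, timestamp: float, value: float) -> None:
        """Record a sample and drop anything outside the sliding window."""
        self.samples.append((timestamp, value))
        cutoff = timestamp - self.max_window_seconds
        self.samples = [(t, v) for t, v in self.samples if t >= cutoff]

    def degrading(self) -> bool:
        """True if the metric shows a positive slope with a small margin of error."""
        n = len(self.samples)
        if n < max(self.minimum_window_size, 3):
            return False
        xs = [t for t, _ in self.samples]
        ys = [v for _, v in self.samples]
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mean_x) ** 2 for x in xs)
        if sxx == 0:
            return False
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        slope = sxy / sxx
        intercept = mean_y - slope * mean_x
        # Standard error of the slope from the regression residuals.
        sse = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
        se_slope = (sse / (n - 2) / sxx) ** 0.5
        # Margin of error from a two-sided t-distribution confidence interval.
        t_crit = stats.t.ppf(0.5 + self.confidence / 2, df=n - 2)
        moe = t_crit * se_slope
        return slope > 0 and moe < self.moe_threshold
```

A production implementation would update the regression statistics incrementally (online) instead of recomputing them from the window on every call, but the detection condition is the same in spirit.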

## Usage

### Basic Usage

Enable over-saturation detection with default settings:

```bash
guidellm benchmark \
  --target http://localhost:8000 \
  --profile throughput \
  --rate 10 \
  --detect-saturation
```

### Advanced Configuration

Configure detection parameters using a JSON dictionary:

```bash
guidellm benchmark \
  --target http://localhost:8000 \
  --profile concurrent \
  --rate 16 \
  --over-saturation '{"enabled": true, "min_seconds": 60, "max_window_seconds": 300, "moe_threshold": 1.5}'
```

## Configuration Options

The following parameters can be configured when enabling over-saturation detection:

- **`enabled`** (bool, default: `True`): Whether to stop the benchmark if over-saturation is detected
- **`min_seconds`** (float, default: `30.0`): Minimum seconds before checking for over-saturation. This prevents false positives during the initial warm-up phase.
- **`max_window_seconds`** (float, default: `120.0`): Maximum time window in seconds for data retention. Older data points are automatically pruned to maintain bounded memory usage.
- **`moe_threshold`** (float, default: `2.0`): Margin of error threshold for slope detection. Lower values make detection more sensitive to degradation.
- **`minimum_ttft`** (float, default: `2.5`): Minimum TTFT threshold in seconds for violation counting. Only TTFT values above this threshold are counted as violations.
- **`maximum_window_ratio`** (float, default: `0.75`): Maximum window size as a ratio of total requests. Limits memory usage by capping the number of tracked requests.
- **`minimum_window_size`** (int, default: `5`): Minimum data points required for slope estimation. Ensures statistical reliability before making detection decisions.
- **`confidence`** (float, default: `0.95`): Statistical confidence level for t-distribution calculations (0-1). Higher values require stronger evidence before detecting over-saturation.
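
When GuideLLM is launched from a script rather than an interactive shell, it can be convenient to build this configuration as a dictionary and serialize it for the `--over-saturation` flag shown above. The sketch below simply restates the documented defaults; the target, profile, and rate mirror the earlier examples and should be adjusted for your deployment.

```python
# Build the over-saturation configuration programmatically and pass it to the CLI.
# The values shown are the documented defaults; tune them for your workload.
import json
import subprocess

osd_config = {
    "enabled": True,               # stop the benchmark when over-saturation is detected
    "min_seconds": 30.0,           # warm-up grace period before checks begin
    "max_window_seconds": 120.0,   # sliding window for retained data points
    "moe_threshold": 2.0,          # margin-of-error threshold for slope detection
    "minimum_ttft": 2.5,           # TTFT values above this count as violations
    "maximum_window_ratio": 0.75,  # cap window size as a fraction of total requests
    "minimum_window_size": 5,      # data points required before estimating a slope
    "confidence": 0.95,            # confidence level for the t-distribution interval
}

subprocess.run(
    [
        "guidellm", "benchmark",
        "--target", "http://localhost:8000",
        "--profile", "concurrent",
        "--rate", "16",
        "--over-saturation", json.dumps(osd_config),
    ],
    check=True,
)
```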

## Use Cases

Over-saturation detection is particularly useful in the following scenarios:

### Stress Testing and Capacity Planning

When testing how your system handles increasing load, over-saturation detection automatically stops benchmarks once the system can no longer keep up, preventing wasted compute time on invalid results.

```bash
guidellm benchmark \
  --target http://localhost:8000 \
  --profile sweep \
  --rate 5 \
  --detect-saturation
```

### Cost-Effective Benchmarking

When running large-scale benchmark matrices across multiple models, GPUs, and configurations, over-saturation detection can significantly reduce costs by stopping invalid runs early.

### Finding Safe Operating Ranges

Use over-saturation detection to identify the maximum sustainable throughput for your deployment, helping you set appropriate rate limits and capacity planning targets.

## Interpreting Results

When over-saturation detection is enabled, the benchmark output includes metadata about the detection state. This information is recorded in the scheduler action metadata and includes:

- **`is_over_saturated`** (bool): Whether over-saturation was detected at the time of evaluation
- **`concurrent_slope`** (float): The calculated slope for concurrent requests
- **`concurrent_slope_moe`** (float): The margin of error for the concurrent requests slope
- **`concurrent_n`** (int): The number of data points used for concurrent requests slope calculation
- **`ttft_slope`** (float): The calculated slope for TTFT
- **`ttft_slope_moe`** (float): The margin of error for the TTFT slope
- **`ttft_n`** (int): The number of data points used for TTFT slope calculation
- **`ttft_violations`** (int): The count of TTFT values exceeding the minimum threshold

These metrics can help you understand why over-saturation was detected and fine-tune the detection parameters if needed.
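
As a rough illustration, suppose the detection metadata has already been pulled out of the benchmark output as a plain dictionary (where exactly it appears in the JSON depends on the GuideLLM version and output format; the values below are made up for the example). A few lines are enough to summarize the detection state:

```python
# Minimal sketch: summarize over-saturation detection metadata that has already
# been extracted from the benchmark output. The values are illustrative only.
detection = {
    "is_over_saturated": True,
    "concurrent_slope": 0.42,
    "concurrent_slope_moe": 0.11,
    "concurrent_n": 87,
    "ttft_slope": 0.031,
    "ttft_slope_moe": 0.009,
    "ttft_n": 87,
    "ttft_violations": 23,
}

for metric in ("concurrent", "ttft"):
    slope = detection[f"{metric}_slope"]
    moe = detection[f"{metric}_slope_moe"]
    n = detection[f"{metric}_n"]
    trend = "rising" if slope > 0 else "flat or falling"
    print(f"{metric}: slope={slope:+.3f} (MOE {moe:.3f}, n={n}) -> {trend}")

print(f"TTFT violations above the minimum threshold: {detection['ttft_violations']}")
print("Over-saturated" if detection["is_over_saturated"] else "Not over-saturated")
```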

## Example: Complete Benchmark with Over-Saturation Detection

```bash
guidellm benchmark \
  --target http://localhost:8000 \
  --profile concurrent \
  --rate 16 \
  --data "prompt_tokens=256,output_tokens=128" \
  --max-seconds 300 \
  --over-saturation '{"enabled": true, "min_seconds": 30, "max_window_seconds": 120}' \
  --outputs json,html
```

This example:

- Runs a concurrent benchmark with 16 simultaneous requests
- Uses synthetic data with 256 prompt tokens and 128 output tokens
- Enables over-saturation detection with custom timing parameters
- Sets a maximum duration of 300 seconds (as a fallback)
- Outputs results in both JSON and HTML formats

## Additional Resources

For more in-depth information about over-saturation detection, including the algorithm development, evaluation metrics, and implementation details, see the following Red Hat Developer blog posts:

- [Reduce LLM benchmarking costs with oversaturation detection](https://developers.redhat.com/articles/2025/11/18/reduce-llm-benchmarking-costs-oversaturation-detection) - An introduction to the problem of over-saturation and why it matters for LLM benchmarking
- [Defining success: Evaluation metrics and data augmentation for oversaturation detection](https://developers.redhat.com/articles/2025/11/20/oversaturation-detection-evaluation-metrics) - How to evaluate the performance of an OSD algorithm through custom metrics, dataset labeling, and load augmentation techniques
- [Building an oversaturation detector with iterative error analysis](https://developers.redhat.com/articles/2025/11/24/building-oversaturation-detector-iterative-error-analysis) - A detailed walkthrough of how the OSD algorithm was built