_benchmark/user-guide/optimizing-benchmarks/performance-testing-best-practices.md
16 additions & 15 deletions
@@ -14,17 +14,17 @@ When conducting performance testing with OpenSearch Benchmark, it's crucial to f
Performance testing requires careful attention to the testing environment. A properly configured environment is crucial for obtaining reliable and reproducible results.
-When setting up your testing environment, it's essential to use hardware that closely matches your production environment. Using development or underpowered hardware will not provide meaningful results that can translate to production performance.
+When setting up your testing environment, it's essential to use hardware that closely matches your production environment. Using development or underpowered hardware will not provide meaningful results that can translate to production performance. Local machines often have limited hardware, and local development libraries can conflict with the workload's library, preventing the Benchmark test from running effectively.
-For best results, make sure that your cluster or test machine fulfills the following recommended minimum requirements:
+For best results, make sure that the load generation host or machine running OpenSearch Benchmark meets the following minimum hardware requirements:
- CPU: 8+ cores
- RAM: 32GB+
- Storage: SSD/NVMe
- Network: 10Gbps
-It's recommended to provision a test cluster and configure its settings to reflect what you are most likely to deploy in production. Local machines often have limited hardware, and local development libraries can conflict with the workload's library, preventing the Benchmark test from running effectively.
+It's recommended to provision a test cluster and configure its settings to reflect what you are most likely to deploy in production.
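As a quick sanity check before testing, you can confirm that the nodes in the test cluster actually have the resources you expect. The following is a minimal sketch using the `_cat/nodes` API; the host is a placeholder, and the column names should be verified against your OpenSearch version:

```bash
# List per-node CPU usage, max heap, total RAM, and disk capacity
curl -s "localhost:9200/_cat/nodes?v&h=name,cpu,heap.max,ram.max,disk.total"
```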
## Test configuration
@@ -51,15 +51,15 @@ The following example shows a basic benchmark configuration file. This configura
}
```
-### Cluster settings
+### Index settings
-Your OpenSearch cluster settings should be optimized for your specific use case, such as the following index settings:
+Your OpenSearch index settings should be optimized for your specific use case. Try to set the number of shards per index to match your production cluster. However, if you're a developer who wants to focus on a single shard's performance and limit the variables affecting it, use a single primary shard, as shown in the following example `index_settings`:
```json
{
  "index_settings": {
-    "number_of_shards": 3,
-    "number_of_replicas": 1,
+    "number_of_shards": 1,
+    "number_of_replicas": 0,
    "refresh_interval": "30s"
  }
}
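For reference, settings like these can also be applied when creating a test index by hand. The following is a minimal sketch; the index name `benchmark-test` and the host are placeholders:

```bash
# Create a test index with a single primary shard and no replicas
curl -s -X PUT "localhost:9200/benchmark-test" \
  -H 'Content-Type: application/json' \
  -d '{
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0,
      "refresh_interval": "30s"
    }
  }'
```

In most benchmark runs the workload itself creates the index, so these values are usually supplied through the workload's `index_settings` rather than by hand.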
@@ -72,7 +72,7 @@ These settings offer ample storage space for your documents and test results wit
Running benchmark tests involves monitoring the system during the test and ensuring consistent conditions across test runs.
-While you can run a basic test, you can customize your test run with additional Benchmark command options. The following example runs a `geonames` workload test that targets a specific host, and outputs the test results as a `csv`:
+While you can run a basic test, you can customize your test run with additional [Benchmark command options]({{site.url}}{{site.baseurl}}/benchmark/reference/commands/index/). The following example runs a `geonames` workload test that targets a specific host and outputs the test results as a `csv` file, which can be used for further analysis of the benchmark's metrics:
```bash
opensearch-benchmark run \
@@ -94,6 +94,7 @@ vmstat 1
# Monitor OpenSearch metrics
96
96
curl localhost:9200/_cat/nodes?v
+curl localhost:9200/_cat/indices?v
# Monitor cluster health
curl localhost:9200/_cluster/health?pretty
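# Optional: watch thread pool pressure during the run (a sketch; verify
# the _cat/thread_pool column names against your version with ?help)
curl -s "localhost:9200/_cat/thread_pool/search,write?v&h=name,active,queue,rejected"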
@@ -152,7 +153,7 @@ OpenSearch Benchmark calculates metrics differently from traditional client-serv
To integrate OpenSearch Benchmark results with OpenSearch Dashboards, you can perform the following steps:
-1. Configure OpenSearch Benchmark to store results in OpenSearch.
+1. [Configure OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/user-guide/install-and-configure/configuring-benchmark/) to store results in OpenSearch (see the sketch after this list).
2. Create index patterns in OpenSearch Dashboards for the benchmark results.
3. Create visualizations and dashboards to analyze the benchmark data.
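For step 1, the following is a rough sketch of what the results datastore configuration might look like. The section and key names are assumptions based on common OpenSearch Benchmark setups, and the host and credentials are placeholders; verify the exact names in the configuration documentation linked above:

```bash
# Append a results datastore section to the OSB configuration file
cat >> ~/.osb/benchmark.ini <<'EOF'
[results_publishing]
datastore.type = opensearch
datastore.host = metrics-cluster.example.com
datastore.port = 9200
datastore.secure = true
datastore.user = metrics_writer
datastore.password = changeme
EOF
```

After a run completes, the stored results indexes can then back the index patterns and visualizations in steps 2 and 3.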
@@ -165,13 +166,13 @@ When conducting performance tests with OpenSearch Benchmark, it's important to b
Proper warmup is crucial for accurate performance testing. Without an adequate warmup period, your test results may be skewed by initial system instabilities or caching effects.
-Don't run tests without a warmup period
+Don't run tests without a warmup period.
Instead, always include an adequate warmup period in your tests. This allows the system to reach a steady state before measurements begin. The following example gives a `geonames` run a warmup period of `300s`.
The appropriate warmup time can vary depending on your specific workload and system configuration. Start with at least 5 minutes (300 seconds) and adjust as needed based on your observations.
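In a workload file, warmup is typically expressed per schedule entry. The following sketch shows the general shape; the operation name, client count, and time periods are hypothetical, and `warmup-time-period` is assumed to be specified in seconds:

```bash
# Write an illustrative schedule fragment for a test procedure
cat > warmup-schedule-example.json <<'EOF'
{
  "schedule": [
    {
      "operation": "index-append",
      "warmup-time-period": 300,
      "time-period": 1200,
      "clients": 8
    }
  ]
}
EOF
```

Requests issued during the warmup window are executed but not counted toward the measured results.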
@@ -180,13 +181,13 @@ The appropriate warmup time can vary depending on your specific workload and sys
One of the most common mistakes in performance testing is comparing results from different environments. Results obtained from a laptop or development machine are not comparable to those from a production server due to differences in hardware, network conditions, and other environmental factors.
-Instead, make that all comparisons are made within the same or identical environments. If you need to compare different configurations, make sure to change only one variable at a time while keeping the environment consistent.
+Instead, make sure that all comparisons are made within the same or identical environments. If you need to compare different configurations, change only one variable at a time while keeping the environment consistent.
### Document environment details
Proper documentation of your test environment is crucial for reproducibility and accurate analysis. Without detailed environment information, it becomes difficult to interpret results or reproduce tests in the future.
-Don't omit environment details from your test reports:
+Don't omit environment details from your test reports.
Instead, always document comprehensive details about your test environment. This should include hardware specifications, software versions, and any relevant configuration settings, as shown in the following example:
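A hypothetical record of this kind, with all values illustrative, might be captured automatically at the start of each run:

```bash
# Snapshot the environment next to the test results
cat > test-environment.txt <<EOF
date: $(date -u +%Y-%m-%dT%H:%M:%SZ)
os: $(uname -srm)
cpu_cores: $(nproc)
memory: $(free -h | awk '/^Mem:/ {print $2}')
opensearch_version: $(curl -s localhost:9200 | grep '"number"')
EOF
```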
@@ -211,7 +212,7 @@ By documenting these details, you ensure that your test results can be properly
When encountering issues or unexpected results, OpenSearch Benchmark logs can provide valuable insights. Here's how to effectively use logs for troubleshooting:
-1. Navigate to the log file: The main log file is typically located at `~/.benchmark/logs/benchmark.log`.
+1. Navigate to the log file: The main log file is typically located at `~/.osb/logs/benchmark.log`.
2. Look for error messages. Search for lines containing "ERROR" or "WARNING" to identify potential issues.
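   For example, a quick scan of the log might look like the following sketch (the path is taken from step 1; adjust it if your installation differs):

```bash
# Show the most recent errors and warnings with line numbers
grep -nE 'ERROR|WARNING' ~/.osb/logs/benchmark.log | tail -n 20
```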
@@ -229,7 +230,7 @@ Security should never be an afterthought in performance testing. It's important
### SSL configuration
-Here's an example of how to configure SSL for secure communications during benchmark testing:
+The following example shows how to configure SSL in `opensearch.yml` for secure communication during benchmark testing:
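A minimal sketch of such settings follows. The property names are the OpenSearch Security plugin's HTTP-layer TLS settings, and the certificate file names are placeholders relative to the OpenSearch config directory:

```bash
# Append HTTP-layer TLS settings to opensearch.yml
cat >> /etc/opensearch/opensearch.yml <<'EOF'
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: node.pem
plugins.security.ssl.http.pemkey_filepath: node-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
EOF
```

On the load generation side, OpenSearch Benchmark's `--client-options` flag (for example, `use_ssl:true,verify_certs:true`) is the usual counterpart for connecting to a TLS-enabled cluster.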