
Commit f4b530f

Apply suggestions from code review
Signed-off-by: Naarcha-AWS <[email protected]>
1 parent 3cdea5c commit f4b530f

1 file changed (+16, -15 lines)


_benchmark/user-guide/optimizing-benchmarks/performance-testing-best-practices.md

@@ -14,17 +14,17 @@ When conducting performance testing with OpenSearch Benchmark, it's crucial to f
 
 Performance testing requires careful attention to the testing environment. A properly configured environment is crucial for obtaining reliable and reproducible results.
 
-When setting up your testing environment, it's essential to use hardware that closely matches your production environment. Using development or underpowered hardware will not provide meaningful results that can translate to production performance.
+When setting up your testing environment, it's essential to use hardware that closely matches your production environment. Using development or underpowered hardware will not provide meaningful results that can translate to production performance. Local machines often have limited hardware, and local development libraries can conflict with the workload's library, preventing the Benchmark test from running effectively.
 
-For best results, make sure that your cluster or test machine fulfills the following recommended minimum requirements:
+For best results, make sure that your load generation host or the machine running OpenSearch Benchmark meets the following minimum hardware requirements:
 
 - CPU: 8+ cores
 - RAM: 32GB+
 - Storage: SSD/NVMe
 - Network: 10Gbps
 
-It's recommended to provision a test cluster and configure its settings to reflect what you are most likely to deploy in production. Local machines often have limited hardware, and local development libraries can conflict with the workload's library, preventing the Benchmark test from running effectively.
+It's recommended to provision a test cluster and configure its settings to reflect what you are most likely to deploy in production.
 
 ## Test configuration
@@ -51,15 +51,15 @@ The following example shows a basic benchmark configuration file. This configura
 }
 ```
 
-### Cluster settings
+### Index settings
 
-Your OpenSearch cluster settings should be optimized for your specific use case, such as the following index settings:
+Your OpenSearch index settings should be optimized for your specific use case. Try to set the number of shards per index to match your production cluster. However, if you're a developer who wants to focus on a single shard's performance and limit the variables impacting performance, use a single primary shard, as shown in the following example `index_settings`:
 
 ```json
 {
   "index_settings": {
-    "number_of_shards": 3,
-    "number_of_replicas": 1,
+    "number_of_shards": 1,
+    "number_of_replicas": 0,
     "refresh_interval": "30s"
   }
 }
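The single-shard settings above can also be generated programmatically when switching between production-like and isolated single-shard test runs. A minimal sketch in Python; the `index_settings` helper below is hypothetical, not part of OpenSearch Benchmark, and the production-like values (`3` shards, `1` replica) are simply the example values from the removed lines:

```python
import json

def index_settings(single_shard: bool) -> dict:
    """Build an index_settings block (hypothetical helper).

    single_shard=True isolates one primary shard with no replicas,
    limiting the variables that affect performance; otherwise use
    values that match your production cluster.
    """
    shards, replicas = (1, 0) if single_shard else (3, 1)
    return {
        "index_settings": {
            "number_of_shards": shards,
            "number_of_replicas": replicas,
            "refresh_interval": "30s",
        }
    }

print(json.dumps(index_settings(single_shard=True), indent=2))
```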
@@ -72,7 +72,7 @@ These settings offer ample storage space for your documents and test results wit
 
 Running benchmark tests involves monitoring the system during the test and ensuring consistent conditions across test runs.
 
-While you can run a basic test, you can customize your test run with additional Benchmark command options. The following example runs a `geonames` workload test that targets a specific host, and outputs the test results as a `csv`:
+While you can run a basic test, you can customize your test run with additional [Benchmark command options]({{site.url}}{{site.baseurl}}/benchmark/reference/commands/index/). The following example runs a `geonames` workload test that targets a specific host, and outputs the test results as a `csv`, which can be used for further analysis of the benchmark's metrics:
 
 ```bash
 opensearch-benchmark run \
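Because the run above writes its results as CSV, the output can be post-processed with standard tooling. A small analysis sketch, assuming a hypothetical results file whose rows have `Metric,Task,Value,Unit` columns; the real column layout can differ between OpenSearch Benchmark versions, so adjust the field names to match your output:

```python
import csv
import io

# Hypothetical sample mimicking an OpenSearch Benchmark CSV results file;
# the actual columns may differ between versions.
SAMPLE = """\
Metric,Task,Value,Unit
Min Throughput,index-append,1050.2,docs/s
Mean Throughput,index-append,1200.5,docs/s
99th percentile latency,index-append,250.3,ms
"""

def metrics_by_name(csv_text: str) -> dict:
    """Index result rows by metric name for quick lookup."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return {row["Metric"]: (float(row["Value"]), row["Unit"]) for row in rows}

results = metrics_by_name(SAMPLE)
print(results["Mean Throughput"])  # -> (1200.5, 'docs/s')
```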
@@ -94,6 +94,7 @@ vmstat 1
 
 # Monitor OpenSearch metrics
 curl localhost:9200/_cat/nodes?v
+curl localhost:9200/_cat/indices?v
 
 # Monitor cluster health
 curl localhost:9200/_cluster/health?pretty
@@ -152,7 +153,7 @@ OpenSearch Benchmark calculates metrics differently from traditional client-serv
 
 To integrate OpenSearch Benchmark results with OpenSearch Dashboards, you can perform the following steps:
 
-1. Configure OpenSearch Benchmark to store results in OpenSearch.
+1. [Configure OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/user-guide/install-and-configure/configuring-benchmark/) to store results in OpenSearch.
 2. Create index patterns in OpenSearch Dashboards for the benchmark results.
 3. Create visualizations and dashboards to analyze the benchmark data.

@@ -165,13 +166,13 @@ When conducting performance tests with OpenSearch Benchmark, it's important to b
 
 Proper warmup is crucial for accurate performance testing. Without an adequate warmup period, your test results may be skewed by initial system instabilities or caching effects.
 
-Don't run tests without a warmup period
+Don't run tests without a warmup period.
 
 Instead, always include an adequate warmup period in your tests. This allows the system to reach a steady state before measurements begin. The following example gives a `geonames` run a warmup period of `300s`.
 
 ```bash
 # DO: Include adequate warmup
-opensearch-benchmark run --workload=geonames --warmup-time-period=300
+opensearch-benchmark execute-test --workload=geonames --workload-params="warmup_time_period:300"
 ```
 
 The appropriate warmup time can vary depending on your specific workload and system configuration. Start with at least 5 minutes (300 seconds) and adjust as needed based on your observations.
@@ -180,13 +181,13 @@ The appropriate warmup time can vary depending on your specific workload and sys
 
 One of the most common mistakes in performance testing is comparing results from different environments. Results obtained from a laptop or development machine are not comparable to those from a production server due to differences in hardware, network conditions, and other environmental factors.
 
-Instead, make that all comparisons are made within the same or identical environments. If you need to compare different configurations, make sure to change only one variable at a time while keeping the environment consistent.
+Instead, make sure that all comparisons are made within the same or identical environments. If you need to compare different configurations, make sure to change only one variable at a time while keeping the environment consistent.
 
 ### Document environment details
 
 Proper documentation of your test environment is crucial for reproducibility and accurate analysis. Without detailed environment information, it becomes difficult to interpret results or reproduce tests in the future.
 
-Don't omit environment details from your test reports:
+Don't omit environment details from your test reports.
 
 Instead, always document comprehensive details about your test environment. This should include hardware specifications, software versions, and any relevant configuration settings, as shown in the following example:
@@ -211,7 +212,7 @@ By documenting these details, you ensure that your test results can be properly
 
 When encountering issues or unexpected results, OpenSearch Benchmark logs can provide valuable insights. Here's how to effectively use logs for troubleshooting:
 
-1. Navigate to the log file: The main log file is typically located at `~/.benchmark/logs/benchmark.log`.
+1. Navigate to the log file: The main log file is typically located at `~/.osb/logs/benchmark.log`.
 
 2. Look for error messages: Search for lines containing "ERROR" or "WARNING" to identify potential issues.
@@ -229,7 +230,7 @@ Security should never be an afterthought in performance testing. It's important
 
 ### SSL configuration
 
-Here's an example of how to configure SSL for secure communications during benchmark testing:
+The following example shows how to configure SSL in `opensearch.yml` for secure communications during benchmark testing:
 
 ```yaml
 security:
