docs/reference/troubleshooting/common-issues/high-cpu-usage.asciidoc (+11 −9)
@@ -64,12 +64,12 @@ High CPU usage is often caused by excessive JVM garbage collection (GC) activity
For optimal JVM performance, garbage collection should meet these criteria:
-* Young GC completes quickly (ideally within 50 ms).
-* Young GC does not occur too frequently (approximately once every 10 seconds).
-* Old GC completes quickly (ideally within 1 second).
-* Old GC does not occur too frequently (once every 10 minutes or less frequently).
+1. Young GC completes quickly (ideally within 50 ms).
+2. Young GC does not occur too frequently (approximately once every 10 seconds).
+3. Old GC completes quickly (ideally within 1 second).
+4. Old GC does not occur too frequently (once every 10 minutes or less frequently).
-Excessive JVM garbage collection usually indicates high heap memory usage. Common reasons for increased heap memory usage include:
+Excessive JVM garbage collection usually indicates high heap memory usage. Common potential reasons for increased heap memory usage include:
* Oversharding of indices
* Very large aggregation queries
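The GC criteria in the hunk above reduce to simple arithmetic on the collector counters that the nodes stats API (`GET _nodes/stats/jvm`) reports per node. A minimal sketch of that check — the field names follow the nodes stats response shape, but the sample counter values and thresholds here are illustrative, not measured data:

```python
# Sketch: evaluate the documented GC criteria from nodes-stats-style
# JVM collector counters. Sample numbers below are hypothetical.

def gc_health(collector: dict, uptime_ms: int) -> dict:
    """Average GC pause and average interval between GCs for one collector."""
    count = collector["collection_count"]
    total_ms = collector["collection_time_in_millis"]
    avg_pause_ms = total_ms / count if count else 0.0
    avg_interval_ms = uptime_ms / count if count else float("inf")
    return {"avg_pause_ms": avg_pause_ms, "avg_interval_ms": avg_interval_ms}

# Hypothetical sample: 600 young GCs totalling 18 s over a 2-hour uptime.
young = gc_health(
    {"collection_count": 600, "collection_time_in_millis": 18_000},
    uptime_ms=7_200_000,
)
# Criteria 1 and 2: pause under 50 ms, roughly one GC per 10 s or less often.
young_ok = young["avg_pause_ms"] <= 50 and young["avg_interval_ms"] >= 10_000
```

With these sample counters the average pause is 30 ms and GCs occur every 12 s, so both young-GC criteria pass; the same function applies to the old collector with the 1 s / 10 min thresholds.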
@@ -78,11 +78,11 @@ Excessive JVM garbage collection usually indicates high heap memory usage. Commo
* Improper heap size configuration
* Misconfiguration of JVM new generation ratio (-XX:NewRatio)
-**Hotspotting**
+**Hot spotting**
-You might experience high CPU usage on specific data nodes or an entire <<data-tiers,data tier>> if traffic isn’t evenly distributed—a scenario known as <<hotspotting,hot spotting>>. This can happen when applications aren’t properly balancing requests across nodes or when “hot” write indices concentrate indexing activity on just one or a few shards.
+You might experience high CPU usage on specific data nodes or an entire <<data-tiers,data tier>> if traffic isn’t evenly distributed—a scenario known as <<hotspotting,hot spotting>>. This commonly occurs when read or write applications don’t properly balance requests across nodes, or when indices receiving heavy write activity (like hot-tier indices) have their shards concentrated on just one or a few nodes.
-For details on diagnosing and resolving hotspotting, see <<hotspotting,hot spotting>>.
+For details on diagnosing and resolving these issues, see <<hotspotting,hot spotting>>.
**Oversharding**
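To make the hot-spotting symptom in the hunk above concrete: one quick signal is shard-placement skew for a write-heavy index, i.e. most of its shards sitting on one or two nodes. A sketch with a hypothetical placement map (the helper and the node names are illustrative, not an Elasticsearch API):

```python
from collections import Counter

def shard_skew(shard_to_node: dict) -> float:
    """Ratio of the busiest node's shard count to the mean per-node count.
    Values well above 1.0 suggest shards are concentrated on few nodes."""
    per_node = Counter(shard_to_node.values())
    mean = sum(per_node.values()) / len(per_node)
    return max(per_node.values()) / mean

# Hypothetical hot write index: 6 shards, 4 of them landed on node "a".
placement = {"s0": "a", "s1": "a", "s2": "a", "s3": "a", "s4": "b", "s5": "c"}
skew = shard_skew(placement)  # 4 shards on "a" vs a mean of 2 per node
```

A skew of 2.0 here means node "a" carries twice the average shard load for this index, so indexing CPU will concentrate there — the pattern the paragraph above describes.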
@@ -91,12 +91,14 @@ If your Elasticsearch cluster contains a large number of shards, you might be fa
Oversharding occurs when there are too many shards, causing each shard to be smaller than optimal. While Elasticsearch doesn’t have a strict minimum shard size, an excessive number of small shards can negatively impact performance. Each shard consumes cluster resources since Elasticsearch must maintain metadata and manage shard states across all nodes.
If you have too many small shards, you can address this by:
+
* Removing empty or unused indices.
* Deleting or closing indices containing outdated or unnecessary data.
* Reindexing smaller shards into fewer, larger shards to optimize cluster performance.
See <<size-your-shards,Size your shards>> for more information.
-**Additional recommendations**
+
+==== Additional recommendations
To further reduce CPU load or mitigate temporary spikes in resource usage, consider these steps:
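The oversharding remediation above ends with reindexing smaller shards into fewer, larger ones; a back-of-the-envelope helper shows the sizing arithmetic. The 30 GB default target used here is an assumed rule of thumb in the tens-of-GB range, not a fixed Elasticsearch limit:

```python
import math

def target_shard_count(total_index_gb: float, target_shard_gb: float = 30.0) -> int:
    """Suggest how many shards to reindex into so each shard lands
    near the target size (always at least one shard)."""
    return max(1, math.ceil(total_index_gb / target_shard_gb))

# Hypothetical: a 120 GB index currently spread over 40 tiny 3 GB shards
# could instead be reindexed into 4 shards of roughly 30 GB each.
suggested = target_shard_count(120.0)
```

This only sizes the destination; the actual consolidation would use reindex (or shrink, where its constraints apply) as the bullet list above describes.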