Modify wording relating to HotSpot JVM compressed oops
This commit modifies the wording around using compressed oops on the
HotSpot JVM. In particular, it makes clear that the boundary is not
exact, removes the use of floating-point values, which can confuse
people trying to set their max heap size on the JVM, and documents how
to check whether the provided settings enable or disable compressed
oops.
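
For example, compressed oops are 32-bit offsets scaled by the default 8-byte
object alignment, so they can address at most 2^32 * 8 bytes = 32 GB. A quick
way to check a candidate max heap size on HotSpot (a sketch; the exact output
format and the precise cutoff vary by JVM version and vendor, so the values
shown below are illustrative):

    # prints the final flag values and exits; output format varies by JVM
    $ java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
         bool UseCompressedOops := true     {lp64_product}
    $ java -Xmx32g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
         bool UseCompressedOops := false    {lp64_product}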
Relates elastic/elasticsearch#15445
510_Deployment/50_heap.asciidoc (+18 -11)
@@ -52,9 +52,10 @@ heap, while leaving the other 50% free. It won't go unused; Lucene will happily
 gobble up whatever is left over.
 
 [[compressed_oops]]
-==== Don't Cross 30.5 GB!
+==== Don't Cross 32 GB!
 There is another reason to not allocate enormous heaps to Elasticsearch. As it turns((("heap", "sizing and setting", "32gb heap boundary")))((("32gb Heap boundary")))
-out, the JVM uses a trick to compress object pointers when heaps are 30.5 GB or less.
+out, the HotSpot JVM uses a trick to compress object pointers when heaps are less
+than around 32 GB.
 
 In Java, all objects are allocated on the heap and referenced by a pointer.
 Ordinary object pointers (OOP) point at these objects, and are traditionally
@@ -74,36 +75,42 @@ reference four billion _objects_, rather than four billion bytes. Ultimately, t
 means the heap can grow to around 32 GB of physical size while still using a 32-bit
 pointer.
 
-Once you cross that magical 30.5 GB boundary, the pointers switch back to
+Once you cross that magical ~32 GB boundary, the pointers switch back to
 ordinary object pointers. The size of each pointer grows, more CPU-memory
 bandwidth is used, and you effectively lose memory. In fact, it takes until around
-40–50 GB of allocated heap before you have the same _effective_ memory of a 30.5 GB
-heap using compressed oops.
+40–50 GB of allocated heap before you have the same _effective_ memory of a
+heap just under 32 GB using compressed oops.
 
 The moral of the story is this: even when you have memory to spare, try to avoid
-crossing the 30.5 GB heap boundary. It wastes memory, reduces CPU performance, and
+crossing the 32 GB heap boundary. It wastes memory, reduces CPU performance, and
 makes the GC struggle with large heaps.
 
+With the HotSpot JVM, you can verify that your max heap size setting enables compressed
+oops by adding `-XX:+PrintFlagsFinal` and checking that the value of the `UseCompressedOops`
+flag is `true`. Do note that the exact cutoff for max heap size in bytes that allows
+compressed oops varies from JVM to JVM, so take caution when taking examples from
+elsewhere and be sure to check your system with your configuration and your JVM.
+
 [role="pagebreak-before"]
 .I Have a Machine with 1 TB RAM!
 ****
-The 30.5 GB line is fairly important. So what do you do when your machine has a lot
+The 32 GB line is fairly important. So what do you do when your machine has a lot
 of memory? It is becoming increasingly common to see super-servers with 512–768 GB
 of RAM.
 
 First, we would recommend avoiding such large machines (see <<hardware>>).
 
 But if you already have the machines, you have two practical options:
 
-- Are you doing mostly full-text search? Consider giving 30.5 GB to Elasticsearch
+- Are you doing mostly full-text search? Consider giving just under 32 GB to Elasticsearch
 and letting Lucene use the rest of memory via the OS filesystem cache. All that
 memory will cache segments and lead to blisteringly fast full-text search.
 
 - Are you doing a lot of sorting/aggregations? You'll likely want that memory
-in the heap then. Instead of one node with more than 31.5 GB of RAM, consider running two or
+in the heap then. Instead of one node with more than 32 GB of RAM, consider running two or
 more nodes on a single machine. Still adhere to the 50% rule, though. So if your
-machine has 128 GB of RAM, run two nodes, each with 30.5 GB. This means 61 GB will be
-used for heaps, and 67 will be left over for Lucene.
+machine has 128 GB of RAM, run two nodes, each with just under 32 GB. This means that less
+than 64 GB will be used for heaps, and more than 64 GB will be left over for Lucene.
 
 If you choose this option, set `cluster.routing.allocation.same_shard.host: true`
 in your config. This will prevent a primary and a replica shard from colocating
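
As a concrete sketch of this two-node option (the node names are hypothetical,
and ES_HEAP_SIZE is the heap-sizing mechanism of the Elasticsearch 1.x era this
book targets; adjust the flags for your version):

    # elasticsearch.yml: keep a primary and its replica on different hosts
    cluster.routing.allocation.same_shard.host: true

    # start two nodes on one 128 GB machine, each with just under 32 GB of heap
    $ ES_HEAP_SIZE=31g ./bin/elasticsearch -Des.node.name=node-1 -d
    $ ES_HEAP_SIZE=31g ./bin/elasticsearch -Des.node.name=node-2 -d

Check each node with `-XX:+PrintFlagsFinal` as described above to confirm that
compressed oops are actually enabled at the heap size you picked.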