Commit 98fdca9

Modify wording relating to HotSpot JVM compressed oops
This commit modifies the wording around using compressed oops on the HotSpot JVM. In particular, it makes clear that the boundary is not exact, removes the use of floating-point numbers (which can confuse people trying to set their max heap size on the JVM), and documents how to check whether the provided settings enable compressed oops. Relates elastic/elasticsearch#15445
1 parent 25c4adc commit 98fdca9

File tree: 1 file changed

510_Deployment/50_heap.asciidoc (+18 −11 lines)
@@ -52,9 +52,10 @@ heap, while leaving the other 50% free. It won't go unused; Lucene will happily
 gobble up whatever is left over.
 
 [[compressed_oops]]
-==== Don't Cross 30.5 GB!
+==== Don't Cross 32 GB!
 There is another reason to not allocate enormous heaps to Elasticsearch. As it turns((("heap", "sizing and setting", "32gb heap boundary")))((("32gb Heap boundary")))
-out, the JVM uses a trick to compress object pointers when heaps are 30.5 GB or less.
+out, the HotSpot JVM uses a trick to compress object pointers when heaps are less
+than around 32 GB.
 
 In Java, all objects are allocated on the heap and referenced by a pointer.
 Ordinary object pointers (OOP) point at these objects, and are traditionally
@@ -74,36 +75,42 @@ reference four billion _objects_, rather than four billion bytes. Ultimately, t
 means the heap can grow to around 32 GB of physical size while still using a 32-bit
 pointer.
 
-Once you cross that magical 30.5 GB boundary, the pointers switch back to
+Once you cross that magical ~32 GB boundary, the pointers switch back to
 ordinary object pointers. The size of each pointer grows, more CPU-memory
 bandwidth is used, and you effectively lose memory. In fact, it takes until around
-40–50 GB of allocated heap before you have the same _effective_ memory of a 30.5 GB
-heap using compressed oops.
+40–50 GB of allocated heap before you have the same _effective_ memory of a
+heap just under 32 GB using compressed oops.
 
 The moral of the story is this: even when you have memory to spare, try to avoid
-crossing the 30.5 GB heap boundary. It wastes memory, reduces CPU performance, and
+crossing the 32 GB heap boundary. It wastes memory, reduces CPU performance, and
 makes the GC struggle with large heaps.
 
+With the HotSpot JVM, you can verify that your max heap size setting enables compressed
+oops by adding `-XX:+PrintFlagsFinal` and checking that the value of the `UseCompressedOops`
+flag is `true`. Do note that the exact cutoff for max heap size in bytes that allows
+compressed oops varies from JVM to JVM, so take caution when taking examples from
+elsewhere and be sure to check your system with your configuration and your JVM.
+
 [role="pagebreak-before"]
 .I Have a Machine with 1 TB RAM!
 ****
-The 30.5 GB line is fairly important. So what do you do when your machine has a lot
+The 32 GB line is fairly important. So what do you do when your machine has a lot
 of memory? It is becoming increasingly common to see super-servers with 512–768 GB
 of RAM.
 
 First, we would recommend avoiding such large machines (see <<hardware>>).
 
 But if you already have the machines, you have two practical options:
 
-- Are you doing mostly full-text search? Consider giving 30.5 GB to Elasticsearch
+- Are you doing mostly full-text search? Consider giving just under 32 GB to Elasticsearch
 and letting Lucene use the rest of memory via the OS filesystem cache. All that
 memory will cache segments and lead to blisteringly fast full-text search.
 
 - Are you doing a lot of sorting/aggregations? You'll likely want that memory
-in the heap then. Instead of one node with more than 31.5 GB of RAM, consider running two or
+in the heap then. Instead of one node with more than 32 GB of RAM, consider running two or
 more nodes on a single machine. Still adhere to the 50% rule, though. So if your
-machine has 128 GB of RAM, run two nodes, each with 30.5 GB. This means 61 GB will be
-used for heaps, and 67 will be left over for Lucene.
+machine has 128 GB of RAM, run two nodes, each with just under 32 GB. This means that less
+than 64 GB will be used for heaps, and more than 64 GB will be left over for Lucene.
 +
 If you choose this option, set `cluster.routing.allocation.same_shard.host: true`
 in your config. This will prevent a primary and a replica shard from colocating
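
As a quick sanity check of the verification step this commit adds, a console sketch along these lines should work on a 64-bit HotSpot JVM (the `31g` and `40g` heap sizes are arbitrary illustrative values on either side of the boundary, and the exact spacing and `:=`/`=` marker in the output vary by JVM version):

$ java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
     bool UseCompressedOops := true

$ java -Xmx40g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
     bool UseCompressedOops := false

If the flag reports `true`, the chosen `-Xmx` still permits compressed oops on that particular JVM; as the commit stresses, run the check with your own configuration rather than trusting a cutoff copied from elsewhere.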
