Commit 82c2335

Merge branch 'pr/455'
2 parents b2d990e + fc2a6fb

1 file changed

510_Deployment/50_heap.asciidoc: +59 -11 lines
@@ -52,9 +52,10 @@ heap, while leaving the other 50% free. It won't go unused; Lucene will happily
 gobble up whatever is left over.

 [[compressed_oops]]
-==== Don't Cross 30.5 GB!
+==== Don't Cross 32 GB!
 There is another reason to not allocate enormous heaps to Elasticsearch. As it turns((("heap", "sizing and setting", "32gb heap boundary")))((("32gb Heap boundary")))
-out, the JVM uses a trick to compress object pointers when heaps are 30.5 GB or less.
+out, the HotSpot JVM uses a trick to compress object pointers when heaps are less
+than around 32 GB.

 In Java, all objects are allocated on the heap and referenced by a pointer.
 Ordinary object pointers (OOP) point at these objects, and are traditionally
@@ -74,36 +75,83 @@ reference four billion _objects_, rather than four billion bytes. Ultimately, t
 means the heap can grow to around 32 GB of physical size while still using a 32-bit
 pointer.

-Once you cross that magical 30.5 GB boundary, the pointers switch back to
+Once you cross that magical ~32 GB boundary, the pointers switch back to
 ordinary object pointers. The size of each pointer grows, more CPU-memory
 bandwidth is used, and you effectively lose memory. In fact, it takes until around
-40–50 GB of allocated heap before you have the same _effective_ memory of a 30.5 GB
-heap using compressed oops.
+40–50 GB of allocated heap before you have the same _effective_ memory of a
+heap just under 32 GB using compressed oops.

 The moral of the story is this: even when you have memory to spare, try to avoid
-crossing the 30.5 GB heap boundary. It wastes memory, reduces CPU performance, and
+crossing the 32 GB heap boundary. It wastes memory, reduces CPU performance, and
 makes the GC struggle with large heaps.
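The ~32 GB figure comes from straightforward arithmetic: HotSpot aligns objects on
8-byte boundaries by default, so a 32-bit compressed pointer can address roughly
2^32 * 8 bytes = 32 GB of heap. As a rough sketch, assuming a 64-bit HotSpot JVM
that exposes the `ObjectAlignmentInBytes` flag through `-XX:+PrintFlagsFinal`
(output formatting will vary), you can inspect both values yourself:

[source,bash]
----
# 2^32 possible object offsets x 8-byte alignment = ~32 GB of addressable heap
$ java -Xmx31g -XX:+PrintFlagsFinal -version 2> /dev/null | \
    grep -e ObjectAlignmentInBytes -e UseCompressedOops
     intx ObjectAlignmentInBytes    = 8
     bool UseCompressedOops        := true
----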

+==== Just how far under 32 GB should I set the JVM?
+
+Unfortunately, that depends. The exact cutoff varies across JVMs and platforms.
+If you want to play it safe, setting the heap to `31gb` should keep you comfortably
+under the limit. Alternatively, you can verify the cutoff point for the HotSpot JVM
+by adding `-XX:+PrintFlagsFinal` to your JVM options and checking that the value of
+the UseCompressedOops flag is true. This will let you find the exact cutoff for your
+platform and JVM.
+
+For example, here we test a Java 1.7 installation on Mac OS X and see the max heap
+size is around 32600mb (~31.83gb) before compressed pointers are disabled:
+
+[source,bash]
+----
+$ JAVA_HOME=`/usr/libexec/java_home -v 1.7` java -Xmx32600m -XX:+PrintFlagsFinal 2> /dev/null | grep UseCompressedOops
+     bool UseCompressedOops   := true
+$ JAVA_HOME=`/usr/libexec/java_home -v 1.7` java -Xmx32766m -XX:+PrintFlagsFinal 2> /dev/null | grep UseCompressedOops
+     bool UseCompressedOops   = false
+----
+
+In contrast, a Java 1.8 installation on the same machine has a max heap size
+around 32766mb (~31.99gb):
+
+[source,bash]
+----
+$ JAVA_HOME=`/usr/libexec/java_home -v 1.8` java -Xmx32766m -XX:+PrintFlagsFinal 2> /dev/null | grep UseCompressedOops
+     bool UseCompressedOops   := true
+$ JAVA_HOME=`/usr/libexec/java_home -v 1.8` java -Xmx32767m -XX:+PrintFlagsFinal 2> /dev/null | grep UseCompressedOops
+     bool UseCompressedOops   = false
+----
+
+The moral of the story is that the exact cutoff for compressed oops varies from
+JVM to JVM, so be cautious when borrowing examples from elsewhere, and be sure to
+check your own system with your own configuration and JVM.
+
+Beginning with Elasticsearch v2.2.0, the startup log will actually tell you whether
+your JVM is using compressed OOPs or not. You'll see a log message like:
+
+[source,bash]
+----
+[2015-12-16 13:53:33,417][INFO ][env] [Illyana Rasputin] heap size [989.8mb], compressed ordinary object pointers [true]
+----
+
+This indicates that compressed object pointers are being used. If they are not,
+the message will say `[false]`.
+
+
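Putting those two pieces of advice together: choose a heap comfortably under the
cutoff, then let the node itself confirm that compressed oops are in effect. A
minimal sketch, assuming a 2.x-era tarball install whose startup script honors the
`ES_HEAP_SIZE` environment variable and writes its logs under `logs/` (both are
assumptions, not taken from this commit):

[source,bash]
----
# Start a node with a 31 GB heap, safely below the compressed-oops cutoff
ES_HEAP_SIZE=31g ./bin/elasticsearch -d

# With Elasticsearch 2.2.0+, the startup log confirms the result
grep "compressed ordinary object pointers" logs/*.log
# ...[INFO ][env] ... heap size [...], compressed ordinary object pointers [true]
----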
 [role="pagebreak-before"]
 .I Have a Machine with 1 TB RAM!
 ****
-The 30.5 GB line is fairly important. So what do you do when your machine has a lot
+The 32 GB line is fairly important. So what do you do when your machine has a lot
 of memory? It is becoming increasingly common to see super-servers with 512–768 GB
 of RAM.

 First, we would recommend avoiding such large machines (see <<hardware>>).

 But if you already have the machines, you have two practical options:

-- Are you doing mostly full-text search? Consider giving 30.5 GB to Elasticsearch
+- Are you doing mostly full-text search? Consider giving just under 32 GB to Elasticsearch
 and letting Lucene use the rest of memory via the OS filesystem cache. All that
 memory will cache segments and lead to blisteringly fast full-text search.

 - Are you doing a lot of sorting/aggregations? You'll likely want that memory
-in the heap then. Instead of one node with more than 31.5 GB of RAM, consider running two or
+in the heap then. Instead of one node with more than 32 GB of RAM, consider running two or
 more nodes on a single machine. Still adhere to the 50% rule, though. So if your
-machine has 128 GB of RAM, run two nodes, each with 30.5 GB. This means 61 GB will be
-used for heaps, and 67 will be left over for Lucene.
+machine has 128 GB of RAM, run two nodes, each with just under 32 GB. This means that less
+than 64 GB will be used for heaps, and more than 64 GB will be left over for Lucene.
 +
 If you choose this option, set `cluster.routing.allocation.same_shard.host: true`
 in your config. This will prevent a primary and a replica shard from colocating
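For the second option, the moving parts are small enough to sketch: one setting to
keep a primary and its replica off the same physical machine, plus two node
processes, each with a heap just under 32 GB. Treat the paths, the `ES_HEAP_SIZE`
variable, and the `-d` (daemonize) flag as assumptions from a 2.x-era tarball
install rather than something this commit specifies:

[source,bash]
----
# Keep a primary shard and its replica from colocating on one physical host
echo 'cluster.routing.allocation.same_shard.host: true' >> config/elasticsearch.yml

# Two nodes on the 128 GB machine, each with a compressed-oops-friendly heap
# (depending on version defaults, the second node may need its own path.data)
ES_HEAP_SIZE=31g ./bin/elasticsearch -d
ES_HEAP_SIZE=31g ./bin/elasticsearch -d
----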
