
Conversation

aylonsk
Contributor

@aylonsk commented Jul 10, 2025

Description

For HNSW graphs, the alternate encoding I implemented is GroupVarInt, which in theory should be cheaper in both space and runtime. Its advantages are that it lays out the space for a group of 4 integers in advance, and that it uses all 8 bits of every payload byte instead of the 7 data bits per byte that VarInt gets. Its drawbacks are that it can only encode integers of at most 32 bits, and that it spends an extra byte per group encoding the size of each number. However, since we are using delta encoding to condense our integers, they will never be larger than 32 bits, making the first limitation irrelevant.
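
To make the trade-off concrete, here is a minimal, self-contained sketch of the group layout (illustrative only; the method names are made up, and this is not the code in this PR):

static int encodeGroup(int[] values, int offset, byte[] dest, int destPos) {
  int tagPos = destPos++; // reserve the tag byte that stores the four lengths
  int tag = 0;
  for (int i = 0; i < 4; i++) {
    int v = values[offset + i];
    int bits = 32 - Integer.numberOfLeadingZeros(v | 1); // treat 0 as one bit
    int numBytes = (bits + 7) / 8; // 1..4 payload bytes
    tag |= (numBytes - 1) << (i * 2); // 2 tag bits per value
    for (int b = 0; b < numBytes; b++) {
      dest[destPos++] = (byte) (v >>> (8 * b)); // little-endian payload, all 8 bits used
    }
  }
  dest[tagPos] = (byte) tag;
  return destPos; // next free position
}

static int decodeGroup(byte[] src, int srcPos, int[] dest, int offset) {
  int tag = src[srcPos++] & 0xFF;
  for (int i = 0; i < 4; i++) {
    int numBytes = ((tag >>> (i * 2)) & 3) + 1; // length recovered from the tag
    int v = 0;
    for (int b = 0; b < numBytes; b++) {
      v |= (src[srcPos++] & 0xFF) << (8 * b);
    }
    dest[offset + i] = v;
  }
  return srcPos;
}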

Closes #12871

@github-actions
Contributor

This PR does not have an entry in lucene/CHANGES.txt. Consider adding one. If the PR doesn't need a changelog entry, then add the skip-changelog label to it and you will stop receiving this reminder on future updates to the PR.

@benwtrent
Member

Hi @aylonsk ! Thank you for digging into this issue. I am sure you are still working on it, but I had some feedback:

  • It would be interesting to get statistics around resulting index size changes and performance changes (index & search). luceneutil is the preferred tool for this.

  • As with most Lucene formats, changes like this need to be backwards compatible. Readers are loaded via their names, so users might have indices with the Lucene99Hnsw format name that do not have group-varint applied and consequently cannot be read by your change here. There are a couple of options to handle this:

    • Add versioning to the format
    • Create a new format (Lucene103Hnsw...) and move Lucene99Hnsw... to the bwc formats package for readers (there are many example PRs in the past doing this).

Handling the format change can be complicated. So, my first step would be to justify the change with performance metrics. Then do all the complicated format stuff.

Good luck!

@aylonsk
Contributor Author

aylonsk commented Jul 10, 2025

Thanks for your response! My apologies, I forgot to post my results from LuceneUtil.

Because I noticed variance between runs, I tested each set of hyperparameters 10 times and took the median latency, netCPU, and avgCpuCount. As a result, my numbers aren't in the standard luceneutil table format.

I ran 12 comparison tests in total, each with a different combination of hyperparameters. These variables were held constant across all tests: topK=100, fanout=50, beamWidth=250, numSegments=1.

Here are some specific tests:

BENCHMARKS (10 runs per test):

  1. Base HPs: nDocs=500,000, maxConn=64, quantized=no, numSegments=1

Baseline:
Recall: 0.832
Latency (Median): 0.73 ms
NetCPU (Median): 0.708 ms
AvgCpuCount (Median): 0.973
Index Size: 220.55MB
Vec Disk/Vec RAM: 190.735MB

Candidate:
Recall: 0.835
Latency (Median): 0.7 ms
NetCPU (Median): 0.677 ms
AvgCpuCount (Median): 0.966
Index Size: 220.12MB
Vec Disk/Vec RAM: 190.735MB

Latency Improvement: ~4.11% speedup

  2. nDocs=500,000, maxConn=32, quantized=no, numSegments=1

Baseline:
Recall: 0.834
Latency (Median): 0.722 ms
NetCPU (Median): 0.701 ms
AvgCpuCount (Median): 0.966
Index Size: 220.19MB
Vec Disk/Vec RAM: 190.735MB

Candidate:
Recall: 0.83
Latency (Median): 0.691 ms
NetCPU (Median): 0.665 ms
AvgCpuCount (Median): 0.96
Index Size: 219.67MB
Vec Disk/Vec RAM: 190.735MB

Latency Improvement: ~4.3% speedup

  3. nDocs=500,000, maxConn=32, quantized=7bits, numSegments=1

Baseline:
Recall: 0.671
Latency (Median): 1.2935 ms
NetCPU (Median): 1.2635 ms
AvgCpuCount (Median): 0.976
Index Size: 255.74MB
Vec Disk: 240.326MB
Vec RAM: 49.591MB

Candidate:
Recall: 0.696
Latency (Median): 1.2525 ms
NetCPU (Median): 1.192 ms
AvgCpuCount (Median): 0.974
Index Size: 259.34MB
Vec Disk: 240.326MB
Vec RAM: 49.591MB

Latency Improvement: ~3.17% speedup

  4. nDocs=2,000,000, maxConn=32, quantized=7bits, numSegments=1

Baseline:
Recall: 0.74
Latency (Median): 2.6675 ms
NetCPU (Median): 2.545 ms
AvgCpuCount (Median): 0.969
Index Size: 1049.52MB
Vec Disk: 961.30MB
Vec RAM: 198.364MB

Candidate:
Recall: 0.717
Latency (Median): 2.521 ms
NetCPU (Median): 2.398 ms
AvgCpuCount (Median): 0.98
Index Size: 1043.27MB
Vec Disk: 961.304MB
Vec RAM: 198.364MB

Latency Improvement: 5.49% speedup

  5. nDocs=100,000, maxConn=64, quantized=7bits, numSegments=1

Baseline:
Recall: 0.848
Latency (Median): 2.305 ms
NetCPU (Median): 2.2575 ms
AvgCpuCount (Median): 0.976
Index Size: 51.52MB
Vec Disk: 48.07MB
Vec RAM: 9.918MB

Candidate:
Recall: 0.848
Latency (Median): 1.85 ms
NetCPU (Median): 1.80 ms
AvgCpuCount (Median): 0.974
Index Size: 51.52MB
Vec Disk: 48.07MB
Vec RAM: 9.918MB

Latency Improvement: ~18.1% speedup

While the degree of improvement varied between tests, all but one showed a latency improvement over the baseline. Considering how simple and non-intrusive this implementation is, I think it is a clear net benefit.

Thank you for letting me know about the backwards compatibility requirement. I will look into fixing that tomorrow.

@benwtrent
Member

@aylonsk great looking numbers! I expect for cheaper vector ops (e.g. single bit quantization), the impact is even higher.

@jpountz
Contributor

jpountz commented Jul 18, 2025

@aylonsk To handle backward compatibility, I'd recommend doing the following (a rough sketch follows the list):

  • Add a new version constant to the format class, something like VERSION_GROUP_VARINT = 1; VERSION_CURRENT = VERSION_GROUP_VARINT.
  • Add an int version parameter to a pkg-private constructor of this format.
  • Pass this version to the writer, write it in the codec header, and update the writer to use group varint when version >= 1, and vint otherwise.
  • Make the reader read the version from the codec header, use group varint when version >= 1, and vint otherwise.
  • Copy TestLucene99HnswVectorsFormat into a new test case that exercises version=VERSION_START.
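
A rough, condensed sketch of how those pieces fit together (the constant names follow the list above; CodecUtil.writeIndexHeader/checkIndexHeader are Lucene's existing helpers, everything else is illustrative):

static final int VERSION_START = 0;                      // original vint encoding
static final int VERSION_GROUP_VARINT = 1;               // group-varint encoding
static final int VERSION_CURRENT = VERSION_GROUP_VARINT;

// writer side: stamp the version into the codec header
CodecUtil.writeIndexHeader(indexOut, CODEC_NAME, version, segmentId, segmentSuffix);

// reader side: accept the whole supported range, then branch on what was read
int version = CodecUtil.checkIndexHeader(
    indexIn, CODEC_NAME, VERSION_START, VERSION_CURRENT, segmentId, segmentSuffix);
if (version >= VERSION_GROUP_VARINT) {
  // decode neighbor lists with group-varint
} else {
  // decode neighbor lists with DataInput#readVInt()
}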

@aylonsk
Contributor Author

aylonsk commented Jul 29, 2025

Hello, and thank you for all of your suggestions. I have updated the reader and format files to allow for backwards compatibility, using a VERSION_GROUPVARINT constant in the format class and an interface near the top of the reader class to keep the runtime impact minimal; a rough sketch of the idea is below.
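
For illustration only (Populator is the name visible in the reader further down; its exact shape and the group-varint helper here are assumptions):

interface Populator {
  void populate(DataInput in, int[] scratch, int count) throws IOException;
}

// chosen once when the reader is opened, so the hot path carries no version check
Populator dataReader =
    (version >= VERSION_GROUPVARINT)
        ? (in, scratch, count) -> readGroupVarInts(in, scratch, count) // hypothetical helper
        : (in, scratch, count) -> {
          for (int i = 0; i < count; i++) {
            scratch[i] = in.readVInt(); // legacy vint path
          }
        };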

The testing part was trickier, as I needed to create a new class (TestLucene99HnswVectorsFormatV2) that extends the same class (BaseKnnVectorsFormatTestCase) as the original TestLucene99HnswVectorsFormat, but with a getCodec() method that returns a format with the old writer and the new reader. At first I thought I would have to write my own test to read, write, and check docIDs in HNSW graphs, but then I realized that BaseKnnVectorsFormatTestCase already has tests that do this (such as testRecall).

To make this possible, I created two new classes in the lucene99 backward_codecs directory: a VarInt-only writer (Lucene99HnswVectorsWriterV0) and a format that returns the current backwards-compatible reader from its fieldsReader method and the VarInt-only writer from its fieldsWriter method. To confirm the validity of the test, a VarInt-only reader was also created but not committed (Lucene99HnswVectorsReaderV0); when I flipped the format class to use the new writer and the old reader, the testRecall test failed.

Any questions/comments/suggestions are appreciated. Thank you!


@Override
public int getMaxDimensions(String fieldName) {
-  return 1024;
+  return 4096;
Contributor


we probably don't want to make this change? At least not as part of this PR :)

Contributor Author


Yes, my apologies. I changed this when testing with 4K vectors and forgot to change it back. Will fix this.

private final FieldInfos fieldInfos;
private final IntObjectHashMap<FieldEntry> fields;
private final IndexInput vectorIndex;
private final Populator dataReader;
Contributor


I think dataReader will confuse people since that is the name of a class (DataReader). Maybe call it decoder? Or neighborDecoder?

Contributor


FWIW the extra abstraction doesn't make things easier to read / understand to me. I'd rather use a plain if statement in the decoding logic:

if (version >= VERSION_GROUPVARINT) {
    // read using group-varint
} else {
    // read using DataInput#readVInt()
}
// prefix sum

Contributor


I'd rather use a plain if statement in the decoding logic

I'm curious, if there's an if inside a low-level function like #seek -- will it slow things down by checking the condition every time, or is this something the JVM can compile away when version is declared final? (so performance-wise, it's equivalent to checking the condition only once, and using a separate implementation of Populator based on the value, like we're doing in this PR)

Contributor


I don't believe that the JVM can compile it away. However, this if statement should be easily predictable, and right after it we decode 16 doc ID deltas and then compute their prefix sum. I'd expect the latter to be the bottleneck and the extra cost of the if statement to be negligible.

If we find benchmarks that say otherwise, we can still fork this file format into a new one that only ever reads doc ID deltas using group-varint to fix the problem.

FWIW with two implementations of Populator, the JVM would compile it similarly as an if statement.
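
For illustration, the rough shape of the decode step being described (the count of 16 comes from the comment above; readGroupVarInts is a hypothetical helper, not an actual API):

int[] deltas = new int[16];
if (version >= VERSION_GROUPVARINT) {
  readGroupVarInts(in, deltas, 16); // bulk group-varint read
} else {
  for (int i = 0; i < 16; i++) {
    deltas[i] = in.readVInt();
  }
}
// prefix sum: turn the deltas back into absolute neighbor doc IDs
int doc = 0;
for (int i = 0; i < 16; i++) {
  doc += deltas[i];
  neighbors[i] = doc;
}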

Contributor


Makes sense, thanks @jpountz

Contributor Author


@jpountz good to know. If the runtime impact is minimal, I agree that this is better.

@@ -0,0 +1,233 @@
/*
Contributor


So I think we are introducing these backward_codecs so that we can write graphs in the old VInt (v0) format and test that we can read them back with the existing codec?

Given that, I think any additional classes we add here could live in the tests rather than in backward_codecs, since we won't need these for reading old indexes (which is what backward_codecs are for).

Having said that, I wonder if we could add a test-only constructor to the format that would enable it to continue writing the old format?

Member


Why don't we have a package private constructor that allows setting the version for the writer? That way tests can write with the old version if necessary?

Member


Ah, I think that is what you mean by "test only ctor".

I agree, a new package private ctor is likely best and easiest

Contributor Author


I added a test-only constructor to the format class, eliminating the need for the "V0" testing files. However, this also required me to make a similar type of constructor in the writer class. If these changes feel excessive, @kaivalnp showed me a way to keep the version logic out of the format class and specify the writer version separately when initializing.


Contributor

@kaivalnp left a comment


Thanks @aylonsk, looks like a clean switch to group varints for edges of the HNSW graph!

Can you add a CHANGES.txt entry as well?

Comment on lines 148 to 156
/**
* Constructs a format using the default parameters and the specific writer version.
*
* @param writeVersion the version used for the writer to encode docID's (VarInt=0, GroupVarInt=1)
*/
Lucene99HnswVectorsFormat(int writeVersion) {
this(DEFAULT_MAX_CONN, DEFAULT_BEAM_WIDTH, DEFAULT_NUM_MERGE_WORKER, null, writeVersion);
}

Contributor


I think we can avoid this constructor and use the larger one below from tests? (all defaults are public)
If you do so, we should add a "test-only" comment to that constructor.

If you still want to keep this, we should add the "test-only" comment here and make that private? (since it isn't used outside this class).

*/
public Lucene99HnswVectorsFormat(
int maxConn, int beamWidth, int numMergeWorkers, ExecutorService mergeExec) {
this(maxConn, beamWidth, numMergeWorkers, mergeExec, 1);
Contributor


We should use VERSION_CURRENT instead of 1 here

import org.apache.lucene.tests.index.BaseKnnVectorsFormatTestCase;
import org.apache.lucene.tests.util.TestUtil;

public class TestLucene99HnswVectorsFormatV2 extends BaseKnnVectorsFormatTestCase {
Contributor


nit: Rename as TestLucene99HnswVectorsFormatV0 since we're checking minor version 0 here?
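
For reference, a sketch of what the renamed test's codec override might look like (TestUtil.alwaysKnnVectorsFormat is the test-framework helper; the package-private writeVersion constructor comes from this PR, with 0 = VarInt per its javadoc):

@Override
protected Codec getCodec() {
  // write with the old (version 0) layout so the inherited tests
  // exercise reading it back with the current reader
  return TestUtil.alwaysKnnVectorsFormat(new Lucene99HnswVectorsFormat(0));
}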

}

/**
* Constructs a format using the given graph construction parameters and scalar quantization.
Contributor


scalar quantization seems incorrect here? Although I see it was pre-existing...

vectorIndex.writeVInt(actualSize);
for (int i = 0; i < actualSize; i++) {
vectorIndex.writeVInt(scratch[i]);
if (version >= Lucene99HnswVectorsFormat.VERSION_GROUPVARINT) {
Contributor


nit: import as static constant for parity?
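
i.e. something like this (package path assumed):

import static org.apache.lucene.codecs.lucene99.Lucene99HnswVectorsFormat.VERSION_GROUPVARINT;
...
if (version >= VERSION_GROUPVARINT) {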

@jpountz
Contributor

jpountz commented Aug 3, 2025

The change looks good to me; I have the same feedback as @kaivalnp. Should we try to run knnPerfTest with this change?

github-actions bot added this to the 11.0.0 milestone Aug 5, 2025
Contributor

@kaivalnp left a comment


Left some minor comments, looks great overall @aylonsk!

int numMergeWorkers,
TaskExecutor mergeExec)
throws IOException {
this(state, M, beamWidth, flatVectorWriter, numMergeWorkers, mergeExec, 1);
Contributor


Sorry I missed this earlier -- this should be VERSION_CURRENT instead of 1

this(state, M, beamWidth, flatVectorWriter, numMergeWorkers, mergeExec, 1);
}

Lucene99HnswVectorsWriter(
Contributor


Maybe add a "test-only" comment here too?

@aylonsk
Contributor Author

aylonsk commented Aug 5, 2025

Thank you for your suggestions @kaivalnp, I have pushed these changes to the PR.

@jpountz I ran knnPerfTest on the baseline VarInt vs. candidate GroupVarInt implementations. These tests were run with fairly standard hyperparameters, and for each test the median result of 3 runs was taken (a change that will hopefully be approved in luceneutil). Looking at the results, it seems that removing the top-level abstraction from the reader did not visibly affect the performance improvement, which is good.

VarInt Median Results:
recall  latency(ms)  netCPU  avgCpuCount   nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.938        1.411   1.392        0.988  50000   100      50       64        250         no      0.00      Infinity             8           22.30        19.073       19.073       HNSW
GroupVarInt Median Results:
recall  latency(ms)  netCPU  avgCpuCount   nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.937        1.329   1.311        0.986  50000   100      50       64        250         no      0.00      Infinity             8           20.55        19.073       19.073       HNSW

Median Latency Improvement: ~5.81%

Contributor

@kaivalnp left a comment


LGTM

@msokolov
Contributor

msokolov commented Aug 6, 2025

Looks good! I'll merge. Thanks for the nice new format and tests to show it really helps!

@msokolov merged commit a6c9912 into apache:main Aug 6, 2025
8 checks passed
@jpountz
Contributor

jpountz commented Aug 7, 2025

Thank you @kaivalnp! @msokolov I'm curious if you plan on backporting?

@msokolov
Contributor

msokolov commented Aug 7, 2025

Thanks, @jpountz, I forgot; I just did the backport and moved the CHANGES entry to the 10.3 section.

@jpountz modified the milestones: 11.0.0, 10.3.0 Aug 7, 2025
@benwtrent
Member

I may be reading the graph wrong, but it seems like this made indexing throughput plummet:

https://benchmarks.mikemccandless.com/indexing.html


https://benchmarks.mikemccandless.com/2025.08.06.18.04.40.html

This is the only significant index-level change.

@msokolov @kaivalnp

@msokolov
Contributor

msokolov commented Aug 8, 2025

Yeah, weird. It does look similar to some previous dives we took (Jul 7, May 31) where there was no indexing change at all. I wonder if luceneutil has a glitch? Let's wait a day and see if this holds up: if it does, we should revert.

@msokolov
Contributor

msokolov commented Aug 8, 2025

[screenshot of the nightly indexing benchmark chart, 2025-08-08]

@benwtrent
Member

Thank you @msokolov! You are right, let's give it some baking time. It might just be a glitch.

@ChrisHegarty
Contributor

ChrisHegarty commented Aug 8, 2025

In the mix of all this is the change to do bulk off-heap scoring for float32 vectors, #14980. I was hoping that it would have a significant positive effect, but I don't see it yet! :-( Does the bench use dot product? Hmm... maybe the nightly uses mip, in which case we will have to wait for the other similarities, #15037.

Edit: I do see the new bulk scorer in the output profiles!

@msokolov
Contributor

msokolov commented Aug 8, 2025

luceneutil should be using dot-product in its test evaluations; see https://github.com/mikemccand/luceneutil/blob/fd54de089305f8b990c5fc324a179fdf21991d51/src/main/perf/LineFileDocs.java#L456

@benwtrent
Member

Edit: I do see the new bulk scorer in the output profiles!

@ChrisHegarty I see this stack:

                              at org.apache.lucene.index.VectorSimilarityFunction$2#compare() [Inlined code]
                              at org.apache.lucene.codecs.hnsw.DefaultFlatVectorScorer$FloatScoringSupplier$1#score() [Inlined code]
                              at org.apache.lucene.util.hnsw.RandomVectorScorer#bulkScore() [JIT compiled code]

I don't think your off-heap optimized stuff is there yet, just the interface (which delegates to the single-score path).

@mikemccand
Member

I'm digging into the nightly benchy regression. It is confusing! I don't like those periodic glitches ... I'm not sure this latest drop is such a glitch. One odd thing about that first drop (8/6) is this new top offender in CPU profiling for 1 KB docs with vectors, indexing:

PERCENT       CPU SAMPLES   STACK
70.64%        3M            sun.nio.ch.UnixFileDispatcherImpl#read0() [Native code]
                              at sun.nio.ch.UnixFileDispatcherImpl#read() [Inlined code]
                              at sun.nio.ch.IOUtil#readIntoNativeBuffer() [Inlined code]
                              at sun.nio.ch.IOUtil#read() [Inlined code]
                              at sun.nio.ch.IOUtil#read() [Inlined code]
                              at sun.nio.ch.FileChannelImpl#implRead() [Inlined code]
                              at sun.nio.ch.FileChannelImpl#traceImplRead() [Inlined code]
                              at sun.nio.ch.FileChannelImpl#read() [JIT compiled code]
                              at perf.LineFileDocs#readVector() [Inlined code]
                              at perf.LineFileDocs#readDocs() [JIT compiled code]
                              at perf.LineFileDocs$1#run() [Interpreted code]

That's baffling -- it's as if the vectors file was cold for the run, or readahead was ineffective, or something.

And then in 8/7 that stack frame is way down -- 0.24%.

@aylonsk
Contributor Author

aylonsk commented Aug 11, 2025

Thank you to everyone for looking into this issue. While the cause is still being determined, I tried to get ahead of the problem and ran some more benchmarks with close attention to indexing time, making sure to reindex every time and to use the most up-to-date versions of both implementations. I didn't take medians this time; I'll simply post all of the runs performed.

VarInt 1:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.920        1.635   1.608        0.983  100000   100      50       64        250         no     16.10       6209.64             8           41.43        38.147       38.147       HNSW
GroupVarInt 1:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.922        2.460   2.148        0.873  100000   100      50       64        250         no     25.53       3917.42             8           41.58        38.147       38.147       HNSW

Latency improvement: None
index(s) improvement: None

VarInt 2:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.920        1.722   1.698        0.986  100000   100      50       64        250         no     15.69       6372.27             8           41.43        38.147       38.147       HNSW
GroupVarInt 2:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.919        1.664   1.631        0.980  100000   100      50       64        250         no     15.99       6253.91             8           41.59        38.147       38.147       HNSW

Latency improvement: ~3.37%
index(s) improvement: None

VarInt 3:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.919        1.875   1.712        0.913  100000   100      50       64        250         no     18.91       5287.65             8           41.44        38.147       38.147       HNSW
GroupVarInt 3:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.919        1.554   1.468        0.945  100000   100      50       64        250         no      15.85      6341.88             8           41.59        38.147       38.147       HNSW

Latency Improvement: 17.12%
index(s) Improvement: ~16.18%

VarInt 4:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.919        2.043   1.838        0.900  100000   100      50       64        250         no     16.92       5909.47             8           41.44        38.147       38.147       HNSW
GroupVarInt 4:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.920        1.839   1.754        0.954  100000   100      50       64        250         no     15.87       6302.79             8           41.60        38.147       38.147       HNSW

Latency Improvement: ~9.99%
index(s) Improvement: ~6.21%

With the exception of the first run, the improvements held up across reindexing. It's possible that GroupVarInt carries some extra cost on a first run, but that could also be a result of the variability between tests. Either way, there doesn't seem to be much evidence that this change caused such a large drop in indexing throughput.

@ChrisHegarty
Contributor

ChrisHegarty commented Aug 11, 2025

Just an initial observation on the two recent down spikes in the benchmark (7/6 and 8/6): they seem to be doing a lot of allocation towards the end of the benchmark run, whereas in other runs the allocation rate tails off. Looking at the heap profile (https://benchmarks.mikemccandless.com/2025.08.06.18.04.40.html#profiler_1kb_indexing_vectors_12_heap), there is quite a bit of org.apache.lucene.util.ArrayUtil#growExact(), which I don't see in other runs.

EDIT: Ha! the heap usage is towards the end of the run, since the initial and majority of the time is spent in file I/O reading from the test vector file, as mentioned in Mike's comment #14932 (comment).

@ChrisHegarty
Contributor

There's definitely quite a bit of variability in these benchmark runs, so it can be difficult to draw definitive conclusions from any particular run; still, we should understand where the variability is coming from. What's odd to me is that my local luceneutil testing for #14980 shows significant improvements in merge times, but I don't see these anywhere in the bench results. We see some improvement in vector search, but it's not completely obvious that the impact comes from #14980.

@mikemccand
Member

I will open a new issue to understand the benchy regression, since it's a 10.3 blocker. It remains confusing. I think we may try reverting the GroupVarInt change and see if benchy recovers.



Development

Successfully merging this pull request may close these issues.

Cut over HNSW's neighbor lists to group-varint?
