
Conversation

@BlakeOrth (Contributor) commented Oct 17, 2025

Which issue does this PR close?

It's not yet clear to me if this will fully close the above issue, or if it's just the first step. I think there may be more work to do, so I'm not going to have this auto-close the issue.

Rationale for this change

tl;dr of the issue: normalizing the access pattern(s) for objects for partitioned tables should not only reduce the number of requests to a backing object store, but will also allow any existing and/or future caching mechanisms to apply equally to both directory-partitioned and flat tables.
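
For a concrete picture of the two access patterns, here is a minimal sketch using the object_store crate directly (assuming a recent object_store release where `list` returns a stream; the local `tmp/test_table` layout is only illustrative):

```rust
use futures::TryStreamExt;
use object_store::{local::LocalFileSystem, path::Path, ObjectStore};

#[tokio::main]
async fn main() -> object_store::Result<()> {
    let store = LocalFileSystem::new();
    let table_root = Path::from("tmp/test_table");

    // Flat-style access: one recursive LIST from the table root returns every
    // object under the prefix, no matter how deep the partition directories go.
    // This is the shape of request a list-files cache can key on directly.
    let all: Vec<_> = store.list(Some(&table_root)).try_collect().await?;
    println!("recursive list: {} objects", all.len());

    // Directory-style access: list_with_delimiter returns only the immediate
    // children, so a partitioned table needs one request per directory level,
    // which is where the extra LIST counts on main come from.
    let level = store.list_with_delimiter(Some(&table_root)).await?;
    println!(
        "delimited list: {} files, {} sub-prefixes",
        level.objects.len(),
        level.common_prefixes.len()
    );
    Ok(())
}
```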

List requests on main:

DataFusion CLI v50.2.0
> \object_store_profiling summary
ObjectStore Profile mode set to Summary
> CREATE EXTERNAL TABLE overture_partitioned
STORED AS PARQUET LOCATION 's3://overturemaps-us-west-2/release/2025-09-24.0/';
0 row(s) fetched.
Elapsed 37.236 seconds.

Object Store Profiling
Instrumented Object Store: instrument_mode: Summary, inner: AmazonS3(overturemaps-us-west-2)
Summaries:
+-----------+----------+-----+-----+-----+-----+-------+
| Operation | Metric   | min | max | avg | sum | count |
+-----------+----------+-----+-----+-----+-----+-------+
| List      | duration |     |     |     |     | 1     |
| List      | size     |     |     |     |     | 1     |
+-----------+----------+-----+-----+-----+-----+-------+
Instrumented Object Store: instrument_mode: Summary, inner: AmazonS3(overturemaps-us-west-2)
Summaries:
+-----------+----------+-----------+-----------+-------------+-------------+-------+
| Operation | Metric   | min       | max       | avg         | sum         | count |
+-----------+----------+-----------+-----------+-------------+-------------+-------+
| Get       | duration | 0.044411s | 0.338399s | 0.104535s   | 162.133179s | 1551  |
| Get       | size     | 8 B       | 1285059 B | 338457.56 B | 524947683 B | 1551  |
| List      | duration |           |           |             |             | 3     |
| List      | size     |           |           |             |             | 3     |
+-----------+----------+-----------+-----------+-------------+-------------+-------+
> select count(*) from overture_partitioned;
+------------+
| count(*)   |
+------------+
| 4219677254 |
+------------+
1 row(s) fetched.
Elapsed 40.061 seconds.

Object Store Profiling
Instrumented Object Store: instrument_mode: Summary, inner: AmazonS3(overturemaps-us-west-2)
Summaries:
+-----------+----------+-----------+-----------+-------------+-------------+-------+
| Operation | Metric   | min       | max       | avg         | sum         | count |
+-----------+----------+-----------+-----------+-------------+-------------+-------+
| Get       | duration | 0.042554s | 0.453125s | 0.103147s   | 159.980835s | 1551  |
| Get       | size     | 8 B       | 1285059 B | 338457.56 B | 524947683 B | 1551  |
| List      | duration | 0.043498s | 0.196298s | 0.092462s   | 2.034174s   | 22    |
| List      | size     |           |           |             |             | 22    |
+-----------+----------+-----------+-----------+-------------+-------------+-------+
> select count(*) from overture_partitioned;
+------------+
| count(*)   |
+------------+
| 4219677254 |
+------------+
1 row(s) fetched.
Elapsed 0.924 seconds.

Object Store Profiling
Instrumented Object Store: instrument_mode: Summary, inner: AmazonS3(overturemaps-us-west-2)
Summaries:
+-----------+----------+-----------+-----------+-----------+-----------+-------+
| Operation | Metric   | min       | max       | avg       | sum       | count |
+-----------+----------+-----------+-----------+-----------+-----------+-------+
| List      | duration | 0.040526s | 0.161407s | 0.092792s | 2.041431s | 22    |
| List      | size     |           |           |           |           | 22    |
+-----------+----------+-----------+-----------+-----------+-----------+-------+
>

List requests for this PR:

DataFusion CLI v50.2.0
> \object_store_profiling summary
ObjectStore Profile mode set to Summary
> CREATE EXTERNAL TABLE overture_partitioned
STORED AS PARQUET LOCATION 's3://overturemaps-us-west-2/release/2025-09-24.0/';
0 row(s) fetched.
Elapsed 33.962 seconds.

Object Store Profiling
Instrumented Object Store: instrument_mode: Summary, inner: AmazonS3(overturemaps-us-west-2)
Summaries:
+-----------+----------+-----+-----+-----+-----+-------+
| Operation | Metric   | min | max | avg | sum | count |
+-----------+----------+-----+-----+-----+-----+-------+
| List      | duration |     |     |     |     | 1     |
| List      | size     |     |     |     |     | 1     |
+-----------+----------+-----+-----+-----+-----+-------+
Instrumented Object Store: instrument_mode: Summary, inner: AmazonS3(overturemaps-us-west-2)
Summaries:
+-----------+----------+-----------+-----------+-------------+-------------+-------+
| Operation | Metric   | min       | max       | avg         | sum         | count |
+-----------+----------+-----------+-----------+-------------+-------------+-------+
| Get       | duration | 0.043832s | 0.342730s | 0.110505s   | 171.393509s | 1551  |
| Get       | size     | 8 B       | 1285059 B | 338457.56 B | 524947683 B | 1551  |
| List      | duration |           |           |             |             | 3     |
| List      | size     |           |           |             |             | 3     |
+-----------+----------+-----------+-----------+-------------+-------------+-------+
> select count(*) from overture_partitioned;
+------------+
| count(*)   |
+------------+
| 4219677254 |
+------------+
1 row(s) fetched.
Elapsed 38.119 seconds.

Object Store Profiling
Instrumented Object Store: instrument_mode: Summary, inner: AmazonS3(overturemaps-us-west-2)
Summaries:
+-----------+----------+-----------+-----------+-------------+-------------+-------+
| Operation | Metric   | min       | max       | avg         | sum         | count |
+-----------+----------+-----------+-----------+-------------+-------------+-------+
| Get       | duration | 0.043186s | 0.296394s | 0.099681s   | 154.605286s | 1551  |
| Get       | size     | 8 B       | 1285059 B | 338457.56 B | 524947683 B | 1551  |
| List      | duration |           |           |             |             | 1     |
| List      | size     |           |           |             |             | 1     |
+-----------+----------+-----------+-----------+-------------+-------------+-------+
> select count(*) from overture_partitioned;
+------------+
| count(*)   |
+------------+
| 4219677254 |
+------------+
1 row(s) fetched.
Elapsed 0.815 seconds.

Object Store Profiling
Instrumented Object Store: instrument_mode: Summary, inner: AmazonS3(overturemaps-us-west-2)
Summaries:
+-----------+----------+-----+-----+-----+-----+-------+
| Operation | Metric   | min | max | avg | sum | count |
+-----------+----------+-----+-----+-----+-----+-------+
| List      | duration |     |     |     |     | 1     |
| List      | size     |     |     |     |     | 1     |
+-----------+----------+-----+-----+-----+-----+-------+
>

List operations

| Action           | main | this PR |
|------------------|------|---------|
| Create Table     | 3    | 3       |
| Cold-cache Query | 22   | 1       |
| Warm-cache Query | 22   | 1       |

What changes are included in this PR?

  • Refactored helpers related to listing, discovering, and pruning objects based on partitions to normalize the strategy between partitioned and flat tables

Are these changes tested?

Yes. The internal methods that have been modified are covered by existing tests.

Are there any user-facing changes?

No

Additional Notes

I want to surface that I believe there is a chance for a performance regression for certain queries against certain tables. One performance-related mechanism the existing code implements, but this code currently omits, is (potentially) reducing the number of partitions listed based on query filters. In order for the existing code to exercise this optimization, the query filters must contain all the path elements of a subdirectory as column filters. E.g.

Given a table with a directory-partitioning structure like:

path/to/table/a=1/b=2/c=3/data.parquet

This query:

select count(*) from table where a=1 and b=2;

Will result in listing the following path:

LIST: path/to/table/a=1/b=2/

Whereas this query:

select count(*) from table where b=2;

Will result in listing the following path:

LIST: path/to/table/

I believe the real-world impact of this omission is likely minimal, at least when using high-latency storage such as S3 or other object stores, especially considering the existing implementation is likely to execute multiple sequential LIST operations due to its breadth-first search implementation. The most likely configuration for a table that would be negatively impacted would be a table that holds many thousands of underlying objects (most cloud stores return recursive list requests with page sizes of many hundreds to thousands of objects) with a relatively shallow partition structure. I may be able to find or build a dataset that fulfills these criteria to test this assertion if there's concern about it.

I believe we could also augment the existing low-level object_store interactions to allow listing a prefix on a table, which would allow the same pruning of list operations with the code in this PR. The downside to this approach is that it either complicates future caching efforts or leads to cache fragmentation in a simpler cache implementation. I didn't include these changes in this PR to avoid the change set growing too large.
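
To illustrate that prefix idea, here is a rough sketch with purely hypothetical helper names (not the existing DataFusion code) of deriving a narrower LIST prefix from equality filters on the leading partition columns, matching the pruning behavior described above:

```rust
use std::collections::HashMap;

/// Hypothetical helper: extend the table root with `col=value` segments for as
/// long as the query supplies an equality filter on each leading partition
/// column, stopping at the first column without one.
fn list_prefix_for_filters(
    table_root: &str,
    partition_cols: &[&str],
    eq_filters: &HashMap<&str, &str>,
) -> String {
    let mut prefix = table_root.trim_end_matches('/').to_string();
    for col in partition_cols {
        match eq_filters.get(col) {
            Some(value) => prefix = format!("{prefix}/{col}={value}"),
            None => break,
        }
    }
    prefix
}

fn main() {
    let cols = ["a", "b", "c"];

    // a=1 AND b=2 narrows the listing to path/to/table/a=1/b=2 ...
    let filters = HashMap::from([("a", "1"), ("b", "2")]);
    assert_eq!(
        list_prefix_for_filters("path/to/table/", &cols, &filters),
        "path/to/table/a=1/b=2"
    );

    // ... while b=2 alone cannot narrow anything, so the whole root is listed.
    let filters = HashMap::from([("b", "2")]);
    assert_eq!(
        list_prefix_for_filters("path/to/table/", &cols, &filters),
        "path/to/table"
    );
}
```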

cc @alamb

@BlakeOrth force-pushed the feature/partitions_list_all branch from 2e2af3c to 0721219 on October 17, 2025 at 22:40
@github-actions bot added the catalog (Related to the catalog crate) label on Oct 17, 2025

[dev-dependencies]
datafusion-datasource-parquet = { workspace = true }
datafusion = { workspace = true }
Contributor Author (@BlakeOrth):

It looks like this dependency tripped the circular dependency check even though it's a dev dependency for test setup. Is there an alternative mechanism to get access to a SessionStateBuilder for testing rather than using the import here?

https://github.com/apache/datafusion/pull/18146/files#diff-d73871e86bf1a466152863951f635e91ad931a16f6d40863b5061e39eefeea31R461

@alamb (Contributor) commented Oct 19, 2025

> I want to surface that I believe there is a chance for a performance regression for certain queries against certain tables. One performance-related mechanism the existing code implements,

I agree with your assessment that this is likely to be minimal -- especially given that queries that request thousands of objects will therefore require many thousands of S3 requests for the data files themselves.

@alamb left a comment

Thanks @BlakeOrth -- I started reviewing this PR and hope to do more later today

fn task_ctx(&self) -> Arc<datafusion_execution::TaskContext> {
    unimplemented!()
}

let state = SessionStateBuilder::new().build();
Contributor (@alamb):

In order to avoid circular dependencies (needed to allow datafusion to compile faster), the API needed for the catalog is in the Session trait, which is implemented by SessionState but can also be implemented by other types.

Thus, in this case the options are:

  1. Keep the MockSession and implement whatever APIs it needs
  2. Move the tests to the datafusion crate (e.g. somewhere in https://github.com/apache/datafusion/blob/main/datafusion/core/tests/core_integration.rs)

@alamb added the performance (Make DataFusion faster) label on Oct 20, 2025
@alamb left a comment

Thank you @BlakeOrth

> tl;dr of the issue: normalizing the access pattern(s) for objects for partitioned tables should not only reduce the number of requests to a backing object store, but will also allow any existing and/or future caching mechanisms to apply equally to both directory-partitioned and flat tables.

I don't fully understand this. Is the idea that the current code will do something like

LIST path/to/table/a=1/b=2/c=3/

But if we aren't more clever the basic cache will just have a list like

LIST path/to/table/

(and thus not be able to satisfy the request)?

It seems to me that we may have to implement prefix listing on the files cache as well, to avoid causing regressions in existing functionality.

Ok(out)
}

async fn prune_partitions(
Contributor (@alamb):

I reviewed some of the context of this code that does multi-level pruning of partitions, see

In that case, @kdbrooks reported querying a remote table with 27,000 partitions

> I want to surface that I believe there is a chance for a performance regression for certain queries against certain tables. One performance-related mechanism the existing code implements, but this code currently omits, is (potentially) reducing the number of partitions listed based on query filters. In order for the existing code to exercise this optimization, the query filters must contain all the path elements of a subdirectory as column filters.

Thus I think this may need to be reconsidered

Contributor Author (@BlakeOrth):

27,000 partitions, wow, the diversity of structured data never ceases to amaze me. Let me review the PR you posted here closely to make sure we keep use cases like this in mind.

Contributor Author (@BlakeOrth):

@alamb I've reviewed the changes and discussion in the PR you listed, and it seems like the big performance boost was mostly attributed to parallelism for listing partitions. It seems like there's likely a way to accomplish this with the code in the PR I've made here; however, I admit I haven't really explored that. It also seems I may have made a poor assumption that object_store::list() was already parallelized in some manner since it returns an unordered stream of results. It seems to me that a synthetic benchmark is likely a good place to start exploring potential solutions. Is there any prior art I can reference to build a benchmark in DataFusion?
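
For reference, a hypothetical method-level benchmark skeleton (all names are placeholders) in the criterion style DataFusion's existing benches use; the body would call the listing helper under test against a synthetic in-memory store:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical micro-benchmark skeleton: drives an async listing helper on a
// tokio runtime so its LIST behavior can be measured in isolation.
fn bench_list_partitions(c: &mut Criterion) {
    let runtime = tokio::runtime::Runtime::new().unwrap();
    c.bench_function("list_partitioned_table", |b| {
        b.iter(|| {
            runtime.block_on(async {
                // call the listing helper under test here, e.g. against an
                // in-memory object store populated with a synthetic layout
            })
        })
    });
}

criterion_group!(benches, bench_list_partitions);
criterion_main!(benches);
```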

@alamb (Contributor) commented Oct 20, 2025

@BlakeOrth what are your next planned steps? If I may suggest it, I think we should move ahead with adding a cache for the results of calling ObjectStore::list. I think we should be able to make everything faster (we don't have to choose)

@BlakeOrth (Contributor Author) commented:

> Thank you @BlakeOrth
>
> tl;dr of the issue: normalizing the access pattern(s) for objects for partitioned tables should not only reduce the number of requests to a backing object store, but will also allow any existing and/or future caching mechanisms to apply equally to both directory-partitioned and flat tables.
>
> I don't fully understand this. Is the idea that the current code will do something like
>
> LIST path/to/table/a=1/b=2/c=3/
>
> But if we aren't more clever the basic cache will just have a list like
>
> LIST path/to/table/
>
> (and thus not be able to satisfy the request)?
>
> It seems to me that we may have to implement prefix listing on the files cache as well, to avoid causing regressions in existing functionality.

@alamb So in the current code

LIST path/to/table/a=1/b=2/c=3/

This table cannot take advantage of any list file caching (at least as implemented) because the cache mechanisms don't exist for tables with partition columns. However, the current code can reduce the number of LIST operations for this table given appropriate query filters.

The code in this PR would enable a simple implementation of the list files cache to store a key for all objects under

LIST path/to/table/

and continue to appropriately filter cached results based on query filters. However, it would (again, as written) remove the ability to list specific prefixes based on query filters.

> It seems to me that we may have to implement prefix listing on the files cache as well, to avoid causing regressions in existing functionality.

If we implemented the ability to list a specific prefix in a table I think any cache would also need to be "prefix aware", otherwise we've more or less just made a lateral move where caching may apply to flat tables but not directory partitioned tables.

Does that help clarify this a bit? I hope I understood your question correctly. If we need more clarification on something I can probably put together and annotate some queries against a hypothetical table to help make this all a bit more clear.
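
To make the "prefix aware" idea concrete, here is a rough sketch (types and names are hypothetical, not DataFusion's actual ListFilesCache API) where one cache entry per table root answers prefix-scoped lookups by filtering, so flat and directory-partitioned tables share the same cached listing:

```rust
use std::collections::HashMap;
use std::sync::Arc;

/// Hypothetical prefix-aware list cache, keyed by table root (paths are plain
/// strings to keep the sketch self-contained; a real implementation would use
/// object_store's Path/ObjectMeta types).
#[derive(Default)]
struct PrefixAwareListCache {
    entries: HashMap<String, Arc<Vec<String>>>,
}

impl PrefixAwareListCache {
    /// Store the full recursive listing for a table root.
    fn put(&mut self, table_root: &str, objects: Vec<String>) {
        self.entries.insert(table_root.to_string(), Arc::new(objects));
    }

    /// Serve a prefix-scoped request (e.g. "path/to/table/a=1/b=2/") from the
    /// cached root listing if the prefix falls under a cached table root;
    /// otherwise return None so the caller issues a real LIST.
    fn get_prefix(&self, prefix: &str) -> Option<Vec<String>> {
        self.entries.iter().find_map(|(root, objects)| {
            prefix.starts_with(root.as_str()).then(|| {
                objects
                    .iter()
                    .filter(|object| object.starts_with(prefix))
                    .cloned()
                    .collect()
            })
        })
    }
}

fn main() {
    let mut cache = PrefixAwareListCache::default();
    cache.put(
        "path/to/table/",
        vec![
            "path/to/table/a=1/b=2/c=3/data.parquet".to_string(),
            "path/to/table/a=1/b=9/c=4/data.parquet".to_string(),
        ],
    );
    // Both the whole-table lookup and the narrower prefix lookup are answered
    // from the single cached entry.
    assert_eq!(cache.get_prefix("path/to/table/").map(|v| v.len()), Some(2));
    assert_eq!(cache.get_prefix("path/to/table/a=1/b=2/").map(|v| v.len()), Some(1));
}
```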

@BlakeOrth (Contributor Author) commented:

> @BlakeOrth what are your next planned steps? If I may suggest it, I think we should move ahead with adding a cache for the results of calling ObjectStore::list. I think we should be able to make everything faster (we don't have to choose)

@alamb I think my ideal next step if this PR were to be merged would be adding a default implementation of the ListFilesCache that leverages the existing cache infrastructure. We could obviously do this without merging this work, but doing so would leave directory-partitioned tables even further removed from flat tables. I suppose in theory we could first implement the default cache and then work to normalize access patterns between the two (or manually add the cache somewhere into the partitioned workflow).

@alamb (Contributor) commented Oct 22, 2025

I'll take another look at this today

@alamb left a comment

I went through this PR again and I am pretty sure it will cause regressions in performance for people that have non-trivially sized partitions.

However, I am not sure I fully understand the implications of this change (nor the existing code), and since all the tests continue to pass in this PR it suggests to me there is not particularly good test coverage either.

It is not at all clear to me how this helps enable the ListFilesCache. Does it make it easier to thread through the references to the cache? Maybe I can help make that work.

I also started on getting some more test coverage for exactly what requests are being made given various setups in the following PR, which I think will shed some more light on what is going on.


Ok::<_, DataFusionError>(stream)
})
.buffer_unordered(CONCURRENCY_LIMIT)
Contributor (@alamb):

This line's call to buffer_unordered basically means it runs the body of the stream in parallel -- in this case it parses the partition values and calls list individually for each partition 🤔
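
For readers unfamiliar with the combinator, a small self-contained sketch of how buffer_unordered drives the listing futures concurrently (simulated listings stand in for real object_store calls, and the concurrency constant is only a stand-in for the one used in the code above):

```rust
use futures::{stream, StreamExt};
use std::time::Duration;

const CONCURRENCY_LIMIT: usize = 100; // stand-in for the constant in the PR

#[tokio::main]
async fn main() {
    let partitions = vec!["a=1", "a=2", "a=3"];

    // Each partition becomes a future; buffer_unordered polls up to
    // CONCURRENCY_LIMIT of them at once and yields results as they complete,
    // which is why the trace timestamps below overlap and arrive out of order.
    let listed: Vec<String> = stream::iter(partitions)
        .map(|p| async move {
            tokio::time::sleep(Duration::from_millis(10)).await; // simulated LIST latency
            format!("LIST tmp/test_table/{p}")
        })
        .buffer_unordered(CONCURRENCY_LIMIT)
        .collect()
        .await;

    println!("{listed:#?}");
}
```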

Contributor Author (@BlakeOrth):

Yes, although it's subtly more complex than that. The existing implementation doesn't blindly list all known partitions in parallel. It's probably more fair to say that it potentially filters known partitions and then rediscovers all un-pruned partitions in parallel. I think an example probably shows this off pretty well.

Given a table with the following structure:

test_table/
├── a=1
│   └── b=10
│       └── c=100
│           └── file1.parquet
├── a=2
│   └── b=20
│       └── c=200
│           └── file2.parquet
└── a=3
    └── b=30
        └── c=300
            └── file2.parquet

Here are some annotated query examples showing the list operations:

> create external table test_table
stored as parquet location '/tmp/test_table/';

> select count(*) from test_table;
+----------+
| count(*) |
+----------+
| 6        |
+----------+
1 row(s) fetched.
Elapsed 0.007 seconds.

Object Store Profiling
Instrumented Object Store: instrument_mode: Trace, inner: LocalFileSystem(file:///)
# This list call is executed, rediscovering partition 'a'
2025-10-22T21:40:58.237088151+00:00 operation=List duration=0.000294s path=tmp/test_table
# ----
# These 3 list calls are executed in parallel, rediscovering all of the partitions at the 2nd level, in this case 'b'
2025-10-22T21:40:58.237397741+00:00 operation=List duration=0.000081s path=tmp/test_table/a=1
2025-10-22T21:40:58.237414558+00:00 operation=List duration=0.000069s path=tmp/test_table/a=2
2025-10-22T21:40:58.237436985+00:00 operation=List duration=0.000101s path=tmp/test_table/a=3
# ---
# Then the 'b' partitions are listed in parallel, rediscovering the 'c' partitions
2025-10-22T21:40:58.237487175+00:00 operation=List duration=0.000056s path=tmp/test_table/a=1/b=10
2025-10-22T21:40:58.237513848+00:00 operation=List duration=0.000058s path=tmp/test_table/a=2/b=20
# Then the 'c' partitions are listed in parallel, finally discovering readable files for this table.
# Note that, while the 'c' partition directly following this comment is returned prior to a 'b' partition, the timestamps 
# indicate the list call for the 'b' partition was submitted first.
2025-10-22T21:40:58.237576223+00:00 operation=List duration=0.000047s path=tmp/test_table/a=2/b=20/c=200
2025-10-22T21:40:58.237548133+00:00 operation=List duration=0.000080s path=tmp/test_table/a=3/b=30
2025-10-22T21:40:58.237560094+00:00 operation=List duration=0.000088s path=tmp/test_table/a=1/b=10/c=100
2025-10-22T21:40:58.237631945+00:00 operation=List duration=0.000095s path=tmp/test_table/a=3/b=30/c=300
# Only after the partitions are listed and files discovered does any reading of the actual data start
2025-10-22T21:40:58.238183601+00:00 operation=Get duration=0.000085s size=8 range: bytes=477-484 path=tmp/test_table/a=2/b=20/c=200/file2.parquet
2025-10-22T21:40:58.238237666+00:00 operation=Get duration=0.000041s size=8 range: bytes=477-484 path=tmp/test_table/a=3/b=30/c=300/file2.parquet
. . .

Next, using a simple filter on a single partition column value. Note the list calls are identical even though there's a filter in the query:

> select count(*) from test_table where b=30;
+----------+
| count(*) |
+----------+
| 2        |
+----------+
1 row(s) fetched.
Elapsed 0.018 seconds.

Object Store Profiling
Instrumented Object Store: instrument_mode: Trace, inner: LocalFileSystem(file:///)
2025-10-22T21:53:51.490170215+00:00 operation=List duration=0.000815s path=tmp/test_table
2025-10-22T21:53:51.491027338+00:00 operation=List duration=0.000355s path=tmp/test_table/a=1
2025-10-22T21:53:51.491104157+00:00 operation=List duration=0.000355s path=tmp/test_table/a=2
2025-10-22T21:53:51.491261191+00:00 operation=List duration=0.000211s path=tmp/test_table/a=3
2025-10-22T21:53:51.491484012+00:00 operation=List duration=0.000184s path=tmp/test_table/a=2/b=20
2025-10-22T21:53:51.491405857+00:00 operation=List duration=0.000360s path=tmp/test_table/a=1/b=10
2025-10-22T21:53:51.491523030+00:00 operation=List duration=0.000259s path=tmp/test_table/a=3/b=30
2025-10-22T21:53:51.491698312+00:00 operation=List duration=0.000181s path=tmp/test_table/a=2/b=20/c=200
2025-10-22T21:53:51.491793944+00:00 operation=List duration=0.000363s path=tmp/test_table/a=1/b=10/c=100
2025-10-22T21:53:51.491833864+00:00 operation=List duration=0.000350s path=tmp/test_table/a=3/b=30/c=300

This query shows where I believe there is a potential performance regression with this PR exactly as written. This shows the existing code pruning list operations when the filter can be evaluated against the known partition columns.

> select count(*) from test_table where a=3 and b=30;
+----------+
| count(*) |
+----------+
| 2        |
+----------+
1 row(s) fetched.
Elapsed 0.012 seconds.

Object Store Profiling
Instrumented Object Store: instrument_mode: Trace, inner: LocalFileSystem(file:///)
2025-10-22T21:57:58.229995504+00:00 operation=List duration=0.000373s path=tmp/test_table/a=3/b=30
2025-10-22T21:57:58.230384839+00:00 operation=List duration=0.000146s path=tmp/test_table/a=3/b=30/c=300

However, the above optimization only applies when the full column path from the beginning of the table structure is present, as the following query goes back to listing every directory in the table.

> select count(*) from test_table where b=20 and c=200;
+----------+
| count(*) |
+----------+
| 2        |
+----------+
1 row(s) fetched.
Elapsed 0.013 seconds.

Object Store Profiling
Instrumented Object Store: instrument_mode: Trace, inner: LocalFileSystem(file:///)
2025-10-22T22:01:02.257613181+00:00 operation=List duration=0.000386s path=tmp/test_table
2025-10-22T22:01:02.258016229+00:00 operation=List duration=0.000196s path=tmp/test_table/a=1
2025-10-22T22:01:02.258038911+00:00 operation=List duration=0.000181s path=tmp/test_table/a=2
2025-10-22T22:01:02.258176544+00:00 operation=List duration=0.000046s path=tmp/test_table/a=3
2025-10-22T22:01:02.258240850+00:00 operation=List duration=0.000043s path=tmp/test_table/a=2/b=20
2025-10-22T22:01:02.258250880+00:00 operation=List duration=0.000052s path=tmp/test_table/a=3/b=30
2025-10-22T22:01:02.258226660+00:00 operation=List duration=0.000080s path=tmp/test_table/a=1/b=10
2025-10-22T22:01:02.258288622+00:00 operation=List duration=0.000045s path=tmp/test_table/a=2/b=20/c=200
2025-10-22T22:01:02.258310145+00:00 operation=List duration=0.000035s path=tmp/test_table/a=3/b=30/c=300
2025-10-22T22:01:02.258321461+00:00 operation=List duration=0.000039s path=tmp/test_table/a=1/b=10/c=100

@BlakeOrth (Contributor Author) commented Oct 22, 2025

@alamb

> I went through this PR again and I am pretty sure it will cause regressions in performance for people that have non-trivially sized partitions.

Yes, I'm almost sure there is more than one case where there would be a regression. Similarly, there are likely cases where this PR would improve performance even without caching (for tables with a relatively low number of objects but a deep partition structure, similar to the example table in my code comment response above, for instance). Based on the PR you referenced in a PR comment above, I think it might be best to see if we can cook up a synthetic benchmark to help us better understand where the performance tradeoffs might be. Does that sound reasonable? If so, can you point me to good existing benchmarks that target methods (as opposed to an integration level bench)?

> However, I am not sure I fully understand the implications of this change (nor the existing code), and since all the tests continue to pass in this PR it suggests to me there is not particularly good test coverage either.

To be fair, I ensured all existing tests passed prior to submitting this PR 😄 I did review the existing unit tests and to my surprise they seemed to address most test scenarios I could think of (and several I had not yet thought of). Additionally, unless I'm mistaken, this code path should be exercised in many sqllogic tests, which have cases specifically looking at partitioned tables. I did have to make a couple of bug fixes from my initial implementation to get all the existing unit and sqllogic tests to pass.

> It is not at all clear to me how this helps enable the ListFilesCache. Does it make it easier to thread through the references to the cache?

Currently the ListFilesCache is only invoked in the list_all_files method. The existing code never calls this method when a table has partition columns. This PR uses list_all_files regardless of whether or not a table has partition columns, making the cache infrastructure available to all ListingTables.
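
To illustrate the control flow being described, a simplified sketch (the map-based cache and the function signature are hypothetical, not DataFusion's actual ListFilesCache API; only store.list is the real object_store call):

```rust
use futures::TryStreamExt;
use object_store::{path::Path, ObjectMeta, ObjectStore};
use std::{collections::HashMap, sync::Arc};
use tokio::sync::Mutex;

/// Hypothetical, simplified stand-in for the list_all_files + ListFilesCache
/// wiring: check the cache keyed by the table root, fall back to a single
/// recursive LIST on a miss, then populate the cache.
async fn list_all_files_cached(
    store: Arc<dyn ObjectStore>,
    cache: Arc<Mutex<HashMap<Path, Arc<Vec<ObjectMeta>>>>>,
    table_root: Path,
) -> object_store::Result<Arc<Vec<ObjectMeta>>> {
    if let Some(hit) = cache.lock().await.get(&table_root) {
        // Warm-cache query: no LIST request at all, matching the profile above.
        return Ok(Arc::clone(hit));
    }
    // Cold cache: one recursive LIST from the table root, partitioned or not.
    let objects: Vec<ObjectMeta> = store.list(Some(&table_root)).try_collect().await?;
    let objects = Arc::new(objects);
    cache.lock().await.insert(table_root, Arc::clone(&objects));
    Ok(objects)
}
```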


Labels: catalog (Related to the catalog crate), performance (Make DataFusion faster)