Currently, our implementation of the AnalyzeForeignTable callback, clickhouseAnalyzeForeignTable(), assigns a no-op function. Borrow from the postgres_fdw implementation: fetch the table's size from system.tables or system.parts (maybe like this?), and assign an estimation function that uses SELECT ... SAMPLE to generate the statistics. Determine whether this would impose too much storage overhead for stats on very large ClickHouse tables, or document how to keep them within reason (and perhaps auto-select the sample size in IMPORT SCHEMA?).
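As a rough sketch of the two ClickHouse queries such a callback might issue, here is an illustrative Python snippet. The helper names (build_size_query, build_sample_query) are hypothetical, not part of any existing FDW code; the first query sums on-disk bytes over active parts in system.parts, and the second draws a fractional row sample (note that SAMPLE only works on tables whose engine declares a SAMPLE BY key):

```python
# Hypothetical sketch of the queries an AnalyzeForeignTable implementation
# for ClickHouse could send. Helper names are illustrative only.

def build_size_query(database: str, table: str) -> str:
    """Estimate total on-disk size by summing bytes over active parts."""
    return (
        "SELECT sum(bytes_on_disk) FROM system.parts "
        f"WHERE database = '{database}' AND table = '{table}' AND active"
    )

def build_sample_query(database: str, table: str, fraction: float) -> str:
    """Draw a row sample for statistics; the table's engine must define
    a SAMPLE BY key for the SAMPLE clause to be accepted."""
    return f"SELECT * FROM {database}.{table} SAMPLE {fraction}"

if __name__ == "__main__":
    print(build_size_query("default", "hits"))
    print(build_sample_query("default", "hits", 0.01))
```

The sample fraction is the knob that controls stats overhead on very large tables; exposing it as a table/server option (and defaulting it in IMPORT SCHEMA) would keep it within reason.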