make Distributed single shard table use logic similar to SingleShardTableShard #7572
base: main
@@ -466,3 +466,52 @@ select create_reference_table('temp_table');
ERROR: cannot distribute a temporary table
DROP TABLE temp_table;
DROP TABLE shard_count_table_3;
-- test shard count 1 placement with colocate none.
-- create a base table instance
set citus.enable_single_shard_table_multi_node_placement to on;
CREATE TABLE shard_count_table_1_inst_1 (a int);
SELECT create_distributed_table('shard_count_table_1_inst_1', 'a', shard_count:=1, colocate_with:='none');
create_distributed_table
---------------------------------------------------------------------

(1 row)

-- create another table with similar requirements
CREATE TABLE shard_count_table_1_inst_2 (a int);
SELECT create_distributed_table('shard_count_table_1_inst_2', 'a', shard_count:=1, colocate_with:='none');
create_distributed_table
---------------------------------------------------------------------

(1 row)

-- Now check placement:
SELECT (SELECT colocation_id FROM citus_tables WHERE table_name = 'shard_count_table_1_inst_1'::regclass) != (SELECT colocation_id FROM citus_tables WHERE table_name = 'shard_count_table_1_inst_2'::regclass);
Review comment: This is not testing the intended thing afaict. You want to test that the placement of the shard is different, not that the colocation id is different. The colocation id would also be different if
The placement of the shards can be checked easily in
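A minimal sketch of the kind of placement check the comment asks for, written against the citus_shards view that the updated test below also queries (the exact query is illustrative, not taken from the review):

-- Illustrative only: compare the worker placements of the two single-shard tables.
SELECT s1.nodename <> s2.nodename OR s1.nodeport <> s2.nodeport AS placed_on_different_workers
FROM citus_shards s1, citus_shards s2
WHERE s1.table_name = 'shard_count_table_1_inst_1'::regclass
  AND s2.table_name = 'shard_count_table_1_inst_2'::regclass;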
?column?
---------------------------------------------------------------------
t
(1 row)

-- double check shard counts
SELECT (SELECT shard_count FROM citus_tables WHERE table_name = 'shard_count_table_1_inst_1'::regclass) = (SELECT shard_count FROM citus_tables WHERE table_name = 'shard_count_table_1_inst_2'::regclass);
?column?
---------------------------------------------------------------------
t
(1 row)

SELECT shard_count = 1 FROM citus_tables WHERE table_name = 'shard_count_table_1_inst_1'::regclass;
?column?
---------------------------------------------------------------------
t
(1 row)

-- check placement: These should be placed on different workers.
SELECT nodename || ':' || nodeport AS inst_1_node_endpoint FROM citus_shards WHERE table_name = 'shard_count_table_1_inst_1'::regclass \gset
SELECT nodename || ':' || nodeport AS inst_2_node_endpoint FROM citus_shards WHERE table_name = 'shard_count_table_1_inst_2'::regclass \gset
SELECT :'inst_1_node_endpoint', :'inst_2_node_endpoint', :'inst_1_node_endpoint' = :'inst_2_node_endpoint';
?column? | ?column? | ?column?
---------------------------------------------------------------------
localhost:xxxxx | localhost:xxxxx | f
(1 row)

DROP TABLE shard_count_table_1_inst_1;
DROP TABLE shard_count_table_1_inst_2;
@@ -774,15 +774,64 @@ SELECT a.author_id as first_author, b.word_count as second_word_count
FROM articles_hash a, articles_single_shard_hash b
WHERE a.author_id = 10 and a.author_id = b.author_id
ORDER BY 1,2 LIMIT 3;
DEBUG: Creating router plan
DEBUG: query has a single distribution column value: 10
first_author | second_word_count
Review comment: @JelteF scenarios like this can change behavior because the single shard is now on a different node.
Reply: I don't think that's a big issue (although we should modify the test accordingly). The reason I don't think it's a big issue is that it requires setting
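As a hedged illustration of the behavior change discussed in this thread: with the single shard placed on a different worker, the join can no longer be routed to one node, and the new expected output further down in this hunk ends in an ERROR whose HINT points at citus.enable_repartition_joins. A session could opt back into running the join roughly like this (sketch only, not part of the test):

-- Illustrative only: allow repartitioning so the non-colocated join can run.
SET citus.enable_repartition_joins TO on;
SELECT a.author_id AS first_author, b.word_count AS second_word_count
FROM articles_hash a, articles_single_shard_hash b
WHERE a.author_id = 10 AND a.author_id = b.author_id
ORDER BY 1, 2 LIMIT 3;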
---------------------------------------------------------------------
10 | 19519
10 | 19519
10 | 19519
(3 rows)

DEBUG: found no worker with all shard placements
DEBUG: push down of limit count: 3
DEBUG: join prunable for task partitionId 0 and 1
DEBUG: join prunable for task partitionId 0 and 2
DEBUG: join prunable for task partitionId 0 and 3
DEBUG: join prunable for task partitionId 0 and 4
DEBUG: join prunable for task partitionId 0 and 5
DEBUG: join prunable for task partitionId 1 and 0
DEBUG: join prunable for task partitionId 1 and 2
DEBUG: join prunable for task partitionId 1 and 3
DEBUG: join prunable for task partitionId 1 and 4
DEBUG: join prunable for task partitionId 1 and 5
DEBUG: join prunable for task partitionId 2 and 0
DEBUG: join prunable for task partitionId 2 and 1
DEBUG: join prunable for task partitionId 2 and 3
DEBUG: join prunable for task partitionId 2 and 4
DEBUG: join prunable for task partitionId 2 and 5
DEBUG: join prunable for task partitionId 3 and 0
DEBUG: join prunable for task partitionId 3 and 1
DEBUG: join prunable for task partitionId 3 and 2
DEBUG: join prunable for task partitionId 3 and 4
DEBUG: join prunable for task partitionId 3 and 5
DEBUG: join prunable for task partitionId 4 and 0
DEBUG: join prunable for task partitionId 4 and 1
DEBUG: join prunable for task partitionId 4 and 2
DEBUG: join prunable for task partitionId 4 and 3
DEBUG: join prunable for task partitionId 4 and 5
DEBUG: join prunable for task partitionId 5 and 0
DEBUG: join prunable for task partitionId 5 and 1
DEBUG: join prunable for task partitionId 5 and 2
DEBUG: join prunable for task partitionId 5 and 3
DEBUG: join prunable for task partitionId 5 and 4
DEBUG: pruning merge fetch taskId 1
DETAIL: Creating dependency on merge taskId 2
DEBUG: pruning merge fetch taskId 2
DETAIL: Creating dependency on merge taskId 2
DEBUG: pruning merge fetch taskId 4
DETAIL: Creating dependency on merge taskId 4
DEBUG: pruning merge fetch taskId 5
DETAIL: Creating dependency on merge taskId 4
DEBUG: pruning merge fetch taskId 7
DETAIL: Creating dependency on merge taskId 6
DEBUG: pruning merge fetch taskId 8
DETAIL: Creating dependency on merge taskId 6
DEBUG: pruning merge fetch taskId 10
DETAIL: Creating dependency on merge taskId 8
DEBUG: pruning merge fetch taskId 11
DETAIL: Creating dependency on merge taskId 8
DEBUG: pruning merge fetch taskId 13
DETAIL: Creating dependency on merge taskId 10
DEBUG: pruning merge fetch taskId 14
DETAIL: Creating dependency on merge taskId 10
DEBUG: pruning merge fetch taskId 16
DETAIL: Creating dependency on merge taskId 12
DEBUG: pruning merge fetch taskId 17
DETAIL: Creating dependency on merge taskId 12
ERROR: the query contains a join that requires repartitioning
HINT: Set citus.enable_repartition_joins to on to enable repartitioning
SET citus.enable_non_colocated_router_query_pushdown TO OFF;
-- but this is not the case otherwise
SELECT a.author_id as first_author, b.word_count as second_word_count
Review comment: Since we want to backport this, I think it indeed makes sense for this to be false by default. But let's create a PR right after merging this to change the default to true for future releases.
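If the default does stay off in backports, the new placement logic remains opt-in per session. A rough sketch of how that opt-in would look, reusing the GUC and the create_distributed_table arguments from the regression test above (the table name here is hypothetical):

-- Sketch, assuming citus.enable_single_shard_table_multi_node_placement defaults to off:
SET citus.enable_single_shard_table_multi_node_placement TO on;
CREATE TABLE opt_in_example (a int);  -- hypothetical example table
SELECT create_distributed_table('opt_in_example', 'a', shard_count := 1, colocate_with := 'none');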