make Distributed single shard table use logic similar to SingleShardTableShard #7572
@@ -774,15 +774,64 @@ SELECT a.author_id as first_author, b.word_count as second_word_count
FROM articles_hash a, articles_single_shard_hash b
WHERE a.author_id = 10 and a.author_id = b.author_id
ORDER BY 1,2 LIMIT 3;
DEBUG: Creating router plan
DEBUG: query has a single distribution column value: 10
 first_author | second_word_count
Review comment: @JelteF scenarios like this can change behavior because the single shard is now on a different node.

Reply: I don't think that's a big issue (although we should modify the test accordingly). The reason I don't think it's a big issue is that it requires setting …
---------------------------------------------------------------------
 10 | 19519
 10 | 19519
 10 | 19519
(3 rows)

DEBUG: found no worker with all shard placements
DEBUG: push down of limit count: 3
DEBUG: join prunable for task partitionId 0 and 1
DEBUG: join prunable for task partitionId 0 and 2
DEBUG: join prunable for task partitionId 0 and 3
DEBUG: join prunable for task partitionId 0 and 4
DEBUG: join prunable for task partitionId 0 and 5
DEBUG: join prunable for task partitionId 1 and 0
DEBUG: join prunable for task partitionId 1 and 2
DEBUG: join prunable for task partitionId 1 and 3
DEBUG: join prunable for task partitionId 1 and 4
DEBUG: join prunable for task partitionId 1 and 5
DEBUG: join prunable for task partitionId 2 and 0
DEBUG: join prunable for task partitionId 2 and 1
DEBUG: join prunable for task partitionId 2 and 3
DEBUG: join prunable for task partitionId 2 and 4
DEBUG: join prunable for task partitionId 2 and 5
DEBUG: join prunable for task partitionId 3 and 0
DEBUG: join prunable for task partitionId 3 and 1
DEBUG: join prunable for task partitionId 3 and 2
DEBUG: join prunable for task partitionId 3 and 4
DEBUG: join prunable for task partitionId 3 and 5
DEBUG: join prunable for task partitionId 4 and 0
DEBUG: join prunable for task partitionId 4 and 1
DEBUG: join prunable for task partitionId 4 and 2
DEBUG: join prunable for task partitionId 4 and 3
DEBUG: join prunable for task partitionId 4 and 5
DEBUG: join prunable for task partitionId 5 and 0
DEBUG: join prunable for task partitionId 5 and 1
DEBUG: join prunable for task partitionId 5 and 2
DEBUG: join prunable for task partitionId 5 and 3
DEBUG: join prunable for task partitionId 5 and 4
DEBUG: pruning merge fetch taskId 1
DETAIL: Creating dependency on merge taskId 2
DEBUG: pruning merge fetch taskId 2
DETAIL: Creating dependency on merge taskId 2
DEBUG: pruning merge fetch taskId 4
DETAIL: Creating dependency on merge taskId 4
DEBUG: pruning merge fetch taskId 5
DETAIL: Creating dependency on merge taskId 4
DEBUG: pruning merge fetch taskId 7
DETAIL: Creating dependency on merge taskId 6
DEBUG: pruning merge fetch taskId 8
DETAIL: Creating dependency on merge taskId 6
DEBUG: pruning merge fetch taskId 10
DETAIL: Creating dependency on merge taskId 8
DEBUG: pruning merge fetch taskId 11
DETAIL: Creating dependency on merge taskId 8
DEBUG: pruning merge fetch taskId 13
DETAIL: Creating dependency on merge taskId 10
DEBUG: pruning merge fetch taskId 14
DETAIL: Creating dependency on merge taskId 10
DEBUG: pruning merge fetch taskId 16
DETAIL: Creating dependency on merge taskId 12
DEBUG: pruning merge fetch taskId 17
DETAIL: Creating dependency on merge taskId 12
ERROR: the query contains a join that requires repartitioning
HINT: Set citus.enable_repartition_joins to on to enable repartitioning
SET citus.enable_non_colocated_router_query_pushdown TO OFF;
-- but this is not the case otherwise
SELECT a.author_id as first_author, b.word_count as second_word_count
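As an aside on the new ERROR and HINT above: the HINT names the GUC that lets the planner carry out the repartition job instead of erroring. A minimal illustrative sketch, reusing the test's own query; the test deliberately leaves this GUC off, so this is not part of the diff, just what the HINT is pointing at:

-- Follow the HINT: allow repartition joins for this session,
-- then re-run the same join from the test.
SET citus.enable_repartition_joins TO on;
SELECT a.author_id AS first_author, b.word_count AS second_word_count
FROM articles_hash a, articles_single_shard_hash b
WHERE a.author_id = 10 AND a.author_id = b.author_id
ORDER BY 1,2 LIMIT 3;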
Review comment: This is not testing the intended thing afaict. You want to test that the placement of the shard is different, not that the colocation id is different. The colocation id would also be different if citus.enable_single_shard_table_multi_node_placement was set to off. The placement of the shards can be checked easily in pg_dist_shard_placement or citus_shards.
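A sketch of that suggestion, assuming the articles_single_shard_hash table from the diff above; the metadata views and columns are standard Citus, but the exact assertion is up to the test author:

-- Check which node actually holds the single shard. Asserting on
-- nodename/nodeport catches a placement regression, which a
-- colocation-id check alone would not.
SELECT p.shardid, p.nodename, p.nodeport
FROM pg_dist_shard s
JOIN pg_dist_shard_placement p USING (shardid)
WHERE s.logicalrelid = 'articles_single_shard_hash'::regclass;

citus_shards exposes similar placement information per shard, so either view would serve for the test.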