
Backport CI & test output fixes to release-11.3 #7878

Merged: 15 commits from release-11.3-fix-CI-02-04-25 into release-11.3 on Feb 4, 2025

Conversation

@onurctirtir (Member) commented on Feb 4, 2025

To future developers:

We cannot backport 27ac44e because the Citus upgrade tests that we run on 11.3 (and potentially on older versions) actually want to test the scenario where the coordinator is also upgraded.

Note that 27ac44e was the commit where we adjusted the test suite to not upgrade the coordinator while doing so for one of the workers in the cluster, in order to test rolling Citus upgrades in the Citus upgrade tests.

onurctirtir and others added 9 commits February 4, 2025 12:31
Removes el/7 and ol/7 as runners and updates checkout action to v4

We use EL/7 and OL/7 runners to test packaging for these distributions.
However, for the past two weeks, we've encountered errors during the
checkout step in the pipelines. The error message is as follows:
```
/__e/node20/bin/node: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /__e/node20/bin/node)
/__e/node20/bin/node: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by /__e/node20/bin/node)
```
The glibc version within the EL/7 and OL/7 Docker images is 2.17, and we
cannot upgrade it. Therefore, we need to remove these images from the
packaging test pipelines. Consequently, we will no longer verify that the
code builds for EL/7 and OL/7.

However, we are not using these images as runners within the packaging
infrastructure itself, so we can continue to use them for building
packages.

Additional info: I learned that the Marlin team fully dropped el/7
support, so we will drop it in future releases as well.

(cherry picked from commit c603c3e)
DESCRIPTION: Removes ubuntu/bionic from packaging pipelines

Since the pg16 beta is not available for ubuntu/bionic and ubuntu/bionic
support is EOL, I need to remove this OS from the pipeline:
https://ubuntu.com/blog/ubuntu-18-04-eol-for-devices

Additionally, added concurrency support for the GH Actions Packaging
pipeline.

(cherry picked from commit 553780e)
foreign_key_to_reference_shard_rebalance failed because the partition for
year 2024 does not exist; fixed by adding a default partition.
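
A minimal sketch of the fix, using a hypothetical range-partitioned table (the names and bounds are illustrative, not the actual test schema): a default partition absorbs rows that fall outside every declared range, so the test no longer breaks when a row lands in a year, such as 2024, that has no partition of its own.

```sql
-- hypothetical range-partitioned table; names and bounds are illustrative
CREATE TABLE part_table (event_time timestamptz, payload text)
    PARTITION BY RANGE (event_time);

CREATE TABLE part_table_2023 PARTITION OF part_table
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');

-- the fix: a default partition catches rows outside all defined ranges,
-- e.g. rows dated 2024 and later
CREATE TABLE part_table_default PARTITION OF part_table DEFAULT;
```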

Co-authored-by: chuhx <[email protected]>
(cherry picked from commit 968ac74)
For some reason, using localhost in our hba file no longer has the
intended effect in our GitHub Actions runners, probably because of some
networking change (IPv6, maybe) or some change in the `/etc/hosts` file.

Replacing localhost with the equivalent loopback IPv4 and IPv6 addresses
resolved this issue.
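
A sketch of the kind of change involved (the exact `pg_hba.conf` entries in the test harness may differ):

```diff
-host all all localhost     trust
+host all all 127.0.0.1/32  trust
+host all all ::1/128       trust
```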

(cherry picked from commit 8c9de08)
codecov bot commented Feb 4, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 89.59%. Comparing base (262c335) to head (3bc9c75).
Report is 21 commits behind head on release-11.3.

Additional details and impacted files
```
@@               Coverage Diff                @@
##           release-11.3    #7878      +/-   ##
================================================
- Coverage         93.26%   89.59%   -3.68%
================================================
  Files               269      268       -1
  Lines             57459    57530      +71
  Branches              0     7181    +7181
================================================
- Hits              53592    51546    -2046
- Misses             3867     3891      +24
- Partials              0     2093    +2093
```

@onurctirtir force-pushed the release-11.3-fix-CI-02-04-25 branch 4 times, most recently from 61ce025 to 3d486b3 on February 4, 2025 11:16
onurctirtir and others added 6 commits February 4, 2025 15:55
Fixes the following flaky outputs by decreasing the citus_task_wait loop
interval and changing the order of the wait commands (a sketch of the
wait pattern follows the diffs).

https://app.circleci.com/pipelines/github/citusdata/citus/32102/workflows/19958297-6c7e-49ef-9bc2-8efe8aacb96f/jobs/1089589

```diff
SELECT job_id, task_id, status, nodes_involved
 FROM pg_dist_background_task WHERE job_id in (:job_id) ORDER BY task_id;
  job_id | task_id |  status  | nodes_involved
 --------+---------+----------+----------------
   17779 |    1013 | done     | {50,56}
   17779 |    1014 | running  | {50,57}
-  17779 |    1015 | running  | {50,56}
-  17779 |    1016 | blocked  | {50,57}
+  17779 |    1015 | done     | {50,56}
+  17779 |    1016 | running  | {50,57}
   17779 |    1017 | runnable | {50,56}
   17779 |    1018 | blocked  | {50,57}
   17779 |    1019 | runnable | {50,56}
   17779 |    1020 | blocked  | {50,57}
 (8 rows)
```

#6893 (comment)
```diff
SELECT job_id, task_id, status, nodes_involved
 FROM pg_dist_background_task WHERE job_id in (:job_id) ORDER BY task_id;
  job_id | task_id |  status  | nodes_involved
 --------+---------+----------+----------------
   17779 |    1013 | done     | {50,56}
-  17779 |    1014 | running  | {50,57}
+  17779 |    1014 | runnable | {50,57}
   17779 |    1015 | running  | {50,56}
   17779 |    1016 | blocked  | {50,57}
   17779 |    1017 | runnable | {50,56}
   17779 |    1018 | blocked  | {50,57}
   17779 |    1019 | runnable | {50,56}
   17779 |    1020 | blocked  | {50,57}
 (8 rows)
```
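
The waiting itself uses `citus_task_wait`; here is a hedged sketch of the pattern (the task id is illustrative): wait for the task we assert on to reach a terminal status before querying the queue, instead of relying on scheduler timing.

```sql
-- wait until the task we assert on has actually finished
-- (1013 is an illustrative task id)
SELECT citus_task_wait(1013, desired_status => 'done');

SELECT job_id, task_id, status, nodes_involved
FROM pg_dist_background_task
WHERE job_id IN (:job_id)
ORDER BY task_id;
```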

(cherry picked from commit 905fd46)
https://github.com/citusdata/citus/actions/runs/6745019678/attempts/1#summary-18336188930
```diff
     insert into target_table SELECT a*2 FROM source_table RETURNING a;
-NOTICE:  executing the command locally: SELECT bytes FROM fetch_intermediate_results(ARRAY['repartitioned_results_xxxxx_from_4213582_to_0','repartitioned_results_xxxxx_from_4213584_to_0']::text[],'localhost',57638) bytes
+NOTICE:  executing the command locally: SELECT bytes FROM fetch_intermediate_results(ARRAY['repartitioned_results_3940758121873413_from_4213584_to_0','repartitioned_results_3940758121873413_from_4213582_to_0']::text[],'localhost',57638) bytes
```

The elements in the array passed to `fetch_intermediate_results` are the
same, but in the opposite order from the expected one.

To fix this flakiness, we can omit the `SELECT bytes FROM
fetch_intermediate_results...` line from the output. The surrounding logs
still make it clear that the intermediate results have been fetched.

(cherry picked from commit 0dc41ee)
Sometimes in CI citus_non_blocking_split_shard_cleanup failed like this:

```diff
--- /__w/citus/citus/src/test/regress/expected/citus_non_blocking_split_shard_cleanup.out.modified	2023-11-01 15:07:14.280551207 +0000
+++ /__w/citus/citus/src/test/regress/results/citus_non_blocking_split_shard_cleanup.out.modified	2023-11-01 15:07:14.292551358 +0000
@@ -106,21 +106,22 @@
 -----------------------------------

 (1 row)

 \c - - - :worker_2_port
 SET search_path TO "citus_split_test_schema";
 -- Replication slots should be cleaned up
 SELECT slot_name FROM pg_replication_slots;
             slot_name
 ---------------------------------
-(0 rows)
+ citus_shard_split_slot_19_10_17
+(1 row)

 -- Publications should be cleanedup
 SELECT count(*) FROM pg_publication;
  count
```

It's expected that the replication slot is sometimes not cleaned up if
we don't wait until resource cleanup completes. This PR now waits for
resource cleanup to complete before running those checks.
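
A minimal sketch of one way to wait, assuming the `pg_dist_cleanup` queue that Citus uses to record deferred cleanup (the actual test may rely on a dedicated helper UDF instead):

```sql
-- poll until no cleanup records remain, so the replication-slot
-- check below no longer races the background cleaner
DO $$
BEGIN
    WHILE EXISTS (SELECT 1 FROM pg_dist_cleanup) LOOP
        PERFORM pg_sleep(0.1);
    END LOOP;
END $$;

SELECT slot_name FROM pg_replication_slots;
```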

(cherry picked from commit e3c93c3)
Sometimes isolation_metadata_sync_deadlock fails in CI like this:

```diff
diff -dU10 -w /__w/citus/citus/src/test/regress/expected/isolation_metadata_sync_deadlock.out /__w/citus/citus/src/test/regress/results/isolation_metadata_sync_deadlock.out
--- /__w/citus/citus/src/test/regress/expected/isolation_metadata_sync_deadlock.out.modified	2023-11-01 16:03:15.090199229 +0000
+++ /__w/citus/citus/src/test/regress/results/isolation_metadata_sync_deadlock.out.modified	2023-11-01 16:03:15.098199312 +0000
@@ -110,10 +110,14 @@
 t
 (1 row)

 step s2-stop-connection:
  SELECT stop_session_level_connection_to_node();

 stop_session_level_connection_to_node
 -------------------------------------

 (1 row)
+
+teardown failed: ERROR:  localhost:57638 is a metadata node, but is out of sync
+HINT:  If the node is up, wait until metadata gets synced to it and try again.
+CONTEXT:  SQL statement "SELECT master_remove_distributed_table_metadata_from_workers(v_obj.objid, v_obj.schema_name, v_obj.object_name)"
```

Source:
https://github.com/citusdata/citus/actions/runs/6721938040/attempts/1#summary-18268946448

To fix this, we now wait for the metadata to be fully synced to all
nodes at the start of the teardown steps.
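
A hedged sketch of the teardown guard, assuming the `wait_until_metadata_sync` helper that the Citus regress suites use (the timeout argument is in milliseconds):

```sql
-- block until pending metadata changes reach all workers, so teardown
-- no longer fails with "is a metadata node, but is out of sync"
SELECT wait_until_metadata_sync(30000);
```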

(cherry picked from commit a6e8688)
@onurctirtir force-pushed the release-11.3-fix-CI-02-04-25 branch 4 times, most recently from 3bc9c75 to 3953fe4 on February 4, 2025 13:37
@onurctirtir enabled auto-merge (rebase) on February 4, 2025 13:38
@onurctirtir disabled auto-merge on February 4, 2025 14:21
@onurctirtir merged commit 814f1d1 into release-11.3 on Feb 4, 2025
212 of 229 checks passed
@onurctirtir deleted the release-11.3-fix-CI-02-04-25 branch on February 4, 2025 14:21