Cleanup: update the name and information in files #731

Open
tuhaihe wants to merge 3 commits into main from cleanup-mails-and-names1126
Conversation

@tuhaihe (Member) commented Nov 26, 2024

Fixes #ISSUE_Number

What does this PR do?

This PR includes four commits that rename the old project name to "Apache Cloudberry" and update the contact information. See the commit messages for details. These commits touch many files, so please be patient during review.

Type of Change

  • Bug fix (non-breaking change)
  • New feature (non-breaking change)
  • Breaking change (fix or feature with breaking changes)
  • Documentation update

Breaking Changes

Test Plan

  • Unit tests added/updated
  • Integration tests added/updated
  • Passed make installcheck
  • Passed make -C src/test installcheck-cbdb-parallel

Impact

Performance:

User-facing changes:

Dependencies:

Checklist

Additional Context


Review thread on configure (resolved)
@edespino (Contributor) commented

@tuhaihe How were these changes tested?

@avamingli (Contributor) left a comment

Please rename only the DOC part.

@edespino (Contributor) left a comment

See attached file: there are many matches for CloudberryDB.
cloudberrydb-search.txt

See attached file: there are many matches for "Cloudberry" by itself
cloudberry-search.txt
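(For reference, a search along these lines can be reproduced with plain grep; the patterns and exclusions below are assumptions, not the exact commands behind the attachments.)

grep -rIn --exclude-dir=.git -e 'CloudberryDB' -e 'cloudberrydb' . > cloudberrydb-search.txt
grep -rInw --exclude-dir=.git 'Cloudberry' . > cloudberry-search.txt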

There are also some test failures which might be related to this PR:

  • gpcopy_encoding
  • dispatch_encoding
  • qp_misc_rio_join_small

I can work with you on running a wider test schedule before this is committed.

@tuhaihe (Member, Author) commented Nov 29, 2024

Hi @edespino, thanks for your feedback and support.
I will update my PR again to address the related comments.

@tuhaihe force-pushed the cleanup-mails-and-names1126 branch from aa70469 to 7c2bcd4 on December 4, 2024
@tuhaihe (Member, Author) commented Dec 4, 2024

The terms cloudberrydb and cbdb still appear in certain files, such as deploy/k8s and hd-ci/compile_cbdb.bash. For now, I would like to leave them unchanged and see how to update them later.

@tuhaihe (Member, Author) commented Dec 4, 2024

I can build my changes in a Docker container:

make[2]: Leaving directory '/cloudberry/gpcontrib/pxf_fdw'
make[1]: Leaving directory '/cloudberry/gpcontrib'
Apache Cloudberry (Incubating) installation complete.
[root@679abf66a2bb cloudberry]# git branch
  main
* tuhaihe/cleanup-mails-and-names1126
[root@679abf66a2bb cloudberry]# cat /etc/os-release
NAME="Rocky Linux"
VERSION="9.4 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.4"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Rocky Linux 9.4 (Blue Onyx)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2032-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
ROCKY_SUPPORT_PRODUCT_VERSION="9.4"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.4"

However, when I run the command make installcheck:

(using postmaster on Unix socket, default port)
============== dropping database "regression"         ==============
psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory
	Is the server running locally and accepting connections on that socket?
command failed: "/usr/local/cbdb/bin/psql" -X -c "DROP DATABASE IF EXISTS \"regression\"" "postgres"
make[1]: *** [GNUmakefile:208: installcheck-good] Error 2
make[1]: Leaving directory '/cloudberry/src/test/regress'
make: *** [GNUmakefile:146: installcheck] Error 2

@jiaqizho (Contributor) commented Dec 5, 2024

> I can build my changes in a Docker container: […]
> However, when I run make installcheck: psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory

Did you run make create-demo-cluster? You also need to export GPPORT/PGPORT.
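For context, this is roughly the flow being suggested (a minimal sketch assuming the gpdemo layout used by Greenplum-derived builds; the env-script path and variable names are assumptions):

make create-demo-cluster                # build and start a local demo cluster
source gpAux/gpdemo/gpdemo-env.sh       # assumed path; exports PGPORT (and the data-directory variable)
make installcheck                       # psql can now reach the demo cluster on PGPORT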

Review threads on configure and deploy/build/README.Linux.md (resolved)
@tuhaihe (Member, Author) commented Dec 6, 2024

Now running make installcheck returns:

=========================
 13 of 658 tests failed.
=========================

The differences that caused some tests to fail can be viewed in the
file "/home/gpadmin/cloudberry/src/test/regress/regression.diffs".  A copy of the test summary that you see
above is saved in the file "/home/gpadmin/cloudberry/src/test/regress/regression.out".

make[1]: *** [GNUmakefile:208: installcheck-good] Error 1
make[1]: Leaving directory '/home/gpadmin/cloudberry/src/test/regress'
make: *** [GNUmakefile:146: installcheck] Error 2

regression.out.txt
regression.diffs.txt

I will see how to fix them next week.

@my-ship-it (Contributor) commented

> Now running make installcheck returns: 13 of 658 tests failed. […] I will see how to fix them next week.

Feel free to ask engineers if you need any help.

@tuhaihe force-pushed the cleanup-mails-and-names1126 branch 2 times, most recently from ab4f5d9 to db79d34 on December 11, 2024
@tuhaihe (Member, Author) commented Dec 11, 2024

The latest make installcheck run returns:

========================
 4 of 659 tests failed.
========================

The differences that caused some tests to fail can be viewed in the
file "/home/gpadmin/cloudberry/src/test/regress/regression.diffs". The diffs are the following:

diff -I HINT: -I CONTEXT: -I GP_IGNORE: -U3 /home/gpadmin/cloudberry/src/test/regress/expected/explain_optimizer.out /home/gpadmin/cloudberry/src/test/regress/results/explain.out
--- /home/gpadmin/cloudberry/src/test/regress/expected/explain_optimizer.out	2024-12-11 00:06:32.043419735 -0800
+++ /home/gpadmin/cloudberry/src/test/regress/results/explain.out	2024-12-11 00:06:32.068420210 -0800
@@ -395,7 +395,6 @@
          },                                                 +
          "Settings": {                                      +
              "Optimizer": "Pivotal Optimizer (GPORCA)",     +
-             "optimizer": "on",                             +
              "enable_parallel": "off",                      +
              "parallel_setup_cost": "0",                    +
              "parallel_tuple_cost": "0",                    +
diff -I HINT: -I CONTEXT: -I GP_IGNORE: -U3 /home/gpadmin/cloudberry/src/test/regress/expected/gpcopy_encoding.out /home/gpadmin/cloudberry/src/test/regress/results/gpcopy_encoding.out
--- /home/gpadmin/cloudberry/src/test/regress/expected/gpcopy_encoding.out	2024-12-11 00:12:53.291486702 -0800
+++ /home/gpadmin/cloudberry/src/test/regress/results/gpcopy_encoding.out	2024-12-11 00:12:53.293486751 -0800
@@ -21,7 +21,7 @@
 copy enctest to '/tmp/enctest_utf_to_latin1-1' encoding 'latin1';
 set client_encoding='latin1';
 copy enctest to stdout;
-�
+�
 copy enctest to '/tmp/enctest_utf_to_latin1-2';
 -- Connect to 'latin1' database, and load back the files we just created.
 -- This is to check that they were created correctly, and that the ENCODING
diff -I HINT: -I CONTEXT: -I GP_IGNORE: -U3 /home/gpadmin/cloudberry/src/test/regress/expected/dispatch_encoding.out /home/gpadmin/cloudberry/src/test/regress/results/dispatch_encoding.out
--- /home/gpadmin/cloudberry/src/test/regress/expected/dispatch_encoding.out	2024-12-11 00:15:14.095364710 -0800
+++ /home/gpadmin/cloudberry/src/test/regress/results/dispatch_encoding.out	2024-12-11 00:15:14.097364748 -0800
@@ -55,14 +55,16 @@

 select raise_error(t) from enctest;
 ERROR:  raise_error called on "funny char Ä"
+CONTEXT:  PL/pgSQL function raise_error(text) line 3 at RAISE
 -- now do it again with latin1
 set client_encoding='latin1';
 select raise_notice(t) from enctest;
-NOTICE:  raise_notice called on "funny char �"
+NOTICE:  raise_notice called on "funny char �"
  raise_notice
 --------------

 (1 row)

 select raise_error(t) from enctest;
-ERROR:  raise_error called on "funny char �"
+ERROR:  raise_error called on "funny char �"
+CONTEXT:  PL/pgSQL function raise_error(text) line 3 at RAISE
diff -I HINT: -I CONTEXT: -I GP_IGNORE: -U3 /home/gpadmin/cloudberry/src/test/regress/expected/cbdb_parallel.out /home/gpadmin/cloudberry/src/test/regress/results/cbdb_parallel.out
--- /home/gpadmin/cloudberry/src/test/regress/expected/cbdb_parallel.out	2024-12-11 00:40:32.554101669 -0800
+++ /home/gpadmin/cloudberry/src/test/regress/results/cbdb_parallel.out	2024-12-11 00:40:32.628103091 -0800
@@ -536,11 +536,8 @@
 GP_IGNORE:(20 rows)

 select count(*) from ao1, ao2 where ao1.x = ao2.x;
-  count
----------
- 1080000
-(1 row)
-
+ERROR:  could not resize shared memory segment "/PostgreSQL.3871811390" to 8388608 bytes: No space left on device
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 commit;
 --
 -- test parallel with indices
@@ -1563,45 +1560,13 @@
 GP_IGNORE:(30 rows)

 select * from rt1  join (select count(*) as c, sum(t1.a) as a  from t1 join t2 using(a)) t3 on t3.c = rt1.a;
- a | b | c | a
----+---+---+---
-(0 rows)
-
+ERROR:  could not resize shared memory segment "/PostgreSQL.288487486" to 4194304 bytes: No space left on device
 set local enable_parallel = off;
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 explain(costs off, locus) select * from rt1  join (select count(*) as c, sum(t1.a) as a  from t1 join t2 using(a)) t3 on t3.c = rt1.a;
-QUERY PLAN
-___________
- Hash Join
-   Locus: Entry
-   Hash Cond: (rt1.a = (count(*)))
-   ->  Gather Motion 1:1  (slice1; segments: 1)
-         Locus: Entry
-         ->  Seq Scan on rt1
-               Locus: SegmentGeneral
-   ->  Hash
-         Locus: Entry
-         ->  Finalize Aggregate
-               Locus: Entry
-               ->  Gather Motion 3:1  (slice2; segments: 3)
-                     Locus: Entry
-                     ->  Partial Aggregate
-                           Locus: Hashed
-                           ->  Hash Join
-                                 Locus: Hashed
-                                 Hash Cond: (t1.a = t2.a)
-                                 ->  Seq Scan on t1
-                                       Locus: Hashed
-                                 ->  Hash
-                                       Locus: Hashed
-                                       ->  Seq Scan on t2
-                                             Locus: Hashed
-GP_IGNORE:(25 rows)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 select * from rt1  join (select count(*) as c, sum(t1.a) as a  from t1 join t2 using(a)) t3 on t3.c = rt1.a;
- a | b | c | a
----+---+---+---
-(0 rows)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 abort;
 --
 -- Test final join path's parallel_workers should be same with join_locus whose
@@ -2247,44 +2212,24 @@
 GP_IGNORE:(10 rows)

 select sum(t1.c1) from t1 where c1 not in (select c2 from t2);
- sum
------
-   1
-(1 row)
-
+ERROR:  could not resize shared memory segment "/PostgreSQL.3126407118" to 8388608 bytes: No space left on device
 explain(costs off) select * from t1 where c1 not in (select c2 from t3_null);
-QUERY PLAN
-___________
- Gather Motion 6:1  (slice1; segments: 6)
-   ->  Parallel Hash Left Anti Semi (Not-In) Join
-         Hash Cond: (t1.c1 = t3_null.c2)
-         ->  Parallel Seq Scan on t1
-         ->  Parallel Hash
-               ->  Broadcast Workers Motion 6:6  (slice2; segments: 6)
-                     ->  Parallel Seq Scan on t3_null
-GP_IGNORE:(8 rows)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 select * from t1 where c1 not in (select c2 from t3_null);
- c1 | c2
-----+----
-(0 rows)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 -- non-parallel results.
 set local enable_parallel = off;
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 select sum(t1.c1) from t1 where c1 not in (select c2 from t2);
- sum
------
-   1
-(1 row)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 select * from t1 where c1 not in (select c2 from t3_null);
- c1 | c2
-----+----
-(0 rows)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 drop table t1;
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 drop table t2;
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 drop table t3_null;
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 end;
 --
 -- End of Test Parallel-aware Hash Left Anti Semi (Not-In) Join.
@@ -2545,19 +2490,15 @@
 begin;
 set local max_parallel_workers_per_gather = 8;
 select * from refresh_compare(true, false);
- parallel_is_better
---------------------
- t
-(1 row)
-
+ERROR:  could not resize shared memory segment "/PostgreSQL.1207446260" to 8388608 bytes: No space left on device
+CONTEXT:  SQL statement "refresh materialized view matv"
+PL/pgSQL function refresh_compare(boolean,boolean) line 30 at SQL statement
 select * from refresh_compare(false, false);
- parallel_is_better
---------------------
- t
-(1 row)
-
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 drop function refresh_compare;
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 reset max_parallel_workers_per_gather;
+ERROR:  current transaction is aborted, commands ignored until end of transaction block
 end;
 --
 -- Parallel Create AO/AOCO Table AS

@tuhaihe (Member, Author) commented Dec 11, 2024

  1. Disk Space Issue

Some tests failed with the error message: ‘No space left on device’. This issue seems unusual, as the test computer’s hard drive has 600GB of space, which should be sufficient. The disk usage details are as follows:

[gpadmin@cdw cloudberry]$ df -H
Filesystem      Size  Used Avail Use% Mounted on
overlay         529G   41G  467G   8% /
tmpfs            68M     0   68M   0% /dev
shm              68M  1.9M   66M   3% /dev/shm
/dev/vda1       529G   41G  467G   8% /etc/hosts
tmpfs            68G     0   68G   0% /proc/acpi
tmpfs            68G     0   68G   0% /proc/scsi
tmpfs            68G     0   68G   0% /sys/firmware
  2. Encoding Errors

Some tests returned errors related to the latin1 encoding. These files were not modified in this pull request (PR).

  3. Other Errors

For other errors, I will work with the engineering team to determine the appropriate fixes. If the errors are not introduced by this PR, I will create an issue to document them for future resolution.

@yjhnupt commented Dec 11, 2024

+ERROR: could not resize shared memory segment "/PostgreSQL.3126407118" to 8388608 bytes: No space left on device

You need to enlarge the shm size here.
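A quick way to confirm this inside the container (a sketch; Docker's default /dev/shm is 64 MB unless --shm-size is passed):

df -h /dev/shm                          # the df output above shows shm at only 68M
# if it is that small, restart the container with a larger value, e.g. (image name illustrative):
# docker run -it --shm-size=2gb <image>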

@tuhaihe (Member, Author) commented Dec 11, 2024

> You need to enlarge the shm size here.

Thanks @yjhnupt for your suggestion. I now use this command to run the build Docker image:

docker run -it --rm -h cdw --shm-size=2gb apache/incubator-cloudberry:cbdb-build-rocky9-latest

Now, after running make installcheck, it returns:

========================
 3 of 659 tests failed.
========================
diff -I HINT: -I CONTEXT: -I GP_IGNORE: -U3 /home/gpadmin/cloudberry/src/test/regress/expected/explain_optimizer.out /home/gpadmin/cloudberry/src/test/regress/results/explain.out
--- /home/gpadmin/cloudberry/src/test/regress/expected/explain_optimizer.out	2024-12-11 02:06:36.538896201 -0800
+++ /home/gpadmin/cloudberry/src/test/regress/results/explain.out	2024-12-11 02:06:36.562896655 -0800
@@ -395,7 +395,6 @@
          },                                                 +
          "Settings": {                                      +
              "Optimizer": "Pivotal Optimizer (GPORCA)",     +
-             "optimizer": "on",                             +
              "enable_parallel": "off",                      +
              "parallel_setup_cost": "0",                    +
              "parallel_tuple_cost": "0",                    +
diff -I HINT: -I CONTEXT: -I GP_IGNORE: -U3 /home/gpadmin/cloudberry/src/test/regress/expected/gpcopy_encoding.out /home/gpadmin/cloudberry/src/test/regress/results/gpcopy_encoding.out
--- /home/gpadmin/cloudberry/src/test/regress/expected/gpcopy_encoding.out	2024-12-11 02:12:57.455824245 -0800
+++ /home/gpadmin/cloudberry/src/test/regress/results/gpcopy_encoding.out	2024-12-11 02:12:57.457824282 -0800
@@ -21,7 +21,7 @@
 copy enctest to '/tmp/enctest_utf_to_latin1-1' encoding 'latin1';
 set client_encoding='latin1';
 copy enctest to stdout;
-�
+�
 copy enctest to '/tmp/enctest_utf_to_latin1-2';
 -- Connect to 'latin1' database, and load back the files we just created.
 -- This is to check that they were created correctly, and that the ENCODING
diff -I HINT: -I CONTEXT: -I GP_IGNORE: -U3 /home/gpadmin/cloudberry/src/test/regress/expected/dispatch_encoding.out /home/gpadmin/cloudberry/src/test/regress/results/dispatch_encoding.out
--- /home/gpadmin/cloudberry/src/test/regress/expected/dispatch_encoding.out	2024-12-11 02:15:40.607127519 -0800
+++ /home/gpadmin/cloudberry/src/test/regress/results/dispatch_encoding.out	2024-12-11 02:15:40.610127575 -0800
@@ -55,14 +55,16 @@

 select raise_error(t) from enctest;
 ERROR:  raise_error called on "funny char Ä"
+CONTEXT:  PL/pgSQL function raise_error(text) line 3 at RAISE
 -- now do it again with latin1
 set client_encoding='latin1';
 select raise_notice(t) from enctest;
-NOTICE:  raise_notice called on "funny char �"
+NOTICE:  raise_notice called on "funny char �"
  raise_notice
 --------------

 (1 row)

 select raise_error(t) from enctest;
-ERROR:  raise_error called on "funny char �"
+ERROR:  raise_error called on "funny char �"
+CONTEXT:  PL/pgSQL function raise_error(text) line 3 at RAISE

@edespino (Contributor) commented

Getting closer! When running manually, I would run with one of the following. The first diff explicitly expects the orca optimizer to be set on (regardless of it being on by default).

With Orca query optimizer on:

PGOPTIONS='-c optimizer=on' make installcheck

With Orca query optimizer off:

PGOPTIONS='-c optimizer=off' make installcheck

@yjhnupt commented Dec 12, 2024

NOTICE: raise_notice called on "funny char �"

For locale settings, refer to: https://www.cnblogs.com/williamjie/p/9303115.html
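If useful, the client-side encoding can also be checked and pinned directly (a sketch; the locale name is an assumption and should be one installed on the host):

locale                                  # show the current locale settings
export LANG=en_US.UTF-8                 # pick an installed UTF-8 locale
export PGCLIENTENCODING=UTF8            # force the psql client encoding regardless of locale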

@tuhaihe force-pushed the cleanup-mails-and-names1126 branch from 2cb9d44 to f70fb0f on December 12, 2024
@tuhaihe (Member, Author) commented Dec 12, 2024

> Getting closer! When running manually, I would run with one of the following. […]

Thanks @edespino! It works.

I ran the following two commands:

  1. No errors
$ PGCLIENTENCODING=UTF8 PGOPTIONS='-c optimizer=on' make -C src/test installcheck-cbdb-parallel

...

=======================
 All 259 tests passed.
=======================

make[1]: Leaving directory '/home/gpadmin/cloudberry/src/test/isolation2'
make: Leaving directory '/home/gpadmin/cloudberry/src/test'


  2. Still has the encoding errors for latin1
$ PGCLIENTENCODING=UTF8 PGOPTIONS='-c optimizer=on' make  installcheck

...

========================
 2 of 659 tests failed.
========================

[gpadmin@cdw cloudberry]$ cat /home/gpadmin/cloudberry/src/test/regress/regression.diffs
diff -I HINT: -I CONTEXT: -I GP_IGNORE: -U3 /home/gpadmin/cloudberry/src/test/regress/expected/gpcopy_encoding.out /home/gpadmin/cloudberry/src/test/regress/results/gpcopy_encoding.out
--- /home/gpadmin/cloudberry/src/test/regress/expected/gpcopy_encoding.out	2024-12-12 03:03:35.691378647 -0800
+++ /home/gpadmin/cloudberry/src/test/regress/results/gpcopy_encoding.out	2024-12-12 03:03:35.693378680 -0800
@@ -21,7 +21,7 @@
 copy enctest to '/tmp/enctest_utf_to_latin1-1' encoding 'latin1';
 set client_encoding='latin1';
 copy enctest to stdout;
-�
+�
 copy enctest to '/tmp/enctest_utf_to_latin1-2';
 -- Connect to 'latin1' database, and load back the files we just created.
 -- This is to check that they were created correctly, and that the ENCODING
diff -I HINT: -I CONTEXT: -I GP_IGNORE: -U3 /home/gpadmin/cloudberry/src/test/regress/expected/dispatch_encoding.out /home/gpadmin/cloudberry/src/test/regress/results/dispatch_encoding.out
--- /home/gpadmin/cloudberry/src/test/regress/expected/dispatch_encoding.out	2024-12-12 03:06:09.112219102 -0800
+++ /home/gpadmin/cloudberry/src/test/regress/results/dispatch_encoding.out	2024-12-12 03:06:09.114219140 -0800
@@ -55,14 +55,16 @@

 select raise_error(t) from enctest;
 ERROR:  raise_error called on "funny char Ä"
+CONTEXT:  PL/pgSQL function raise_error(text) line 3 at RAISE
 -- now do it again with latin1
 set client_encoding='latin1';
 select raise_notice(t) from enctest;
-NOTICE:  raise_notice called on "funny char �"
+NOTICE:  raise_notice called on "funny char �"
  raise_notice
 --------------

 (1 row)

 select raise_error(t) from enctest;
-ERROR:  raise_error called on "funny char �"
+ERROR:  raise_error called on "funny char �"
+CONTEXT:  PL/pgSQL function raise_error(text) line 3 at RAISE

For now, I can continue testing by setting up the correct encoding environment for Latin1. However, this issue doesn’t seem critical since our CI/CD pipeline has passed successfully.

Would it be possible to merge this PR now, or would you prefer that I ensure all tests pass locally before proceeding? Personally, I lean towards merging the PR now, as the failed tests are limited to my local environment settings. Otherwise, with new PRs being introduced, I would need to rebase multiple times to address potential failures caused by the latest commits.

Update: I found out how to fix them. See my commit 9cf2835. I will test it locally tomorrow!

@tuhaihe force-pushed the cleanup-mails-and-names1126 branch from 242ddbb to 6d973ed on December 13, 2024
@tuhaihe (Member, Author) commented Dec 14, 2024

Finally got it done (nearly)!

I reviewed all the changes file by file this week, found a few issues, and fixed them. Now the local tests based on my latest changes pass, with only one failure remaining.

As for that latin1 encoding issue, we can add PGCLIENTENCODING=UTF8 when running the test commands.

Screenshots (attached) for the three runs:

  • PGCLIENTENCODING=UTF8 PGOPTIONS='-c optimizer=on' make -C src/test installcheck-cbdb-parallel
  • PGCLIENTENCODING=UTF8 PGOPTIONS='-c optimizer=on' make installcheck
  • PGCLIENTENCODING=UTF8 PGOPTIONS='-c optimizer=off' make installcheck-world

As for the failed test, I haven't changed the file src/test/regress/expected/uaocs_compaction/stats.out, so I'm not sure what causes it.

Update: The above test failure no longer reproduces. I ran this command again on my branch and on the latest main branch, and a different test failed instead; see issue #781.

My tests

My testing was based on the latest build Docker image:

docker run -it --rm -h cdw --shm-size=30gb apache/incubator-cloudberry:cbdb-build-rocky9-latest

Then I pulled my branch:

git clone --branch cleanup-mails-and-names1126 https://github.com/tuhaihe/cloudberrydb.git ~/cloudberry

After that, I compiled and installed Cloudberry following this guide (https://github.com/edespino/cloudberry/blob/rocky9-dev-readme/deploy/build/README-rockylinux9.md).
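Roughly, the build steps from that guide boil down to the following (a condensed sketch; the configure options shown are illustrative, not the guide's full list):

./configure --prefix=/usr/local/cbdb    # the guide documents the full set of options and dependencies
make -j$(nproc)
make install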

Lastly, I ran the three tests mentioned above. FYI.

Plan

Except for the first two commits in this PR, most of the others are quite similar. I’m planning to squash them down into a single epic commit, and then we can start merging this PR.

@tuhaihe force-pushed the cleanup-mails-and-names1126 branch 2 times, most recently from 2e6af04 to 397e660 on December 16, 2024
@edespino (Contributor) commented

FYI: I removed the skip ci token in the PR body. This will allow the CI to run automatically on this PR. It was previously skipping.

@tuhaihe (Member, Author) commented Dec 17, 2024

> FYI: I removed the skip ci token in the PR body. […]

Thanks Ed! Now all the CI checks have passed. Do we have any plans to merge this PR?

@tuhaihe force-pushed the cleanup-mails-and-names1126 branch from 397e660 to 28f8726 on December 18, 2024
@tuhaihe requested a review from my-ship-it on December 18, 2024
This PR includes the following changes:

* Rename the project name from "Cloudberry Database" to "Apache
  Cloudberry"
* Update the website url to "cloudberry.apache.org"
* Update the contact email from "[email protected]" to
  "[email protected]" mailing list
Rename the Cloudberry Database to Apache Cloudberry to address the latest project name change.
This commit consolidates changes to align the project with ASF brand
standards, improve clarity, and ensure consistent naming and
references throughout the codebase.

Key Changes:

1. Rebranding

* Renamed all instances of `Cloudberry Database`, `CloudberryDB`, and
  similar terms to `Apache Cloudberry` or `Cloudberry` as appropriate.
* Applied consistent naming across:
  - `contrib/*`
  - `gpAux/*`
  - `gpMgmt/*`
  - `gpcontrib/*`
  - `src/*` (e.g., src/backend, src/bin, src/include, src/test, etc.)
  - `doc/*`

2. URL Updates

* Updated outdated GitHub repository links to the new Apache project
  location.

These changes are part of an ongoing effort to ensure ASF compliance.
@tuhaihe force-pushed the cleanup-mails-and-names1126 branch from 28f8726 to 920dfd8 on December 23, 2024