Discussion: Reducing connection overhead in pg_upgrade compat check phase


Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
When adding a check to pg_upgrade a while back I noticed in a profile that the
cluster compatibility check phase spends a lot of time in connectToServer.  Some
of this can be attributed to the data type checks, each of which runs serially,
connecting to each database in turn to run its check, and this seemed like a
place where we can do better.

The attached patch moves the checks from individual functions, each of which
loops over all databases, into a struct which is consumed by a single umbrella
check where all data type queries are executed against a database using the
same connection.  This way we can amortize the connectToServer overhead across
more accesses to the database.
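
For illustration, a minimal sketch of what such a struct-driven table could
look like (the member names and the sentinel convention are illustrative, not
necessarily what the patch uses):

    /* One entry per data type check; a NULL status terminates the array. */
    typedef struct
    {
        const char *status;          /* status line printed while checking */
        const char *report_filename; /* file collecting the violations */
        const char *base_query;      /* query returning the offending columns */
        const char *report_text;     /* error text used when matches are found */
    } DataTypesUsageChecks;

    static DataTypesUsageChecks data_types_usage_checks[] = {
        {
            "Checking for system-defined composite types in user tables",
            "tables_using_composite.txt",
            "SELECT ...",       /* the actual type-lookup query goes here */
            "Your installation contains system-defined composite types ..."
        },
        /* more checks follow */
        {NULL, NULL, NULL, NULL}
    };

The umbrella check can then connect once per database and run every entry's
query over that single connection before moving on to the next database.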

In the trivial case, a single database, I don't see any performance regression
compared to the current approach.  In a cluster with 100 (empty) databases
there is a ~15% reduction in the time to run a --check pass.  While it won't
move the earth in terms of wallclock time, consuming fewer resources on the
old cluster and making --check cheaper might be the bigger win.

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Nathan Bossart
On Fri, Feb 17, 2023 at 10:44:49PM +0100, Daniel Gustafsson wrote:
> In the trivial case, a single database, I don't see any performance regression
> compared to the current approach.  In a cluster with 100 (empty) databases
> there is a ~15% reduction in the time to run a --check pass.  While it won't
> move the earth in terms of wallclock time, consuming fewer resources on the
> old cluster and making --check cheaper might be the bigger win.

Nice!  This has actually been on my list of things to look into, so I
intend to help review the patch.  In any case, +1 for making pg_upgrade
faster.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: Reducing connection overhead in pg_upgrade compat check phase

From: Nathan Bossart
On Fri, Feb 17, 2023 at 10:44:49PM +0100, Daniel Gustafsson wrote:
> When adding a check to pg_upgrade a while back I noticed in a profile that the
> cluster compatibility check phase spends a lot of time in connectToServer.  Some
> of this can be attributed to the data type checks, each of which runs serially,
> connecting to each database in turn to run its check, and this seemed like a
> place where we can do better.
> 
> The attached patch moves the checks from individual functions, each of which
> loops over all databases, into a struct which is consumed by a single umbrella
> check where all data type queries are executed against a database using the
> same connection.  This way we can amortize the connectToServer overhead across
> more accesses to the database.

This change consolidates all the data type checks, so instead of 7 separate
loops through all the databases, there is just one.  However, I wonder if
we are leaving too much on the table, as there are a number of other
functions that also loop over all the databases:

    * get_loadable_libraries
    * get_db_and_rel_infos
    * report_extension_updates
    * old_9_6_invalidate_hash_indexes
    * check_for_isn_and_int8_passing_mismatch
    * check_for_user_defined_postfix_ops
    * check_for_incompatible_polymorphics
    * check_for_tables_with_oids
    * check_for_user_defined_encoding_conversions

I suspect consolidating get_loadable_libraries, get_db_and_rel_infos, and
report_extension_updates would be prohibitively complicated and not worth
the effort.  old_9_6_invalidate_hash_indexes is only needed for unsupported
versions, so that might not be worth consolidating.
check_for_isn_and_int8_passing_mismatch only loops through all databases
when float8_pass_by_value in the control data differs, so that might not be
worth it, either.  The last 4 are for supported versions and, from a very
quick glance, seem possible to consolidate.  That would bring us to a total
of 11 separate loops that we could consolidate into one.  However, the data
type checks seem to follow a nice pattern, so perhaps this is easier said
than done.

IIUC with the patch, pg_upgrade will immediately fail as soon as a single
check in a database fails.  I believe this differs from the current
behavior where all matches for a given check in the cluster are logged
before failing.  I wonder if it'd be better to perform all of the data type
checks in all databases before failing so that all of the violations are
reported.  Else, users would have to run pg_upgrade, fix a violation, run
pg_upgrade again, fix another one, etc.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: Reducing connection overhead in pg_upgrade compat check phase

From: Justin Pryzby
On Fri, Feb 17, 2023 at 10:44:49PM +0100, Daniel Gustafsson wrote:
> When adding a check to pg_upgrade a while back I noticed in a profile that the
> cluster compatibility check phase spends a lot of time in connectToServer.  Some
> of this can be attributed to the data type checks, each of which runs serially,
> connecting to each database in turn to run its check, and this seemed like a
> place where we can do better.

 src/bin/pg_upgrade/check.c      | 371 +++++++++++++++---------------
 src/bin/pg_upgrade/pg_upgrade.h |  28 ++-
 src/bin/pg_upgrade/version.c    | 394 ++++++++++++++------------------
 3 files changed, 373 insertions(+), 420 deletions(-)

And saves 50 LOC.

The stated goal of the patch is to reduce overhead.  But it only updates
a couple functions, and there are (I think) nine functions which loop
around all DBs.  If you want to reduce the overhead, I assumed you'd
cache the DB connection for all tests ... but then I tried it, and first
ran into max_connections, and then ran into EMFILE.  Which is probably
enough to kill my idea.

But maybe the existing patch could be phrased in terms of moving all the
per-db checks from functions to data structures (which has its own
merits).  Then, there could be a single loop around DBs which executes
all the functions.  The test runner can also test the major version and
handle the textfile output.
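
Roughly, such a data-driven runner could look like this (a sketch against
pg_upgrade's existing types; the checks array and its applies/run members are
invented for illustration):

    for (int dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
    {
        DbInfo     *active_db = &old_cluster.dbarr.dbs[dbnum];
        PGconn     *conn = connectToServer(&old_cluster, active_db->db_name);

        /* checks[] is a hypothetical table; applies/run are illustrative */
        for (int i = 0; i < n_checks; i++)
        {
            /* skip checks which don't apply to this major version */
            if (checks[i].applies(&old_cluster))
                checks[i].run(conn, active_db);
        }

        PQfinish(conn);
    }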

However (as Nathan mentioned) what's currently done shows *all* the
problems of a given type - if there were 9 DBs with 99 relations with
OIDs, it'd show all of them at once.  It'd be a big step backwards to
only show problems for the first problematic DB.

But maybe that's another opportunity to do better.  Right now, if I
run pg_upgrade, it'll show all the failing objects, but only for the first
check that fails.  After fixing them, it might tell me about a 2nd
failing check.  I've never run into multiple types of failing checks,
but I do know that needing to re-run pg_upgrade is annoying (see
3c0471b5f).

You talked about improving the two data types tests, which aren't
conditional on a maximum server version.  The minimal improvement you'll
get is when only those two checks are run (like on a developer upgrade
v16=>v16).  But when more checks are run during a production upgrade
like v13=>v16, you'd see a larger gain.

I fooled around with that idea in the attached patch.  I have no
particular interest in optimizing --check for large numbers of DBs, so
I'm not planning to pursue it further, but maybe it'll be useful to you.

About your original patch:

+static DataTypesUsageChecks data_types_usage_checks[] = {
+    /*
+     * Look for composite types that were made during initdb *or* belong to
+     * information_schema; that's important in case information_schema was
+     * dropped and reloaded.
+     *
+     * The cutoff OID here should match the source cluster's value of
+     * FirstNormalObjectId.  We hardcode it rather than using that C #define
+     * because, if that #define is ever changed, our own version's value is
+     * NOT what to use.  Eventually we may need a test on the source cluster's
+     * version to select the correct value.
+     */
+    {"Checking for system-defined composite types in user tables",
+     "tables_using_composite.txt",

I think this might be cleaner using "named initializer" struct
initialization, rather than a comma-separated list (whatever that's
called).
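
That is, in C99 terms, designated initializers; for example (assuming the
member names status and report_filename):

    /* comma-separated (positional) initialization, order-dependent: */
    {"Checking for system-defined composite types in user tables",
     "tables_using_composite.txt"},

    /* designated ("named") initializers, order-independent: */
    {.status = "Checking for system-defined composite types in user tables",
     .report_filename = "tables_using_composite.txt"},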

Maybe instead of putting all checks into an array of
DataTypesUsageChecks, they should be defined in separate arrays, and
then an array defined with the list of checks?

+             * If the check failed, terminate the umbrella status and print
+             * the specific status line of the check to indicate which it was
+             * before terminating with the detailed error message.
+             */
+            if (found)
+            {
+                PQfinish(conn);
 
-    base_query = psprintf("SELECT '%s'::pg_catalog.regtype AS oid",
-                          type_name);
+                report_status(PG_REPORT, "failed");
+                prep_status("%s", cur_check->status);
+                pg_log(PG_REPORT, "fatal");
+                pg_fatal("%s    %s", cur_check->fatal_check, output_path);
+            }

I think this loses the message localization/translation that currently
exists.  It could be written like prep_status(cur_check->status) or
prep_status("%s", _(cur_check->status)).  And _(cur_check->fatal_check). 

-- 
Justin

Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 18 Feb 2023, at 06:46, Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> On Fri, Feb 17, 2023 at 10:44:49PM +0100, Daniel Gustafsson wrote:
>> When adding a check to pg_upgrade a while back I noticed in a profile that the
>> cluster compatibility check phase spends a lot of time in connectToServer.  Some
>> of this can be attributed to the data type checks, each of which runs serially,
>> connecting to each database in turn to run its check, and this seemed like a
>> place where we can do better.
>>
>> The attached patch moves the checks from individual functions, each of which
>> loops over all databases, into a struct which is consumed by a single umbrella
>> check where all data type queries are executed against a database using the
>> same connection.  This way we can amortize the connectToServer overhead across
>> more accesses to the database.
>
> This change consolidates all the data type checks, so instead of 7 separate
> loops through all the databases, there is just one.  However, I wonder if
> we are leaving too much on the table, as there are a number of other
> functions that also loop over all the databases:
>
>     * get_loadable_libraries
>     * get_db_and_rel_infos
>     * report_extension_updates
>     * old_9_6_invalidate_hash_indexes
>     * check_for_isn_and_int8_passing_mismatch
>     * check_for_user_defined_postfix_ops
>     * check_for_incompatible_polymorphics
>     * check_for_tables_with_oids
>     * check_for_user_defined_encoding_conversions
>
> I suspect consolidating get_loadable_libraries, get_db_and_rel_infos, and
> report_extension_updates would be prohibitively complicated and not worth
> the effort.

Agreed, the added complexity of the code seems hard to justify unless there are
actual reports of problems.

I did experiment with reducing the allocations of namespaces and tablespaces
with a hashtable, see the attached WIP diff.  There is no measurable difference
in speed, but a synthetic benchmark where allocations cannot be reused shows
reduced memory pressure.  This might help on very large schemas, but it's not
worth pursuing IMO.

> old_9_6_invalidate_hash_indexes is only needed for unsupported
> versions, so that might not be worth consolidating.
> check_for_isn_and_int8_passing_mismatch only loops through all databases
> when float8_pass_by_value in the control data differs, so that might not be
> worth it, either.

Yeah, these two aren't all that interesting to spend cycles on IMO.

> The last 4 are for supported versions and, from a very
> quick glance, seem possible to consolidate.  That would bring us to a total
> of 11 separate loops that we could consolidate into one.  However, the data
> type checks seem to follow a nice pattern, so perhaps this is easier said
> than done.

There is that; refactoring the data type checks leads to the removal of
duplicated code and a slight performance improvement.  Refactoring the other
checks to reduce overhead would be an interesting thing to look at, but this
point in the v16 cycle might not be ideal for that.

> IIUC with the patch, pg_upgrade will immediately fail as soon as a single
> check in a database fails.  I believe this differs from the current
> behavior where all matches for a given check in the cluster are logged
> before failing.

Yeah, that's wrong. Fixed.

> I wonder if it'd be better to perform all of the data type
> checks in all databases before failing so that all of the violations are
> reported.  Else, users would have to run pg_upgrade, fix a violation, run
> pg_upgrade again, fix another one, etc.

I think that's better, and have changed the patch to do it that way.

One change this brings is that check.c contains version-specific checks in the
struct.  Previously these were mostly contained in version.c (some, like the
9.4 jsonb check, were in check.c) which maintained some level of separation.
Splitting the array init is of course one option but it also seems a tad messy.
Not sure what's best, but for now I've documented it in the array comment at
least.

This version also moves the main data types check to check.c, renames some
members in the struct, moves to named initializers (as commented on by Justin
downthread), and adds some more polish here and there.

--
Daniel Gustafsson



Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Nathan Bossart
On Wed, Feb 22, 2023 at 10:37:35AM +0100, Daniel Gustafsson wrote:
>> On 18 Feb 2023, at 06:46, Nathan Bossart <nathandbossart@gmail.com> wrote:
>> The last 4 are for supported versions and, from a very
>> quick glance, seem possible to consolidate.  That would bring us to a total
>> of 11 separate loops that we could consolidate into one.  However, the data
>> type checks seem to follow a nice pattern, so perhaps this is easier said
>> than done.
> 
> There is that; refactoring the data type checks leads to the removal of
> duplicated code and a slight performance improvement.  Refactoring the other
> checks to reduce overhead would be an interesting thing to look at, but this
> point in the v16 cycle might not be ideal for that.

Makes sense.

>> I wonder if it'd be better to perform all of the data type
>> checks in all databases before failing so that all of the violations are
>> reported.  Else, users would have to run pg_upgrade, fix a violation, run
>> pg_upgrade again, fix another one, etc.
> 
> I think that's better, and have changed the patch to do it that way.

Thanks.  This seems to work as intended.  One thing I noticed is that the
"failed check" log is only printed once, even if multiple data type checks
failed.  I believe this is because this message uses PG_STATUS.  If I
change it to PG_REPORT, all of the "failed check" messages appear.  TBH I'm
not sure we need this message at all since a more detailed explanation will
be printed afterwards.  If we do keep it around, I think it should be
indented so that it looks more like this:

    Checking for data type usage                                checking all databases

        failed check: incompatible aclitem data type in user tables
        failed check: reg* data types in user tables

> One change this brings is that check.c contains version-specific checks in the
> struct.  Previously these were mostly contained in version.c (some, like the
> 9.4 jsonb check, were in check.c) which maintained some level of separation.
> Splitting the array init is of course one option but it also seems a tad messy.
> Not sure what's best, but for now I've documented it in the array comment at
> least.

Hm.  We could move check_for_aclitem_data_type_usage() and
check_for_jsonb_9_4_usage() to version.c since those are only used for
determining whether the check applies now.  Otherwise, IMO things are in
roughly the right place.  I don't think it's necessary to split the array.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 22 Feb 2023, at 20:20, Nathan Bossart <nathandbossart@gmail.com> wrote:

> One thing I noticed is that the
> "failed check" log is only printed once, even if multiple data type checks
> failed.  I believe this is because this message uses PG_STATUS.  If I
> change it to PG_REPORT, all of the "failed check" messages appear.  TBH I'm
> not sure we need this message at all since a more detailed explanation will
> be printed afterwards.  If we do keep it around, I think it should be
> indented so that it looks more like this:
>
>     Checking for data type usage                                checking all databases
>
>         failed check: incompatible aclitem data type in user tables
>         failed check: reg* data types in user tables

That's a good point, that's better.  I think it makes sense to keep it around.

>> One change this brings is that check.c contains version-specific checks in the
>> struct.  Previously these were mostly contained in version.c (some, like the
>> 9.4 jsonb check, were in check.c) which maintained some level of separation.
>> Splitting the array init is of course one option but it also seems a tad messy.
>> Not sure what's best, but for now I've documented it in the array comment at
>> least.
>
> Hm.  We could move check_for_aclitem_data_type_usage() and
> check_for_jsonb_9_4_usage() to version.c since those are only used for
> determining whether the check applies now.  Otherwise, IMO things are in
> roughly the right place.  I don't think it's necessary to split the array.

Will do, thanks.

--
Daniel Gustafsson




Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 23 Feb 2023, at 15:12, Daniel Gustafsson <daniel@yesql.se> wrote:
>
>> On 22 Feb 2023, at 20:20, Nathan Bossart <nathandbossart@gmail.com> wrote:
>
>> One thing I noticed is that the
>> "failed check" log is only printed once, even if multiple data type checks
>> failed.  I believe this is because this message uses PG_STATUS.  If I
>> change it to PG_REPORT, all of the "failed check" messages appear.  TBH I'm
>> not sure we need this message at all since a more detailed explanation will
>> be printed afterwards.  If we do keep it around, I think it should be
>> indented so that it looks more like this:
>>
>>     Checking for data type usage                                checking all databases
>>
>>         failed check: incompatible aclitem data type in user tables
>>         failed check: reg* data types in user tables
>
> That's a good point, that's better.  I think it makes sense to keep it around.
>
>>> One change this brings is that check.c contains version-specific checks in the
>>> struct.  Previously these were mostly contained in version.c (some, like the
>>> 9.4 jsonb check, were in check.c) which maintained some level of separation.
>>> Splitting the array init is of course one option but it also seems a tad messy.
>>> Not sure what's best, but for now I've documented it in the array comment at
>>> least.
>>
>> Hm.  We could move check_for_aclitem_data_type_usage() and
>> check_for_jsonb_9_4_usage() to version.c since those are only used for
>> determining whether the check applies now.  Otherwise, IMO things are in
>> roughly the right place.  I don't think it's necessary to split the array.
>
> Will do, thanks.

The attached v3 is a rebase to handle conflicts and with the above comments
addressed.

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Nathan Bossart
On Mon, Mar 13, 2023 at 03:10:58PM +0100, Daniel Gustafsson wrote:
> The attached v3 is a rebase to handle conflicts and with the above comments
> addressed.

Thanks for the new version of the patch.

I noticed that git-am complained when I applied the patch:

    Applying: pg_upgrade: run all data type checks per connection
    .git/rebase-apply/patch:1023: new blank line at EOF.
    +
    warning: 1 line adds whitespace errors.

+                for (int rowno = 0; rowno < ntups; rowno++)
+                {
+                    found = true;

It looks like "found" is set unconditionally a few lines above, so I think
this is redundant.

Also, I think it would be worth breaking check_for_data_types_usage() into
a few separate functions (or doing some other similar refactoring) to
improve readability.  At this point, the function is quite lengthy, and I
count 6 levels of indentation at some lines.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: Reducing connection overhead in pg_upgrade compat check phase

From: Nathan Bossart
I put together a rebased version of the patch for cfbot.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 4 Jul 2023, at 21:08, Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> I put together a rebased version of the patch for cfbot.

Thanks for doing that, much appreciated!  I was busy looking at other people's
patches and hadn't gotten to my own yet =)

> On 13 Mar 2023, at 19:21, Nathan Bossart <nathandbossart@gmail.com> wrote:

> I noticed that git-am complained when I applied the patch:
>
>    Applying: pg_upgrade: run all data type checks per connection
>    .git/rebase-apply/patch:1023: new blank line at EOF.
>    +
>    warning: 1 line adds whitespace errors.

Fixed.

> +                for (int rowno = 0; rowno < ntups; rowno++)
> +                {
> +                    found = true;
>
> It looks like "found" is set unconditionally a few lines above, so I think
> this is redundant.

Correct, this must've been a leftover from an earlier version of the code.
Removed.

> Also, I think it would be worth breaking check_for_data_types_usage() into
> a few separate functions (or doing some other similar refactoring) to
> improve readability.  At this point, the function is quite lengthy, and I
> count 6 levels of indentation at some lines.


It is pretty big for sure, but it's also IMHO not terribly complicated as
it's not really performing any hard-to-follow logic.

I have no issues refactoring it, but trying my hand at it I was only making
(what I consider) less readable code by having to jump around, so I consider
it a failure.  If you have any suggestions, I would be more than happy to
review and incorporate those though.

Attached is a v5 with the above fixes and a pgindenting to fix up a few runaway
comments and indentations.

--
Daniel Gustafsson




Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Nathan Bossart
Thanks for the new patch.

On Thu, Jul 06, 2023 at 05:58:33PM +0200, Daniel Gustafsson wrote:
>> On 4 Jul 2023, at 21:08, Nathan Bossart <nathandbossart@gmail.com> wrote:
>> Also, I think it would be worth breaking check_for_data_types_usage() into
>> a few separate functions (or doing some other similar refactoring) to
>> improve readability.  At this point, the function is quite lengthy, and I
>> count 6 levels of indentation at some lines.
> 
> 
> It is pretty big for sure, but it's also IMHO not terribly complicated as
> it's not really performing any hard-to-follow logic.
> 
> I have no issues refactoring it, but trying my hand at it I was only making
> (what I consider) less readable code by having to jump around, so I consider
> it a failure.  If you have any suggestions, I would be more than happy to
> review and incorporate those though.

I don't have a strong opinion about this.

+                for (int rowno = 0; rowno < ntups; rowno++)
+                {
+                    if (script == NULL && (script = fopen_priv(output_path, "a")) == NULL)
+                        pg_fatal("could not open file \"%s\": %s",
+                                 output_path,
+                                 strerror(errno));
+                    if (!db_used)
+                    {
+                        fprintf(script, "In database: %s\n", active_db->db_name);
+                        db_used = true;
+                    }

Since "script" will be NULL and "db_used" will be false in the first
iteration of the loop, couldn't we move this stuff to before the loop?

+                    fprintf(script, " %s.%s.%s\n",
+                            PQgetvalue(res, rowno, i_nspname),
+                            PQgetvalue(res, rowno, i_relname),
+                            PQgetvalue(res, rowno, i_attname));

nitpick: I think the current code has two spaces at the beginning of this
format string.  Did you mean to remove one of them?

+                if (script)
+                {
+                    fclose(script);
+                    script = NULL;
+                }

Won't "script" always be initialized here?  If I'm following this code
correctly, I think everything except the fclose() can be removed.

+            cur_check++;

I think this is unnecessary since we assign "cur_check" at the beginning of
every loop iteration.  I see two of these.

+static int    n_data_types_usage_checks = 7;

Can we determine this programmatically so that folks don't need to remember
to update it?
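
One common option is to compute the count from the array itself; PostgreSQL's
c.h already provides a lengthof() macro for that.  A sketch (note that with a
sentinel-terminated array the sentinel would be included in the count, in
which case a runtime counting loop is the alternative):

    /* from c.h: number of elements in a statically sized array */
    #define lengthof(array) (sizeof (array) / sizeof ((array)[0]))

    static int n_data_types_usage_checks = lengthof(data_types_usage_checks);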

+    /* Prepare an array to store the results of checks in */
+    results = pg_malloc(sizeof(bool) * n_data_types_usage_checks);
+    memset(results, true, sizeof(*results));

IMHO it's a little strange that this is initialized to all "true", only
because I think most other Postgres code does the opposite.

+bool
+check_for_aclitem_data_type_usage(ClusterInfo *cluster)

Do you think we should rename these functions to something like
"should_check_for_*"?  They don't actually do the check, they just tell you
whether you should based on the version.  In fact, I wonder if we could
just add the versions directly to data_types_usage_checks so that we don't
need the separate hook functions.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 8 Jul 2023, at 23:43, Nathan Bossart <nathandbossart@gmail.com> wrote:

Thanks for reviewing!

> Since "script" will be NULL and "db_used" will be false in the first
> iteration of the loop, couldn't we move this stuff to before the loop?

We could.  It's done this way to match how all the other checks are performing
the inner loop for consistency.  I think being consistent is better than micro
optimizing in non-hot codepaths even though it adds some redundancy.

> nitpick: І think the current code has two spaces at the beginning of this
> format string.  Did you mean to remove one of them?

Nice catch, I did not. Fixed.

> Won't "script" always be initialized here?  If I'm following this code
> correctly, I think everything except the fclose() can be removed.

You are right that this check is superfluous.  This is again an artifact of
modelling the code around how the other checks work for consistency.  At least
I think that's a good characteristic of the code.

> I think this is unnecessary since we assign "cur_check" at the beginning of
> every loop iteration.  I see two of these.

Right, this is a pointless leftover from a previous version which used a
while() loop, and I had missed removing them.  Fixed.

> +static int    n_data_types_usage_checks = 7;
>
> Can we determine this programmatically so that folks don't need to remember
> to update it?

Fair point, I've added a counter loop to the beginning of the check function to
calculate it.

> IMHO it's a little strange that this is initialized to all "true", only
> because I think most other Postgres code does the opposite.

Agreed, but it made for a less contrived codepath in knowing when an error has
been seen already, to avoid duplicate error output, so I think it's worth it.

> Do you think we should rename these functions to something like
> "should_check_for_*"?  They don't actually do the check, they just tell you
> whether you should based on the version.

I've been pondering that too, and did a rename now along with moving them all
to a single place as well as changing the comments to make it clearer.

> In fact, I wonder if we could just add the versions directly to
> data_types_usage_checks so that we don't need the separate hook functions.

We could, but it would be sort of contrived I think since some check <= and
some == while some check the catversion as well (and new ones may have other
variants).  I think this is the least paint-ourselves-into-a-corner version;
if we feel it's needlessly complicated and no other variants are added we can
revisit this.

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Nathan Bossart
On Mon, Jul 10, 2023 at 04:43:23PM +0200, Daniel Gustafsson wrote:
>> On 8 Jul 2023, at 23:43, Nathan Bossart <nathandbossart@gmail.com> wrote:
>> Since "script" will be NULL and "db_used" will be false in the first
>> iteration of the loop, couldn't we move this stuff to before the loop?
> 
> We could.  It's done this way to match how all the other checks are performing
> the inner loop for consistency.  I think being consistent is better than micro
> optimizing in non-hot codepaths even though it adds some redundancy.
>
> [ ... ] 
> 
>> Won't "script" always be initialized here?  If I'm following this code
>> correctly, I think everything except the fclose() can be removed.
> 
> You are right that this check is superfluous.  This is again an artifact of
> modelling the code around how the other checks work for consistency.  At least
> I think that's a good characteristic of the code.

I can't say I agree with this, but I'm not going to hold up the patch over
it.  FWIW I was looking at this more from a code simplification/readability
standpoint.

>> +static int    n_data_types_usage_checks = 7;
>> 
>> Can we determine this programmatically so that folks don't need to remember
>> to update it?
> 
> Fair point, I've added a counter loop to the beginning of the check function to
> calculate it.

+    /* Gather number of checks to perform */
+    while (tmp->status != NULL)
+        n_data_types_usage_checks++;

I think we need to tmp++ somewhere here.
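
As written, tmp never advances so the loop spins forever; presumably something
like:

    while (tmp->status != NULL)
    {
        n_data_types_usage_checks++;
        tmp++;
    }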

>> In fact, I wonder if we could just add the versions directly to
>> data_types_usage_checks so that we don't need the separate hook functions.
> 
>> We could, but it would be sort of contrived I think since some check <= and
>> some == while some check the catversion as well (and new ones may have other
>> variants).  I think this is the least paint-ourselves-into-a-corner version;
>> if we feel it's needlessly complicated and no other variants are added we can
>> revisit this.

Makes sense.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 11 Jul 2023, at 01:09, Nathan Bossart <nathandbossart@gmail.com> wrote:
> On Mon, Jul 10, 2023 at 04:43:23PM +0200, Daniel Gustafsson wrote:

>>> +static int    n_data_types_usage_checks = 7;
>>>
>>> Can we determine this programmatically so that folks don't need to remember
>>> to update it?
>>
>> Fair point, I've added a counter loop to the beginning of the check function to
>> calculate it.
>
> +    /* Gather number of checks to perform */
> +    while (tmp->status != NULL)
> +        n_data_types_usage_checks++;
>
> I think we need to tmp++ somewhere here.

Yuk, yes, will fix when caffeinated. Thanks.

--
Daniel Gustafsson




Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 11 Jul 2023, at 01:26, Daniel Gustafsson <daniel@yesql.se> wrote:
>> On 11 Jul 2023, at 01:09, Nathan Bossart <nathandbossart@gmail.com> wrote:

>> I think we need to tmp++ somewhere here.
>
> Yuk, yes, will fix when caffeinated. Thanks.

I did have coffee before now, but only found time to actually address this now,
so here is a v7 with just that change and a fresh rebase.

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Nathan Bossart
On Wed, Jul 12, 2023 at 12:43:14AM +0200, Daniel Gustafsson wrote:
> I did have coffee before now, but only found time to actually address this now,
> so here is a v7 with just that change and a fresh rebase.

Thanks.  I think the patch is in decent shape.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 12 Jul 2023, at 01:36, Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> On Wed, Jul 12, 2023 at 12:43:14AM +0200, Daniel Gustafsson wrote:
>> I did have coffee before now, but only found time to actually address this now,
>> so here is a v7 with just that change and a fresh rebase.
>
> Thanks.  I think the patch is in decent shape.

Due to ENOTENOUGHTIME it bitrotted a bit, so here is a v8 rebase which I really
hope to close in this CF.

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Peter Eisentraut
On 31.08.23 23:34, Daniel Gustafsson wrote:
>> On 12 Jul 2023, at 01:36, Nathan Bossart <nathandbossart@gmail.com> wrote:
>>
>> On Wed, Jul 12, 2023 at 12:43:14AM +0200, Daniel Gustafsson wrote:
>>> I did have coffee before now, but only found time to actually address this now,
>>> so here is a v7 with just that change and a fresh rebase.
>>
>> Thanks.  I think the patch is in decent shape.
> 
> Due to ENOTENOUGHTIME it bitrotted a bit, so here is a v8 rebase which I really
> hope to close in this CF.

The alignment of this output looks a bit funny:

...
Checking for prepared transactions                            ok
Checking for contrib/isn with bigint-passing mismatch         ok
Checking for data type usage                                  checking all databases
ok
Checking for presence of required libraries                   ok
Checking database user is the install user                    ok
...


Also, you should put gettext_noop() calls into the .status = "Checking ..."
assignments and arrange to call gettext() where they are used, to maintain
the translatability.
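
Concretely, something along these lines (a sketch, not the patch itself):

    /* in the checks table: mark the string for extraction, untranslated */
    .status = gettext_noop("Checking for system-defined composite types in user tables"),

    /* at the point of use: translate with gettext(), here via the _() macro */
    prep_status("%s", _(cur_check->status));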




Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 13 Sep 2023, at 16:12, Peter Eisentraut <peter@eisentraut.org> wrote:

> The alignment of this output looks a bit funny:
>
> ...
> Checking for prepared transactions                            ok
> Checking for contrib/isn with bigint-passing mismatch         ok
> Checking for data type usage                                  checking all databases
> ok
> Checking for presence of required libraries                   ok
> Checking database user is the install user                    ok
> ...

I was using the progress reporting to indicate that it hadn't stalled on slow
systems, but it's probably not all that important really.  Removed such
that "ok" aligns.

> Also, you should put gettext_noop() calls into the .status = "Checking ..."
> assignments and arrange to call gettext() where they are used, to maintain
> the translatability.

Ah, yes of course. Fixed.

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
Attached is a v10 rebase of this patch which had undergone significant bitrot
due to recent changes in the pg_upgrade check phase.  This brings the changes
into the proposed structure without changes to queries, and with no additional
changes to the proposed functionality.

Testing with a completely empty v11 cluster fresh from initdb as the old
cluster shows a significant speedup (averaged over multiple runs, adjusted for
outliers):

patched:  53.59ms (52.78ms, 52.49ms, 55.49ms)
master : 125.87ms (125.23 ms, 125.67ms, 126.67ms)

Using a similarly empty cluster from master as the old cluster shows a smaller
speedup, which is expected since many checks only run for older versions:

patched: 33.36ms (32.82ms, 33.78ms, 33.47ms)
master : 44.87ms (44.73ms, 44.90ms 44.99ms)

The latter case is still pretty interesting IMO since it can speed up testing
where every millisecond gained matters.

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: vignesh C
On Fri, 27 Oct 2023 at 18:50, Daniel Gustafsson <daniel@yesql.se> wrote:
>
> Attached is a v10 rebase of this patch which had undergone significant bitrot
> due to recent changes in the pg_upgrade check phase.  This brings the changes
> into the proposed structure without changes to queries, and with no additional
> changes to the proposed functionality.
>
> Testing with a completely empty v11 cluster fresh from initdb as the old
> cluster shows a significant speedup (averaged over multiple runs, adjusted for
> outliers):
>
> patched:  53.59ms (52.78ms, 52.49ms, 55.49ms)
> master : 125.87ms (125.23 ms, 125.67ms, 126.67ms)
>
> Using a similarly empty cluster from master as the old cluster shows a smaller
> speedup, which is expected since many checks only run for older versions:
>
> patched: 33.36ms (32.82ms, 33.78ms, 33.47ms)
> master : 44.87ms (44.73ms, 44.90ms 44.99ms)
>
> The latter case is still pretty interesting IMO since it can speed up testing
> where every millisecond gained matters.

CFBot shows that the patch does not apply anymore as in [1]:
=== Applying patches on top of PostgreSQL commit ID
55627ba2d334ce98e1f5916354c46472d414bda6 ===
=== applying patch
./v10-0001-pg_upgrade-run-all-data-type-checks-per-connecti.patch
patching file src/bin/pg_upgrade/check.c
Hunk #2 FAILED at 24.
...
1 out of 7 hunks FAILED -- saving rejects to file src/bin/pg_upgrade/check.c.rej

Please post an updated version for the same.

[1] - http://cfbot.cputube.org/patch_46_4200.log

Regards,
Vignesh



Re: Reducing connection overhead in pg_upgrade compat check phase

From: vignesh C
On Sat, 27 Jan 2024 at 09:10, vignesh C <vignesh21@gmail.com> wrote:
>
> On Fri, 27 Oct 2023 at 18:50, Daniel Gustafsson <daniel@yesql.se> wrote:
> >
> > Attached is a v10 rebase of this patch which had undergone significant bitrot
> > due to recent changes in the pg_upgrade check phase.  This brings the changes
> > into the proposed structure without changes to queries, and with no additional
> > changes to the proposed functionality.
> >
> > Testing with a completely empty v11 cluster fresh from initdb as the old
> > cluster shows a significant speedup (averaged over multiple runs, adjusted for
> > outliers):
> >
> > patched:  53.59ms (52.78ms, 52.49ms, 55.49ms)
> > master : 125.87ms (125.23 ms, 125.67ms, 126.67ms)
> >
> > Using a similarly empty cluster from master as the old cluster shows a smaller
> > speedup, which is expected since many checks only run for older versions:
> >
> > patched: 33.36ms (32.82ms, 33.78ms, 33.47ms)
> > master : 44.87ms (44.73ms, 44.90ms 44.99ms)
> >
> > The latter case is still pretty interesting IMO since it can speed up testing
> > where every millisecond gained matters.
>
> CFBot shows that the patch does not apply anymore as in [1]:
> === Applying patches on top of PostgreSQL commit ID
> 55627ba2d334ce98e1f5916354c46472d414bda6 ===
> === applying patch
> ./v10-0001-pg_upgrade-run-all-data-type-checks-per-connecti.patch
> patching file src/bin/pg_upgrade/check.c
> Hunk #2 FAILED at 24.
> ...
> 1 out of 7 hunks FAILED -- saving rejects to file src/bin/pg_upgrade/check.c.rej
>
> Please post an updated version for the same.

With no update to the thread and the patch still not applying I'm
marking this as returned with feedback.  Please feel free to resubmit
to the next CF when there is a new version of the patch.

Regards,
Vignesh



Re: Reducing connection overhead in pg_upgrade compat check phase

From: Nathan Bossart
On Fri, Feb 02, 2024 at 12:18:25AM +0530, vignesh C wrote:
> With no update to the thread and the patch still not applying I'm
> marking this as returned with feedback.  Please feel free to resubmit
> to the next CF when there is a new version of the patch.

IMHO this patch is worth trying to get into v17.  I'd be happy to take it
forward if Daniel does not intend to work on it.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 6 Feb 2024, at 17:32, Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> On Fri, Feb 02, 2024 at 12:18:25AM +0530, vignesh C wrote:
>> With no update to the thread and the patch still not applying I'm
>> marking this as returned with feedback.  Please feel free to resubmit
>> to the next CF when there is a new version of the patch.
>
> IMHO this patch is worth trying to get into v17.  I'd be happy to take it
> forward if Daniel does not intend to work on it.

I actually had the same thought yesterday and spent some time polishing and
rebasing it.  I'll post an updated rebase shortly with the hopes of getting it
committed this week.

--
Daniel Gustafsson




Re: Reducing connection overhead in pg_upgrade compat check phase

From: Nathan Bossart
On Tue, Feb 06, 2024 at 05:47:56PM +0100, Daniel Gustafsson wrote:
>> On 6 Feb 2024, at 17:32, Nathan Bossart <nathandbossart@gmail.com> wrote:
>> IMHO this patch is worth trying to get into v17.  I'd be happy to take it
>> forward if Daniel does not intend to work on it.
> 
> I actually had the same thought yesterday and spent some time polishing and
> rebasing it.  I'll post an updated rebase shortly with the hopes of getting it
> committed this week.

Oh, awesome.  Thanks!

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 6 Feb 2024, at 17:47, Daniel Gustafsson <daniel@yesql.se> wrote:
>
>> On 6 Feb 2024, at 17:32, Nathan Bossart <nathandbossart@gmail.com> wrote:
>>
>> On Fri, Feb 02, 2024 at 12:18:25AM +0530, vignesh C wrote:
>>> With no update to the thread and the patch still not applying I'm
>>> marking this as returned with feedback.  Please feel free to resubmit
>>> to the next CF when there is a new version of the patch.
>>
>> IMHO this patch is worth trying to get into v17.  I'd be happy to take it
>> forward if Daniel does not intend to work on it.
>
> I actually had the same thought yesterday and spent some time polishing and
> rebasing it.  I'll post an updated rebase shortly with the hopes of getting it
> committed this week.

Attached is a v11 rebased over HEAD with some very minor tweaks.  Unless there
are objections I plan to go ahead with this version this week.

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Peter Eisentraut
On 07.02.24 14:25, Daniel Gustafsson wrote:
>> On 6 Feb 2024, at 17:47, Daniel Gustafsson <daniel@yesql.se> wrote:
>>
>>> On 6 Feb 2024, at 17:32, Nathan Bossart <nathandbossart@gmail.com> wrote:
>>>
>>> On Fri, Feb 02, 2024 at 12:18:25AM +0530, vignesh C wrote:
>>>> With no update to the thread and the patch still not applying I'm
>>>> marking this as returned with feedback.  Please feel free to resubmit
>>>> to the next CF when there is a new version of the patch.
>>>
>>> IMHO this patch is worth trying to get into v17.  I'd be happy to take it
>>> forward if Daniel does not intend to work on it.
>>
>> I actually had the same thought yesterday and spent some time polishing and
>> rebasing it.  I'll post an updated rebase shortly with the hopes of getting it
>> committed this week.
> 
> Attached is a v11 rebased over HEAD with some very minor tweaks.  Unless there
> are objections I plan to go ahead with this version this week.

A few more quick comments:

I think the .report_text assignments also need a gettext_noop(), like 
the .status assignments.

The type DataTypesUsageChecks is only used in check.c, so doesn't need 
to be in pg_upgrade.h.


Idea for further improvement: Might be nice if the 
DataTypesUsageVersionCheck struct also included the applicable version 
information, so the additional checks in version.c would no longer be 
necessary.




Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 8 Feb 2024, at 11:55, Peter Eisentraut <peter@eisentraut.org> wrote:

> A few more quick comments:

Thanks for reviewing!

> I think the .report_text assignments also need a gettext_noop(), like the .status assignments.

Done in the attached.

> The type DataTypesUsageChecks is only used in check.c, so doesn't need to be in pg_upgrade.h.

Fixed.

> Idea for further improvement: Might be nice if the DataTypesUsageVersionCheck
> struct also included the applicable version information, so the additional
> checks in version.c would no longer be necessary.

I tried various variants of this when writing it, but since the checks aren't
just checking the version but also include catalog version checks it became
messy.  One option could perhaps be to include a version number for <=
comparison, and if set to zero a function pointer to a version check function
must be provided?  That would handle the simple cases in a single place without
messy logic, and leave the more convoluted checks with a special-case function.
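
Something along these lines, perhaps (a sketch; the member names are
illustrative):

    typedef bool (*DataTypesUsageVersionCheck) (ClusterInfo *cluster);

    typedef struct
    {
        const char *status;
        const char *report_filename;
        const char *base_query;
        const char *report_text;

        /*
         * Run the check when the old cluster's major version is <= this;
         * zero means consult version_hook instead, for the convoluted
         * cases (catversion comparisons and the like).
         */
        int         threshold_version;
        DataTypesUsageVersionCheck version_hook;
    } DataTypesUsageChecks;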

--
Daniel Gustafsson



Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 8 Feb 2024, at 15:16, Daniel Gustafsson <daniel@yesql.se> wrote:

> One option could perhaps be to include a version number for <= comparison, and
> if set to zero a function pointer to a version check function must be provided?
> That would handle the simple cases in a single place without messy logic, and
> leave the more convoluted checks with a special case function.

The attached is a draft version of this approach: each check can declare that
it runs for all versions, set a threshold version for which it runs, or define
a callback which implements a more complicated check.

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 9 Feb 2024, at 00:04, Daniel Gustafsson <daniel@yesql.se> wrote:
>
>> On 8 Feb 2024, at 15:16, Daniel Gustafsson <daniel@yesql.se> wrote:
>
>> One option could perhaps be to include a version number for <= comparison, and
>> if set to zero a function pointer to a version check function must be provided?
>> That would handle the simple cases in a single place without messy logic, and
>> leave the more convoluted checks with a special case function.
>
> The attached is a draft version of this approach: each check can declare that
> it runs for all versions, set a threshold version for which it runs, or define
> a callback which implements a more complicated check.

And again pgindented and with documentation on the struct members to make it
easy to add new checks.  A repetitive part of the report text was also moved to
a single place.

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
Attached is a fresh rebase with only minor cosmetic touch-ups which I would
like to go ahead with during this CF.

Peter: does this address the comments you had on translation and code
duplication?

--
Daniel Gustafsson


Attachments

Re: Reducing connection overhead in pg_upgrade compat check phase

From: Peter Eisentraut
On 18.03.24 13:11, Daniel Gustafsson wrote:
> Attached is a fresh rebase with only minor cosmetic touch-ups which I would
> like to go ahead with during this CF.
> 
> Peter: does this address the comments you had on translation and code
> duplication?

Yes, this looks good.




Re: Reducing connection overhead in pg_upgrade compat check phase

From: Daniel Gustafsson
> On 19 Mar 2024, at 08:07, Peter Eisentraut <peter@eisentraut.org> wrote:
>
> On 18.03.24 13:11, Daniel Gustafsson wrote:
>> Attached is a fresh rebase with only minor cosmetic touch-ups which I would
>> like to go ahead with during this CF.
>> Peter: does this address the comments you had on translation and code
>> duplication?
>
> Yes, this looks good.

Thanks for the review!  I took another look at this and pushed it.

--
Daniel Gustafsson