Discussion: bug or lacking doc hint
On 6/25/23 10:01, Marc Millas wrote:
> Hi,
>
> I have had a perf (++) pb with a join plan, cf the "pb with join plan" thread.
> I did simplify the thing up to when it's a simple join between a 15M lines
> table and a 30k lines table.
> If I put in the ON part something like table1.a = table2.b, Postgres does
> the job in around 1 second.
>
> If in the ON part of the join I write table1.a = table2.b OR
> substr(table1.c, x, x) = table2.d

That left side substr() can't be good for the query optimizer. (Or I'm old
and still using habits picked up 25 years ago on a different RDBMS.)

Maybe a function index on table1 would help.

--
Born in Arizona, moved to Babylonia.
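For concreteness, Ron's function-index idea might be sketched like this; table1, column c, and the substr() offsets are placeholders standing in for the redacted names in the original post:

```sql
-- An expression index matching the left-hand side of the join condition,
-- so substr(table1.c, 1, 4) = table2.d can be satisfied by an index scan
-- instead of computing substr() for every candidate row.
CREATE INDEX ON table1 (substr(c, 1, 4));
```

For the planner to use it in a join, the expression in the query must match the indexed expression exactly.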
On Mon, 26 Jun 2023 at 03:02, Marc Millas <marc.millas@mokadb.com> wrote:
> When I ask this list, David Rowley suggest to rewrite the SQL, replacing the OR by a union.
>
> Fine, this do work, even if a bit complex as the original SQL was a set of intricate joins.
>
>
> So, either this behaviour ( postgres unable to find an appropriate plan for join with OR clauses)
> is a true bug which is waiting to be corrected, either the doc is lacking a chapter on which one can read that Postgres does NOT support that syntax as soon as there is some data volumes.
The problem is that out of the 3 methods PostgreSQL uses to join
tables, only 1 of them supports join conditions with an OR clause.
Merge Join cannot do this because results can only be ordered one way
at a time. Hash Join technically could do this, but it would require
that it built multiple hash tables. Currently, it only builds one
table. That leaves Nested Loop as the join method to implement joins
with OR clauses. Unfortunately, nested loops are quadratic and the
join condition must be evaluated once per each cartesian product row.
That does not scale well.
Tom Lane did start some work [1] to allow the planner to convert some
queries to use UNION instead of evaluating OR clauses, but, if I
remember correctly, it didn't handle ORs in join conditions, though
perhaps having it do that would be a natural phase 2. I don't recall
why the work stopped.
> So, now I am writing some kind of recipe book for the users of that DB.
>
> What should I write ?
You're asking a public mailing list to write private documentation for
the company you work for? hmm.
David
[1] https://www.postgresql.org/message-id/flat/20180904015910.GA1797012%40rfd.leadboat.com#0ddccc114c74d1ddaedcb07b64530967
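The UNION rewrite suggested above might look like the following sketch; the table and column names and the substr() arguments are the hypothetical ones from the original post:

```sql
-- Original shape: the OR in the join condition forces a nested loop.
SELECT t1.*, t2.*
FROM table1 t1
JOIN table2 t2
  ON t1.a = t2.b
  OR substr(t1.c, 1, 4) = t2.d;

-- Rewritten: each branch can get its own hash or merge join.
SELECT t1.*, t2.*
FROM table1 t1
JOIN table2 t2 ON t1.a = t2.b
UNION
SELECT t1.*, t2.*
FROM table1 t1
JOIN table2 t2 ON substr(t1.c, 1, 4) = t2.d;
```

Note that UNION also collapses rows that are genuinely duplicated in the base data, so the rewrite is not always semantically identical to the OR form.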
David Rowley <dgrowleyml@gmail.com> writes:
> The problem is that out of the 3 methods PostgreSQL uses to join
> tables, only 1 of them supports join conditions with an OR clause.
> Merge Join cannot do this because results can only be ordered one way
> at a time. Hash Join technically could do this, but it would require
> that it built multiple hash tables. Currently, it only builds one
> table. That leaves Nested Loop as the join method to implement joins
> with OR clauses. Unfortunately, nested loops are quadratic and the
> join condition must be evaluated once per each cartesian product row.
We can do better than that if the OR'd conditions are each amenable
to an index scan on one of the tables: then it can be a nestloop with
a bitmap-OR'd inner index scan. I thought the upthread advice to
convert the substr() condition into something that could be indexed
was on-point.
> Tom Lane did start some work [1] to allow the planner to convert some
> queries to use UNION instead of evaluating OR clauses, but, if I
> remember correctly, it didn't handle ORs in join conditions, though
> perhaps having it do that would be a natural phase 2. I don't recall
> why the work stopped.
As I recall, I was having difficulty convincing myself that
de-duplication of results (for cases where the same row satisfies
more than one of the OR'd conditions) would work correctly.
You can't just blindly make it a UNION because that might remove
identical rows that *should* appear more than once in the result.
regards, tom lane
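Tom's de-duplication hazard can be shown on a toy example (self-contained VALUES lists, not the original tables):

```sql
-- t1 legitimately contains the same row twice; both copies match t2 on a = b.
WITH t1(a, c) AS (VALUES (1, 'x'), (1, 'x')),
     t2(b, d) AS (VALUES (1, 'y'))
SELECT *
FROM t1 JOIN t2 ON t1.a = t2.b OR t1.c = t2.d;
-- returns 2 rows

-- A blind UNION rewrite of the same join loses one of them:
WITH t1(a, c) AS (VALUES (1, 'x'), (1, 'x')),
     t2(b, d) AS (VALUES (1, 'y'))
SELECT * FROM t1 JOIN t2 ON t1.a = t2.b
UNION
SELECT * FROM t1 JOIN t2 ON t1.c = t2.d;
-- returns 1 row: the duplicate that *should* appear twice has been removed
```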
On Sun, Jun 25, 2023 at 11:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

David Rowley <dgrowleyml@gmail.com> writes:
> The problem is that out of the 3 methods PostgreSQL uses to join
> tables, only 1 of them supports join conditions with an OR clause.
> Merge Join cannot do this because results can only be ordered one way
> at a time. Hash Join technically could do this, but it would require
> that it built multiple hash tables. Currently, it only builds one
> table. That leaves Nested Loop as the join method to implement joins
> with OR clauses. Unfortunately, nested loops are quadratic and the
> join condition must be evaluated once per each cartesian product row.
We can do better than that if the OR'd conditions are each amenable
to an index scan on one of the tables: then it can be a nestloop with
a bitmap-OR'd inner index scan. I thought the upthread advice to
convert the substr() condition into something that could be indexed
was on-point.

OK, but one of the tables within the join(s) is 10 billion rows, split into
120 partitions. Creating something like 20 more indexes to fulfill that
condition has its own problems.
> Tom Lane did start some work [1] to allow the planner to convert some
> queries to use UNION instead of evaluating OR clauses, but, if I
> remember correctly, it didn't handle ORs in join conditions, though
> perhaps having it do that would be a natural phase 2. I don't recall
> why the work stopped.
As I recall, I was having difficulty convincing myself that
de-duplication of results (for cases where the same row satisfies
more than one of the OR'd conditions) would work correctly.
You can't just blindly make it a UNION because that might remove
identical rows that *should* appear more than once in the result.

I did rewrite the query using a CTE and union(s). For that query, there is
no dedup point.

But my problem is that this DB will be used by a bunch of people writing
raw SQL queries, and I cannot let them write queries that are going to run
for ages and eventually crash on temp_file_limit after hours, every now
and then.

So my understanding of the above is that I must inform the users NOT to
use OR clauses in joins, which may be a problem by itself.

regards
Marc
regards, tom lane

Marc MILLAS
Sounds like the problem you are having is that the server is running out of
temporary resources for the operation that users are trying to do. So,
according to Tom, on the Postgres side the operation cannot be optimized
further.

I think you have a few choices here:
- See if increasing the resources of the server will allow them to run the operation
- Ask users not to do that operation
- Use an extension like Citus to scale horizontally

But I'm wondering why a massively inefficient join is needed in the first
place. Shouldn't joins follow keyed relationships? So ideally a unique
indexed column, but at the very least an indexed column. Why is a join
required on a dynamically calculated substring? Can it be made into a
static computed value and indexed? Substring sounds like an op that should
be in the filter stage.

Can you describe your data model? Maybe we can give some specific advice.
Regards,
Avin
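Avin's "static computed value" suggestion could be sketched with a stored generated column (available since PostgreSQL 12; names and offsets are hypothetical, carried over from the earlier example):

```sql
-- Materialize the substring once per row, then index it; joins can then be
-- written against the plain column c_part instead of substr(c, ...).
ALTER TABLE table1
  ADD COLUMN c_part text GENERATED ALWAYS AS (substr(c, 1, 4)) STORED;
CREATE INDEX ON table1 (c_part);
```

On a 10-billion-row partitioned table, the rewrite cost and extra storage of such a column are of course a real trade-off.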
On Mon, Jun 26, 2023 at 5:47 AM Avin Kavish <avin@baseboard.ai> wrote:
> Can you describe your data model? Maybe we can give some specific advice.

There is a set of big tables containing fine-grained health data. The DB is
accessed by people doing research on various kinds of sicknesses. So, by
nature, all columns (I mean ALL) can be used in every kind of SQL,
including, obviously, lots of intricate joins.
On 6/26/23 07:22, Marc Millas wrote:
> There is a set of big tables containing fine grain health data. The DB is
> accessed by people doing research on various kind of sicknesses. So, by
> nature, all columns (I mean ALL) can be used for every kind of SQL
> including, obviously, lots of intricated joins.
This sounds like the kind of problem normally solved by data warehouses. Is your schema designed like a DW, or is it in 3NF?
--
Born in Arizona, moved to Babylonia.
On Mon, Jun 26, 2023 at 4:05 PM Ron <ronljohnsonjr@gmail.com> wrote:
> This sounds like the kind of problem normally solved by data warehouses.
> Is your schema designed like a DW, or is it in 3NF?

It's, indeed, some kind of DWH, but it's neither a star nor a snowflake,
at least not used like those standard schemas.

In one of the big tables (10 billion+ rows) there are around 60 columns
describing one event: some guy has had a given sickness, got a given
medication, etc.

The problem is that it's not one simple event with a set of dimensions; the
people using that DB are NOT looking for an event according to various
criteria, they are looking for correlations between each of the 60+
columns.

As a consequence, very few indexes are used, as most requests end in some
kind of huge sequential read. The machine was built for this and performs
well, but some requests are posing problems and we must find
solutions/workarounds.

One of the users did rewrite the request using a SELECT DISTINCT matched
with LEFT JOIN(s) and a set of "table.a IS NOT NULL" conditions. Looks
crazy, but it does work. I'll get the request tomorrow.
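The DISTINCT-plus-LEFT-JOIN rewrite Marc describes presumably has this shape (a sketch using the hypothetical names from upthread, not the actual query):

```sql
-- Each OR branch becomes its own LEFT JOIN; a row qualifies when at least
-- one branch found a match, and DISTINCT collapses multi-branch matches.
SELECT DISTINCT t1.*
FROM table1 t1
LEFT JOIN table2 ta ON t1.a = ta.b
LEFT JOIN table2 tb ON substr(t1.c, 1, 4) = tb.d
WHERE ta.b IS NOT NULL      -- first OR branch matched
   OR tb.d IS NOT NULL;     -- second OR branch matched
```

As with UNION, the DISTINCT step changes semantics if the same row should legitimately appear more than once in the result.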
And please be careful with Views that depend on views. While it works, it creates downsides when you attempt to
Hi All
Got a weird one. I am using the regex below to case match numeric only values.
'^([0-9]+[.]?[0-9]*)$'
This works well by and large, but not for a value like “1234:567”. That value seems to match as positive and then fails in the subsequent cast because it is not an actual number.
Any ideas? (PG13)
Z
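The case-match described reads like the following sketch (with a hypothetical column name, val); CASE evaluates its THEN arm only when the WHEN test passes, so a non-matching value should never reach the cast:

```sql
SELECT val,
       CASE WHEN val ~ '^([0-9]+[.]?[0-9]*)$'
            THEN val::numeric
       END AS num
FROM (VALUES ('1234.56'), ('1234:567')) AS v(val);
-- '1234.56' casts to 1234.56; '1234:567' fails the regex and yields NULL
-- instead of raising a cast error.
```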
> Hi All
>
> Got a weird one. I am using the regex below to case match numeric only values.
>
> '^([0-9]+[.]?[0-9]*)$'
>
> This works well by and large but not for a value like “1234:567”. This seems to match positive and then fails due to not being an actual number in the subsequent cast.
psql (15.3 (Homebrew), server 13.7)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, compression: off)
Type "help" for help.
bcaas=> select '1234:56' ~ '^([0-9]+[.]?[0-9]*)$';
?column?
----------
f
(1 row)
bcaas=> select '1234.56' ~ '^([0-9]+[.]?[0-9]*)$';
?column?
----------
t
(1 row)
On Tue, 27 Jun 2023 at 14:59, Zahir Lalani <ZahirLalani@oliver.agency> wrote:
> Got a weird one. I am using the regex below to case match numeric only values.
>
> '^([0-9]+[.]?[0-9]*)$'
>
> This works well by and large but not for a value like “1234:567”. This
> seems to match positive and then fails due to not being an actual number
> in the subsequent cast.
>
> Any ideas? (PG13)

When executed from psql it works OK. What tool did you use to run your
query? I would suspect that the tool interprets square brackets somehow,
so that your regex actually becomes '^([0-9]+.?[0-9]*)$', which matches
any character between the digits.

If you enable query logging in your system (either uncomment
log_statement = 'all' in postgresql.conf, or run
ALTER SYSTEM SET log_statement = 'all'; and restart), you will find the
actual query in the server's log file:

2023-06-27 16:12:35.221 EDT [23580] LOG:  statement: select '1234:56' ~ '^([0-9]+[.]?[0-9]*)$';

Make sure it remains the same.

--
Sergey
> -----Original Message-----
> > Hi All
> >
> > Got a weird one. I am using the regex below to case match numeric only
> > values.
> >
> > '^([0-9]+[.]?[0-9]*)$'
> >
> > This works well by and large but not for a value like “1234:567”. This
> > seems to match positive and then fails due to not being an actual number
> > in the subsequent cast.
> >
> > Any ideas? (PG13)
> >
> > Z

My apologies for wasting your time ☹ The issue was in an unrelated field!

Thank you
Z