Re: pb with join plan - Mailing list pgsql-general

From: Kirk Wolak
Subject: Re: pb with join plan
Msg-id: CACLU5mSRW1K+4Z_rfOkzaas5iTuXNvLELsTRXFfac7ut02wTRQ@mail.gmail.com
In response to: Re: pb with join plan (Marc Millas <marc.millas@mokadb.com>)
Responses: Re: pb with join plan
List: pgsql-general
On Wed, Jun 21, 2023 at 12:10 PM Marc Millas <marc.millas@mokadb.com> wrote:
On Wed, Jun 21, 2023 at 12:43 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:
On 6/21/23 00:26, Marc Millas wrote:
>
>
> On Tue, Jun 20, 2023 at 11:19 PM David Rowley <dgrowleyml@gmail.com> wrote:
>
> On Wed, 21 Jun 2023 at 08:34, Marc Millas <marc.millas@mokadb.com> wrote:
> >
> > On Tue, Jun 20, 2023 at 10:14 PM David Rowley <dgrowleyml@gmail.com> wrote:
> >>
> >> On Wed, 21 Jun 2023 at 07:42, Marc Millas <marc.millas@mokadb.com> wrote:
> >> > But if I do the same with clause one OR clause 2, I have to kill the request after an hour, seeing the filesystem showing more than 140 Mb of increased usage.
It's a bit weird the "victor" table is joined seemingly without any join
conditions, leading to a cross join (which massively inflates the cost
for joins above it). Maybe the anonymized plan mangles it somehow.

So I did try to simplify my pb. I created a table with the result of the first 3 joins. That table has 15M lines, and all tables have been vacuum analyzed.

Now if I do an explain analyze of a simple join between that table and my original table 4 using a simple = clause, I get a result in about one second, and the planner's row estimates seem in line with the observed values.

If I use a substr(table1.a) = table2.b clause instead, the explain analyze takes 21 seconds and the planner estimates a 65M-row result set while the observed result is 330k rows. So here it is 20 times slower, and the discrepancy between the planner's row estimate and reality is a factor of 200.

Now, if I try an explain analyze with a join on a = b or substr(c) = d or e = f, then... I kill the query after a quarter of an hour without any answer. If I just explain the query, the planner's row estimate becomes more than 2 billion...

The extremely simple query and plan are here, without automatic obfuscation.
First, I am not sure why you cannot send us the explain analyze. But moving on...
substr() is a function that mutilates a value such that the index becomes useless...
If you are looking for the LEFT() of the value, then an INDEX can be used.
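As a rough illustration (a made-up table, not your actual schema), the difference looks like this:

  -- hypothetical table with a plain btree index on the column
  CREATE TABLE t (fld text COLLATE "C");
  CREATE INDEX t_fld_idx ON t (fld);

  -- the index cannot help here: it stores fld, not substr(fld, ...)
  SELECT * FROM t WHERE substr(fld, 1, 4) = 'abcd';

  -- a left-anchored pattern on the bare column can use the index
  -- (because of the "C" collation; otherwise text_pattern_ops is needed)
  SELECT * FROM t WHERE fld LIKE 'abcd%';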
I have COLLATION "C" and when I query:
WHERE fld LIKE fld_b || '%'
The optimizer constructs a plan that uses the index on "fld"...
But when I try:
WHERE fld LIKE CONCAT_WS('', fld_b, '%')
It doesn't use the index (because the function call is too complicated for the planner to see through).
When you use functions in WHERE clauses, either the index has to be built on that expression, or often the index cannot be used at all.
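A sketch of that option, continuing the hypothetical table above (the expression in the index has to match the WHERE clause exactly):

  -- expression index matching the function call used in the predicate
  CREATE INDEX t_fld_substr_idx ON t (substr(fld, 1, 4));

  -- this predicate can now be answered from t_fld_substr_idx
  SELECT * FROM t WHERE substr(fld, 1, 4) = 'abcd';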
BTW, I noted the COLLATION. That turned out to be important, because my first DB test did NOT use that collation, and the result
of the LIKE was the non-indexed version...
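A sketch of the collation effect (hypothetical table again, assuming the database default collation is not "C"):

  -- column in the database's default, non-"C" collation
  CREATE TABLE t2 (fld text);
  CREATE INDEX t2_fld_idx ON t2 (fld);                       -- not usable for LIKE 'abc%'
  CREATE INDEX t2_fld_pat_idx ON t2 (fld text_pattern_ops);  -- usable for LIKE 'abc%'

  SELECT * FROM t2 WHERE fld LIKE 'abc%';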
I hope you find something useful in here.
Also, WHERE fld <> 72... (unless you have a heavily skewed set of statistics, I read that as: SCAN everything, and check later, because this should filter very few rows), whereas fld = 72 will be blazingly fast.
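To illustrate (hypothetical table with an indexed integer column; the actual plan depends on your data and statistics):

  CREATE TABLE t3 (c int);
  CREATE INDEX t3_c_idx ON t3 (c);

  -- "<>" is expected to keep nearly every row, so a sequential scan is likely
  EXPLAIN SELECT * FROM t3 WHERE c <> 72;

  -- "=" is expected to match few rows, so an index scan is likely
  EXPLAIN SELECT * FROM t3 WHERE c = 72;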
Kirk