Thread: Problem with indices from 10 to 13

Problem with indices from 10 to 13

From
Daniel Diniz
Date:
Hello, I migrated from Postgres 10 to 13 and noticed a big increase in the runtime of a query I use. I ran EXPLAIN on both 10 and 13 and the difference is absurd; the indexes and data are the same in both. I've re-created the indexes and reindexed, but I don't know what changed from 10 to 13 that made the performance so bad, or whether 13 needs some extra parameter in its configuration.

Postgres 13

"QUERY PLAN"
"Limit  (cost=1.13..26855.48 rows=30 width=137) (actual time=10886.585..429803.463 rows=4 loops=1)"
"  ->  Nested Loop  (cost=1.13..19531164.71 rows=21819 width=137) (actual time=10886.584..429803.457 rows=4 loops=1)"
"        Join Filter: (h.ult_eve_id = ev.evento_id)"
"        Rows Removed by Join Filter: 252"
"        ->  Nested Loop  (cost=1.13..19457514.32 rows=21819 width=62) (actual time=10886.326..429803.027 rows=4 loops=1)"
"              ->  Nested Loop  (cost=0.85..19450780.70 rows=21819 width=55) (actual time=10886.259..429802.908 rows=4 loops=1)"
"                    ->  Index Scan Backward using hawbs_pkey on hawbs h  (cost=0.57..19444209.67 rows=21819 width=46) (actual time=10886.119..429802.676 rows=4 loops=1)"
"                          Filter: ((tipo_hawb_id = ANY ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~* convert_from('\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name)))"
"                          Rows Removed by Filter: 239188096"
"                    ->  Index Scan using empresas_pkey on empresas e  (cost=0.28..0.30 rows=1 width=17) (actual time=0.028..0.028 rows=1 loops=4)"
"                          Index Cond: (empresa_id = h.cliente_id)"
"              ->  Index Scan using contratos_pkey on contratos c  (cost=0.28..0.31 rows=1 width=15) (actual time=0.014..0.014 rows=1 loops=4)"
"                    Index Cond: (ctt_id = h.ctt_id)"
"        ->  Materialize  (cost=0.00..7.23 rows=215 width=27) (actual time=0.009..0.025 rows=64 loops=4)"
"              ->  Seq Scan on eventos ev  (cost=0.00..6.15 rows=215 width=27) (actual time=0.029..0.066 rows=67 loops=1)"
"Planning Time: 11.690 ms"
"Execution Time: 429803.611 ms"


Postgres 10

"QUERY PLAN"
"Limit  (cost=28489.06..28494.39 rows=30 width=137) (actual time=211.568..211.581 rows=4 loops=1)"
"  ->  Result  (cost=28489.06..32296.61 rows=21451 width=137) (actual time=211.566..211.578 rows=4 loops=1)"
"        ->  Sort  (cost=28489.06..28542.69 rows=21451 width=105) (actual time=211.548..211.551 rows=4 loops=1)"
"              Sort Key: h.hawb_id DESC"
"              Sort Method: quicksort  Memory: 25kB"
"              ->  Hash Join  (cost=2428.77..27855.52 rows=21451 width=105) (actual time=211.520..211.537 rows=4 loops=1)"
"                    Hash Cond: (h.ult_eve_id = ev.evento_id)"
"                    ->  Hash Join  (cost=2419.93..27735.63 rows=21451 width=62) (actual time=211.315..211.329 rows=4 loops=1)"
"                          Hash Cond: (h.ctt_id = c.ctt_id)"
"                          ->  Hash Join  (cost=2085.82..27345.18 rows=21451 width=55) (actual time=206.516..206.529 rows=4 loops=1)"
"                                Hash Cond: (h.cliente_id = e.empresa_id)"
"                                ->  Bitmap Heap Scan on hawbs h  (cost=1058.34..26261.32 rows=21451 width=46) (actual time=201.956..201.966 rows=4 loops=1)"
"                                      Recheck Cond: ((nome_des)::text ~~* convert_from('\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name))"
"                                      Filter: (tipo_hawb_id = ANY ('{1,10,3}'::integer[]))"
"                                      Heap Blocks: exact=4"
"                                      ->  Bitmap Index Scan on idx_nome_des  (cost=0.00..1052.98 rows=22623 width=0) (actual time=201.942..201.943 rows=4 loops=1)"
"                                            Index Cond: ((nome_des)::text ~~* convert_from('\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name))"
"                                ->  Hash  (cost=982.77..982.77 rows=3577 width=17) (actual time=4.542..4.542 rows=3577 loops=1)"
"                                      Buckets: 4096  Batches: 1  Memory Usage: 211kB"
"                                      ->  Seq Scan on empresas e  (cost=0.00..982.77 rows=3577 width=17) (actual time=0.007..3.189 rows=3577 loops=1)"
"                          ->  Hash  (cost=255.16..255.16 rows=6316 width=15) (actual time=4.777..4.777 rows=6316 loops=1)"
"                                Buckets: 8192  Batches: 1  Memory Usage: 361kB"
"                                ->  Seq Scan on contratos c  (cost=0.00..255.16 rows=6316 width=15) (actual time=0.006..2.420 rows=6316 loops=1)"
"                    ->  Hash  (cost=6.15..6.15 rows=215 width=27) (actual time=0.186..0.186 rows=215 loops=1)"
"                          Buckets: 1024  Batches: 1  Memory Usage: 21kB"
"                          ->  Seq Scan on eventos ev  (cost=0.00..6.15 rows=215 width=27) (actual time=0.008..0.103 rows=215 loops=1)"
"Planning time: 2.267 ms"
"Execution time: 211.776 ms"

Command:

explain analyze
select *
from hawbs h
inner join empresas e on h.cliente_id = e.empresa_id
inner join contratos c on h.ctt_id = c.ctt_id
inner join eventos ev on h.ult_eve_id = ev.evento_id
where h.nome_des ilike '%STEPHANY STOEW LEANDRO%'
  and h.tipo_hawb_id in (1, 10, 3)
order by h.hawb_id desc
limit 30;

Daniel Diniz
Development

Cel.: 11981464923

www.flashcourier.com.br

#SomosTodosFlash #GrupoMOVE3

"This message and its attachments are intended solely for their addressees and may contain confidential and/or legally privileged information. If you are not the intended recipient, you must not disclose, copy, distribute, or make any use of it. The company is not responsible for changes made to this message's content after it was sent."


Re: Problem with indices from 10 to 13

From
Ranier Vilela
Date:
On Tue, Sep 28, 2021 at 12:40, Daniel Diniz <daniel@flashcourier.com.br> wrote:
"                    ->  Index Scan Backward using hawbs_pkey on hawbs h  (cost=0.57..19444209.67 rows=21819 width=46) (actual time=10886.119..429802.676 rows=4 loops=1)"
"                          Filter: ((tipo_hawb_id = ANY ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~* convert_from('\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name)))"
"                          Rows Removed by Filter: 239188096"
The Index Scan Backward looks suspicious to me.
239,188,096 rows removed by the filter is a lot of work.

Did you run ANALYZE?

regards,
Ranier Vilela

Re: Problem with indices from 10 to 13

From
Tom Lane
Date:
Daniel Diniz <daniel@flashcourier.com.br> writes:
> Hello I migrated from postgres 10 to 13 and I noticed that there was a big increase in a querie that I use, I did
> explain in 10 and 13 and the difference is absurd, the indices and data are the same in 2. I've re-created and
> re-indexed but I don't know what changed from 10 to 13 which made the performance so bad, I don't know if it needs
> some extra parameter in some conf on 13.

This complaint is missing an awful lot of supporting information.

> "                                ->  Bitmap Heap Scan on hawbs h  (cost=1058.34..26261.32 rows=21451 width=46) (actual time=201.956..201.966 rows=4 loops=1)"
> "                                      Recheck Cond: ((nome_des)::text ~~* convert_from('\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name))"
> "                                      Filter: (tipo_hawb_id = ANY ('{1,10,3}'::integer[]))"
> "                                      Heap Blocks: exact=4"
> "                                      ->  Bitmap Index Scan on idx_nome_des  (cost=0.00..1052.98 rows=22623 width=0) (actual time=201.942..201.943 rows=4 loops=1)"
> "                                            Index Cond: ((nome_des)::text ~~* convert_from('\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name))"

For starters, how in the world did you get that query condition out of

> where h.nome_des ilike '%STEPHANY STOEW LEANDRO%'

?  What data type is h.nome_des, anyway?  And what kind of index
is that --- it couldn't be a plain btree, because we wouldn't consider
~~* to be indexable by a btree.

However, the long and the short of it is that this rowcount estimate
is off by nearly four orders of magnitude (21451 estimated vs. 4
actual is pretty awful).  It's probably just luck that you got an
acceptable plan out of v10, and bad luck that you didn't get one
out of v13 --- v13's estimate is not better, but it's not much
worse either.  You need to do something about improving that
estimate if you'd like reliable query planning.  Since I'm not
too sure which operator you're actually invoking, it's hard to
offer good advice about how hard that might be.

            regards, tom lane



RE: Problem with indices from 10 to 13

From
Daniel Diniz
Date:
Tom,

The index I use is a GIN index. I've been using it for about 2 years on 10 and it always gave me an almost immediate response with ilike.
But on 13, for reasons I don't understand, the same query takes anywhere from 10 seconds to minutes or even an hour; I've already re-created the index and reindexed, but without significant improvement. The btree indexes behave the same on both versions; it's only with the GIN index that this occurs.

Name of index: "idx_nome_des" gin (nome_des)
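(For context: a GIN index that can serve ilike normally has to use the pg_trgm trigram operator class, since a GIN index on a plain text/varchar column has no default operator class; psql's index listing sometimes omits the opclass name. A sketch of what such a definition would look like, assuming that is in fact what idx_nome_des is:)

```sql
-- Trigram support comes from the pg_trgm extension; without the
-- gin_trgm_ops operator class, a GIN index cannot serve ~~* (ILIKE).
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Trigram GIN index that can accelerate
--   WHERE nome_des ILIKE '%STEPHANY STOEW LEANDRO%'
CREATE INDEX idx_nome_des ON hawbs USING gin (nome_des gin_trgm_ops);
```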


RE: Problem with indices from 10 to 13

From
Daniel Diniz
Date:
Ranier,
I ran vacuumdb -U postgres -j100 -p5434 -azv and even so it didn't improve.
The EXPLAIN from the command above has now been running for 1h10min and still hasn't finished. 😥



Re: Problem with indices from 10 to 13

From
Alan Hodgson
Date:

PostgreSQL has an unfortunate love of scanning the pkey index backwards when you use LIMIT.

Try pushing your actual query into a subquery (with an offset 0 to prevent it being optimized out) and then do the LIMIT outside it.
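(A sketch of that rewrite, applied to the query from the first message; the OFFSET 0 acts as an optimization fence so the planner cannot flatten the subquery and flip to the backward pkey scan. Untested against the real schema:)

```sql
-- Inner query does the trigram search and joins; OFFSET 0 keeps it
-- from being flattened, so the outer LIMIT cannot push the planner
-- into a backward scan of hawbs_pkey.
SELECT *
FROM (
    SELECT h.*
    FROM hawbs h
    JOIN empresas  e  ON h.cliente_id = e.empresa_id
    JOIN contratos c  ON h.ctt_id     = c.ctt_id
    JOIN eventos   ev ON h.ult_eve_id = ev.evento_id
    WHERE h.nome_des ILIKE '%STEPHANY STOEW LEANDRO%'
      AND h.tipo_hawb_id IN (1, 10, 3)
    OFFSET 0        -- optimization fence
) sub
ORDER BY sub.hawb_id DESC
LIMIT 30;
```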

Re: Problem with indices from 10 to 13

From
Tom Lane
Date:
Daniel Diniz <daniel@flashcourier.com.br> writes:
> The index I use is the GIN.

pg_trgm, you mean?  That answers one question, but you still didn't
explain what type h.nome_des is, nor how bytea and convert_from()
are getting into the picture.

The second part of that is probably not critical, since the planner
should be willing to reduce the convert_from() call to a constant
for planning purposes, so I'm unclear as to why the estimate for
the ilike clause is so bad.  Have you tried increasing the statistics
target for h.nome_des to see if the estimate gets better?

            regards, tom lane



RE: Problem with indices from 10 to 13

From
Daniel Diniz
Date:
Tom,

"pg_trgm, you mean?  That answers one question, but you still didn't
explain what type h.nome_des is, nor how bytea and convert_from()
are getting into the picture."
The column type is: nome_des                | character varying(60)

"The second part of that is probably not critical, since the planner
should be willing to reduce the convert_from() call to a constant
for planning purposes, so I'm unclear as to why the estimate for
the ilike clause is so bad.  Have you tried increasing the statistics
target for h.nome_des to see if the estimate gets better?"
How do I increase the statistics target for h.nome_des?
And why, when the same dump is loaded into 10 and into 13, is there this difference?

Thanks


Re: Problem with indices from 10 to 13

From
Justin Pryzby
Date:
On Wed, Sep 29, 2021 at 02:11:15AM +0000, Daniel Diniz wrote:
> How do i increase  the statistics target for h.nome_des?
> And why uploading the dump at 10 and at 13 is there this difference?

It's like ALTER TABLE h ALTER nome_des SET STATISTICS 2000; ANALYZE h;
https://www.postgresql.org/docs/current/sql-altertable.html

-- 
Justin



RE: Problem with indices from 10 to 13

From
Daniel Diniz
Date:
Justin, I tested it with several values (200, 2000, 10000, -1) and all of them took more or less the same time.

For example, with ALTER TABLE hawbs ALTER nome_des SET STATISTICS 2000; ANALYZE hawbs;:
"QUERY PLAN"
"Limit  (cost=1.13..28049.86 rows=30 width=137) (actual time=5462.123..363089.923 rows=4 loops=1)"
"  ->  Nested Loop  (cost=1.13..19523788.64 rows=20882 width=137) (actual time=5462.122..363089.915 rows=4 loops=1)"
"        Join Filter: (h.ult_eve_id = ev.evento_id)"
"        Rows Removed by Join Filter: 252"
"        ->  Nested Loop  (cost=1.13..19453301.90 rows=20882 width=62) (actual time=5461.844..363089.429 rows=4 loops=1)"
"              ->  Nested Loop  (cost=0.85..19446849.38 rows=20882 width=55) (actual time=5461.788..363089.261 rows=4 loops=1)"
"                    ->  Index Scan Backward using hawbs_pkey on hawbs h  (cost=0.57..19440557.11 rows=20882 width=46) (actual time=5461.644..363088.839 rows=4 loops=1)"
"                          Filter: ((tipo_hawb_id = ANY ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~* convert_from('\x255354455048414e592053544f4557204c45414e44524f25'::bytea, 'LATIN1'::name)))"
"                          Rows Removed by Filter: 239188096"
"                    ->  Index Scan using empresas_pkey on empresas e  (cost=0.28..0.30 rows=1 width=17) (actual time=0.037..0.038 rows=1 loops=4)"
"                          Index Cond: (empresa_id = h.cliente_id)"
"              ->  Index Scan using contratos_pkey on contratos c  (cost=0.28..0.31 rows=1 width=15) (actual time=0.021..0.021 rows=1 loops=4)"
"                    Index Cond: (ctt_id = h.ctt_id)"
"        ->  Materialize  (cost=0.00..7.23 rows=215 width=27) (actual time=0.011..0.023 rows=64 loops=4)"
"              ->  Seq Scan on eventos ev  (cost=0.00..6.15 rows=215 width=27) (actual time=0.033..0.052 rows=67 loops=1)"
"Planning Time: 10.452 ms"
"Execution Time: 363090.127 ms"
                                                                                        


Re: Problem with indices from 10 to 13

From
Pavel Stehule
Date:


On Wed, Sep 29, 2021 at 6:01, Daniel Diniz <daniel@flashcourier.com.br> wrote:

Maybe you can try a composite index based on hawbs_pkey and tipo_hawb_id.

The second problem may be the low LIMIT value: do you get a faster result without the LIMIT clause?
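(A sketch of that composite index; the column order here is a guess, and it assumes hawbs_pkey is on hawb_id, as the "Sort Key: h.hawb_id DESC" in the v10 plan suggests:)

```sql
-- Hypothetical composite index: hawb_id first so a scan in
-- ORDER BY hawb_id DESC order is possible, with tipo_hawb_id
-- available in the index to check the IN (1, 10, 3) condition.
CREATE INDEX idx_hawbs_hawbid_tipo
    ON hawbs (hawb_id DESC, tipo_hawb_id);
```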

Regards

Pavel


                                                                                        
