Re: Postgres chooses slow query plan from time to time - Mailing list pgsql-performance

From Kristjan Mustkivi
Subject Re: Postgres chooses slow query plan from time to time
Msg-id CAOQPKati1jk6PniYNcodoC6T=4+ZrWxnmoVHOJe92-BpMsRHmg@mail.gmail.com
In response to Re: Postgres chooses slow query plan from time to time  (Jeff Janes <jeff.janes@gmail.com>)
Responses Re: Postgres chooses slow query plan from time to time  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-performance
I am very sorry, I indeed copy-pasted an incomplete plan. Here it is in full:

2021-09-14 06:55:33 UTC, pid=27576  db=mydb, usr=myuser, client=ip, app=PostgreSQL JDBC Driver, line=55 LOG:  duration: 5934.165 ms  plan:
        Query Text: SELECT *   FROM myschema.mytable pbh WHERE pbh.product_code = $1   AND pbh.cage_player_id = $2   AND pbh.cage_code = $3   AND balance_type = $4   AND pbh.modified_time < $5 ORDER BY pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY
        Limit  (cost=0.70..6.27 rows=1 width=66) (actual time=5934.154..5934.155 rows=1 loops=1)
          Buffers: shared hit=7623 read=18217
          ->  Index Scan Backward using mytable_idx2 on mytable pbh  (cost=0.70..21639.94 rows=3885 width=66) (actual time=5934.153..5934.153 rows=1 loops=1)
                Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND (modified_time < $5))
                Filter: (((product_code)::text = ($1)::text) AND ((balance_type)::text = ($4)::text))
                Rows Removed by Filter: 95589
                Buffers: shared hit=7623 read=18217

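To poke at this outside the application, something along these lines in psql should reproduce the parameterized plan (just a sketch: the parameter values below are placeholders, not real data, and plan_cache_mode needs PostgreSQL 12 or later):

PREPARE pbh_q (varchar, bigint, int, varchar, timestamptz) AS
    SELECT * FROM myschema.mytable pbh
    WHERE pbh.product_code = $1
      AND pbh.cage_player_id = $2
      AND pbh.cage_code = $3
      AND balance_type = $4
      AND pbh.modified_time < $5
    ORDER BY pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY;

-- Compare the generic plan (what a long-lived prepared statement may switch to)
-- with the custom plan built for the concrete values:
SET plan_cache_mode = force_generic_plan;
EXPLAIN (ANALYZE, BUFFERS) EXECUTE pbh_q ('some_product', 1, 1, 'some_balance', now());
SET plan_cache_mode = force_custom_plan;
EXPLAIN (ANALYZE, BUFFERS) EXECUTE pbh_q ('some_product', 1, 1, 'some_balance', now());
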
Also, I have incrementally made the following changes: made autovacuum
more aggressive on this table, raised the per-column statistics
targets, and created an extended statistics object. Yet there is no
change in the plan behavior. The table as it is now:

\d+ myschema.mytable
                                        Table "myschema.mytable"
     Column     │            Type             │ Collation │ Nullable │ Default │ Storage  │ Stats target │ Description
────────────────┼─────────────────────────────┼───────────┼──────────┼─────────┼──────────┼──────────────┼─────────────
 cage_code      │ integer                     │           │ not null │         │ plain    │ 500          │
 cage_player_id │ bigint                      │           │ not null │         │ plain    │ 500          │
 product_code   │ character varying(30)       │           │ not null │         │ extended │ 500          │
 balance_type   │ character varying(30)       │           │ not null │         │ extended │ 500          │
 version        │ bigint                      │           │ not null │         │ plain    │              │
 modified_time  │ timestamp(3) with time zone │           │ not null │         │ plain    │ 500          │
 amount         │ numeric(38,8)               │           │ not null │         │ main     │              │
 change         │ numeric(38,8)               │           │ not null │         │ main     │              │
 transaction_id │ bigint                      │           │ not null │         │ plain    │              │
Indexes:
    "mytable_pk" PRIMARY KEY, btree (cage_code, cage_player_id, product_code, balance_type, version)
    "mytable_idx1" btree (modified_time)
    "mytable_idx2" btree (cage_code, cage_player_id, modified_time)
Statistics objects:
    "myschema"."statistics_pbh_1" (ndistinct, dependencies) ON cage_code, cage_player_id, product_code, balance_type FROM myschema.mytable
Options: autovacuum_vacuum_scale_factor=0.0, autovacuum_vacuum_threshold=1000
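
For reference, the settings above were put in place with statements roughly like these (a sketch from memory; the exact commands may have differed slightly, but the end result matches the \d+ output):

ALTER TABLE myschema.mytable
    SET (autovacuum_vacuum_scale_factor = 0.0, autovacuum_vacuum_threshold = 1000);

-- Raise the statistics targets on the columns used in the WHERE clause:
ALTER TABLE myschema.mytable ALTER COLUMN cage_code      SET STATISTICS 500;
ALTER TABLE myschema.mytable ALTER COLUMN cage_player_id SET STATISTICS 500;
ALTER TABLE myschema.mytable ALTER COLUMN product_code   SET STATISTICS 500;
ALTER TABLE myschema.mytable ALTER COLUMN balance_type   SET STATISTICS 500;
ALTER TABLE myschema.mytable ALTER COLUMN modified_time  SET STATISTICS 500;

-- Extended statistics on the correlated filter columns:
CREATE STATISTICS myschema.statistics_pbh_1 (ndistinct, dependencies)
    ON cage_code, cage_player_id, product_code, balance_type
    FROM myschema.mytable;

ANALYZE myschema.mytable;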

I will investigate the index bloat and whether something is blocking
the query, as suggested by Laurenz Albe.
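
For those checks I have something like this in mind (a sketch; it assumes the pgstattuple extension can be installed on this instance):

CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- B-tree density/fragmentation estimate for the index the slow plan uses:
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('myschema.mytable_idx2');

-- Is anything waiting on locks while the query runs?
SELECT pid, wait_event_type, wait_event, state, left(query, 80) AS query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';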

Best,

Kristjan

On Tue, Sep 14, 2021 at 3:26 PM Jeff Janes <jeff.janes@gmail.com> wrote:
>
> On Tue, Sep 14, 2021 at 3:55 AM Kristjan Mustkivi <sonicmonkey@gmail.com> wrote:
>>
>> Hello Tomas,
>>
>> The auto explain analyze caught this:
>>
>> 2021-09-14 06:55:33 UTC, pid=12345  db=mydb, usr=myuser, client=ip, app=PostgreSQL JDBC Driver, line=55 LOG:  duration: 5934.165 ms  plan:
>>   Query Text: SELECT *   FROM myschema.mytable pbh WHERE pbh.product_code = $1   AND pbh.cage_player_id = $2   AND pbh.cage_code = $3   AND balance_type = $4   AND pbh.modified_time < $5 ORDER BY pbh.modified_time DESC FETCH FIRST 1 ROWS ONLY
>>   Limit  (cost=0.70..6.27 rows=1 width=66) (actual time=5934.154..5934.155 rows=1 loops=1)
>>     Buffers: shared hit=7623 read=18217
>>     ->  Index Scan Backward using mytable_idx2 on mytable pbh (cost=0.70..21639.94 rows=3885 width=66) (actual time=5934.153..5934.153 rows=1 loops=1)
>>           Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND (modified_time < $5))
>>
>> So it expected to get 3885 rows, but got just 1. So this is the
>> statistics issue, right?
>
>
> That would be true if there were no LIMIT.  But with the LIMIT, all this means is that it stopped actually
> scanning after it found one row, but it estimates that if it didn't stop it would have found 3885.  So it is
> not very informative.  But the above plan appears incomplete, there should be a line for "Rows Removed by
> Filter", and I think that that is what we really want to see in this case.
>
> Cheers,
>
> Jeff



--
Kristjan Mustkivi

Email: kristjan.mustkivi@gmail.com


