BUG #17540: Prepared statement: PG switches to a generic query plan which is consistently much slower - Mailing list pgsql-bugs
From: PG Bug reporting form
Subject: BUG #17540: Prepared statement: PG switches to a generic query plan which is consistently much slower
Msg-id: 17540-7aa1855ad5ec18b4@postgresql.org
Responses:
- Re: BUG #17540: Prepared statement: PG switches to a generic query plan which is consistently much slower ("David G. Johnston" <david.g.johnston@gmail.com>)
- Re: BUG #17540: Prepared statement: PG switches to a generic query plan which is consistently much slower (David Rowley <dgrowleyml@gmail.com>)
List: pgsql-bugs
The following bug has been logged on the website:

Bug reference: 17540
Logged by: William Duclot
Email address: william.duclot@gmail.com
PostgreSQL version: 14.4
Operating system: GNU/Linux (Red Hat 8.5.0)

Description:

My application uses prepared statements. This section of the documentation is very relevant to the rest of this report: https://www.postgresql.org/docs/current/sql-prepare.html#SQL-PREPARE-NOTES. Here is a minimal reproduction of the problem I observe, which I will explain below: https://dbfiddle.uk/?rdbms=postgres_14&fiddle=6b01d161da27379844e7602a16543626

Scenario:
- I create a fairly simple table (id + timestamp), with an index on the timestamp column.
- I create a simple prepared statement for `SELECT MIN(id), MAX(id) FROM relation_tuple_transaction WHERE timestamp >= $1;`
- I execute the prepared statement multiple times (more than 5).

From the 6th execution onwards, the query plan used by Postgres changes. This is not entirely unexpected, as the documentation linked above makes it clear that Postgres may decide to switch to a generic query plan after the 5th execution. And indeed, the estimated cost of the generic plan is lower than the custom plan's, so the query planner behaves correctly according to the documentation.

Now, the problem: the execution of the generic plan is multiple orders of magnitude slower than the custom query plan ("actual time" for the generic plan is over 6500x slower), yet Postgres sticks with the generic plan. This was very unexpected for me: I was very happy with the first 5 plans, yet Postgres switched to another plan that is enormously slower and stayed with it. Passing a different parameter to the prepared statement (e.g. `now() - interval '5 days'`) does produce a "slow" custom plan, similar to the generic plan.
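The scenario above can be sketched in psql. The query and table name come from the report; the exact DDL and the parameter value are assumptions for illustration:

```sql
-- Hypothetical minimal schema matching the report's description
CREATE TABLE relation_tuple_transaction (
    id        bigserial PRIMARY KEY,
    timestamp timestamptz NOT NULL DEFAULT now()
);
CREATE INDEX ix_rtt_timestamp ON relation_tuple_transaction (timestamp);

-- The prepared statement from the report
PREPARE q (timestamptz) AS
    SELECT MIN(id), MAX(id)
    FROM relation_tuple_transaction
    WHERE timestamp >= $1;

-- Executions 1-5 are planned with custom plans; from the 6th onwards
-- the planner may switch to a cached generic plan if its estimated
-- (not actual) cost is lower. Repeat this to observe the switch:
EXPLAIN (ANALYZE) EXECUTE q(now() - interval '1 minute');
```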
This means that the query planner does not realise that the actual parameter value matters a lot, that the parameters used _in practice_ result in a faster plan than the generic plan (100% of the first 5 executions), and that it therefore should not stick to the generic plan.

It is particularly insidious because I was not even aware I was using prepared statements. Like most applications, mine uses a database driver (pgx, in Go), which I learnt uses `PQexecPrepared` under the hood. This creates a sort of "unnamed prepared statement" that behaves the same as this minimal reproduction, without me ever being aware that prepared statements are involved anywhere between my code and the database. That makes debugging very complex, as there is no reason to suspect anything prepared-statement-related, and a manual EXPLAIN ANALYZE outside of a prepared statement will not show the problem.

Note: setting `plan_cache_mode = force_custom_plan` database-wide solved the immediate problem, but it is a workaround. It was a very welcome workaround, though.
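The workaround mentioned above can be applied at several scopes. `plan_cache_mode` is a standard PostgreSQL (12+) setting; the database name below is a placeholder:

```sql
-- Per session:
SET plan_cache_mode = force_custom_plan;

-- Database-wide, as used in the report ("mydb" is a placeholder name;
-- takes effect for new sessions):
ALTER DATABASE mydb SET plan_cache_mode = force_custom_plan;

-- Restore the default cost-based choice between custom and generic plans:
SET plan_cache_mode = auto;
```

Forcing custom plans trades per-execution planning time for plans tailored to each parameter value, which is exactly the trade-off this report wants.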