On Fri, Dec 10, 2021 at 2:36 AM Peter Geoghegan <pg@bowt.ie> wrote:
Sounds like a problem with get_actual_variable_range(), which can scan indexes at plan time to determine minimum or maximum values.
This has actually been improved quite a bit since Postgres 10, so, as Jeff said, you might benefit from upgrading to a newer major version. v11 improved things in this exact area.
On my Docker instance, when I execute EXPLAIN, it starts reading a lot of data. The indexes on the biggest table the query touches total 50GB, so my guess is that EXPLAIN is reading those indexes.
I let EXPLAIN in Docker run to completion: it took almost 500 seconds and was reading data the whole time. After I reindexed the biggest table, EXPLAIN finished instantly. Can index corruption cause this?
Note that this started happening in production after we deleted a few million rows from the biggest table.
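For the record, here is a minimal sketch of the kind of situation being described (table and column names are made up for illustration, and this assumes the planner calls get_actual_variable_range() because the query's range predicate falls near the end of the index histogram):

```sql
-- Hypothetical repro: mass-delete the high end of an indexed column,
-- then merely PLAN (not execute) a range query against it.
CREATE TABLE big_table (id bigint PRIMARY KEY, payload text);
INSERT INTO big_table SELECT g, 'x' FROM generate_series(1, 10000000) g;
DELETE FROM big_table WHERE id > 5000000;  -- millions of dead index entries

-- Before VACUUM cleans up, planning alone can be slow:
-- get_actual_variable_range() walks the index past the dead entries
-- looking for the actual maximum id.
EXPLAIN SELECT * FROM big_table WHERE id > 9000000;

VACUUM big_table;  -- REINDEX would also help, by rebuilding the index
EXPLAIN SELECT * FROM big_table WHERE id > 9000000;
```

If this is what is happening, it would explain why REINDEX made EXPLAIN instant even without corruption: the rebuilt index simply no longer contains the dead entries the planner had to skip over.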