Re: increasing effective_cache_size slows down join queries by a factor of 4000x - Mailing list pgsql-general

From Michael Lewis
Subject Re: increasing effective_cache_size slows down join queries by a factor of 4000x
Date
Msg-id CAHOFxGpDDSP2kXazrRjuvTWO8r=KK+474uZFL=MrQCjpFwdfhQ@mail.gmail.com
In response to Re: increasing effective_cache_size slows down join queries by a factor of 4000x  (A Shaposhnikov <artyom@gmail.com>)
List pgsql-general
I would guess that autovacuum is analyzing the table and causing the stats to change, which results in different estimates and therefore different plans. Unless you can make the estimates much more accurate, you won't get far expecting a stable plan that performs well.
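One way to check that suspicion is to look at the cumulative statistics views. A sketch (table names `data`, `data_class`, and `class` assumed from the query below):

```sql
-- When did (auto)analyze last run, and how many rows have
-- changed since? A plan flip right after last_autoanalyze
-- points at shifting statistics.
SELECT relname, last_analyze, last_autoanalyze, n_mod_since_analyze
FROM pg_stat_user_tables
WHERE relname IN ('data', 'data_class', 'class');
```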

How is data_class_pkey defined? If you run a query like this, how far off are the estimates?


explain analyze
select d.time as time, d.id as id, a.query_symbol as query_symbol
from data as d
join data_class as dc ON dc.data_id = d.id
join class as a ON dc.class_id = a.id
where d.id > 205284974
order by d.id
limit 1000;

If you run 'analyze data( id );' and then run the query again, do you get a better estimate? Have you tried raising default_statistics_target? Are you running the default value for random_page_cost with SSDs?
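Those three knobs can be sketched as follows (values are illustrative, not a recommendation; 100 is the default statistics target and 4.0 the default random_page_cost):

```sql
-- Refresh per-column statistics on the join/filter column.
ANALYZE data (id);

-- Optionally raise the sample size for just that column,
-- then re-analyze; higher targets give finer histograms.
ALTER TABLE data ALTER COLUMN id SET STATISTICS 1000;
ANALYZE data (id);

-- On SSDs, random_page_cost is commonly lowered from the
-- default of 4.0 so index scans are costed more realistically.
SET random_page_cost = 1.1;
```

SET changes the value only for the current session; to make it stick, use ALTER SYSTEM or postgresql.conf.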

I'm seeing Index Only Scan nodes but a high number of heap fetches, so it seems you would benefit from a vacuum to update the pg_class.relallvisible value.
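A sketch of checking and fixing that (table name `data` assumed):

```sql
-- VACUUM updates the visibility map; index-only scans can then
-- skip heap fetches for all-visible pages.
VACUUM (VERBOSE) data;

-- relallvisible close to relpages means the visibility map is
-- well populated and heap fetches should drop.
SELECT relname, relpages, relallvisible
FROM pg_class
WHERE relname = 'data';
```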
