> On Wed, 14 Jun 2023 at 07:28, Patrick O'Toole <patrick.otoole@sturdy.ai> wrote:
> > Maybe we are barking up the wrong tree with the previous questions. Are there other configuration parameters we should consider first to improve performance in situations like the one illustrated?
> random_page_cost and effective_cache_size are the main settings which will influence plan A vs plan B. Larger values of effective_cache_size will have the planner apply more seq_page_costs to the index scan.
Squeezing otherwise-random page costs towards seq_page_cost is what bitmap scans do, and what large index scans with high pg_stats.correlation do. But effective_cache_size does something else: it squeezes the per-page costs towards zero, not towards seq_page_cost. That is surely not accurate, as the costs of locking the buffer mapping partition, finding the buffer (or reading it from the kernel cache if not found), possibly faulting the buffer from main memory into on-CPU memory, pinning the buffer, and read-locking it are certainly well above zero, even if not nearly as high as seq_page_cost. I'd guess they are truly about 2 to 5 times cpu_tuple_cost per buffer. But zero is what they currently get; there is no knob to twist to change that.
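
You can watch that squeeze happen with something like the sketch below (the "orders" table and "customer_id" index are invented for illustration). Re-planning the same query under two settings changes only the estimate, not the execution:

    -- Plan with the current setting, noting the index scan's total cost.
    EXPLAIN SELECT * FROM orders WHERE customer_id BETWEEN 1000 AND 2000;

    -- Raise effective_cache_size for this session only, then re-plan.
    -- A larger value makes the planner assume more of the index's pages
    -- are already cached, shrinking the per-page charge towards zero, so
    -- the estimated cost of the index scan drops.
    SET effective_cache_size = '64GB';
    EXPLAIN SELECT * FROM orders WHERE customer_id BETWEEN 1000 AND 2000;

    RESET effective_cache_size;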
> Lower values of effective_cache_size will mean more pages will be assumed to cost random_page_cost.
Sure, but it addresses the issue only obliquely (as does raising random_page_cost), not directly. So the changes you need to make to them will be large, and will likely make other things worse.
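
If you do experiment with large changes, one way to keep them from making everything else worse is to scope them to a single transaction with SET LOCAL, so no other session or later query sees them. The values and query here are placeholders, not recommendations:

    BEGIN;
    -- SET LOCAL reverts automatically at COMMIT or ROLLBACK, so the
    -- rest of the system keeps its normal settings.
    SET LOCAL random_page_cost = 1.1;
    SET LOCAL effective_cache_size = '48GB';
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id BETWEEN 1000 AND 2000;
    COMMIT;

That at least lets you see whether the planner would pick the plan you want under different costing, before committing to a cluster-wide change.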