
From: Mats Olsen
Subject: Re: Query Performance / Planner estimate off
Msg-id: 61c9fd35-bdb8-3fe6-bda0-81c7e6a9eaf8@duneanalytics.com
In response to: Re: Query Performance / Planner estimate off (Sebastian Dressler <sebastian@swarm64.com>)
List: pgsql-performance


On 10/21/20 2:38 PM, Sebastian Dressler wrote:
> Hi Mats,

> On 20. Oct 2020, at 11:37, Mats Julian Olsen <mats@duneanalytics.com> wrote:

>> [...]

>> 1) Vanilla plan (16 min): https://explain.depesz.com/s/NvDR
>> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK
>> 3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx
>>
>> How can I get Postgres not to loop over 12M rows?

> I looked at the plans and your config, and I have some thoughts:

> - The row estimate is off, as you possibly noticed. This can possibly be solved by raising `default_statistics_target` to e.g. 2500 (we typically use that) and running ANALYZE.
I've `set default_statistics_target=2500` and run ANALYZE on both tables involved; unfortunately the plan is the same. The columns we join on are hashes, and we expect very few duplicates in the tables, so I think extended statistics (storing most common values and histogram bounds) aren't useful for this kind of data. Would you say the same?
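For reference, the experiment looks roughly like this (a minimal sketch; the table names are placeholders for the two tables in the join):

```sql
-- Raise the statistics target for this session, then re-sample both tables.
-- "transactions" and "events" stand in for our actual table names.
SET default_statistics_target = 2500;
ANALYZE transactions;
ANALYZE events;

-- A higher target can also be persisted per column on the join key:
-- ALTER TABLE events ALTER COLUMN evt_tx_hash SET STATISTICS 2500;
```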

> - I however think that the misestimate might be caused by `evt_tx_hash` being of type `bytea`. I believe that PG cannot estimate this very well for JOINs and will rather pick row counts that are too low. Hence the nested loop is picked, and there might be no way around this. I have experienced similar things when applying JOINs on VARCHAR with e.g. more than 3 fields for comparison.

This is very interesting; I have never heard of issues with using `bytea` for joins. Our entire database is filled with them, as we deal with hashes of different lengths. In fact, I would estimate that 60% of our columns are `bytea`. My intuition says it's better to store the hashes as byte arrays rather than `text` fields, since you can compare the raw bytes directly without encoding first. Do you have any references for this?

Alternatively, since I know the length of the hashes in advance, I could have used `varchar(n)`, but I don't think there are any gains to be had in Postgres by doing that? Something like `bytea(n)` would also have been interesting, had Postgres been able to exploit that information.
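For what it's worth, since `bytea(n)` doesn't exist, the fixed length can at least be enforced with a CHECK constraint (a minimal sketch with made-up names; as far as I know the planner doesn't use this for estimation):

```sql
-- Hypothetical table enforcing 32-byte transaction hashes.
-- octet_length() in a CHECK is the usual stand-in for a "bytea(n)" type.
CREATE TABLE transactions (
    tx_hash      bytea  NOT NULL CHECK (octet_length(tx_hash) = 32),
    block_number bigint NOT NULL
);
```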

> - Other things to look into:
>
>     - work_mem seems too low to me at 56MB; consider raising it into the GB range to avoid disk-based operations
>     - min_parallel_table_scan_size - try 0
>     - parallel_setup_cost (default 1000, maybe try 500)
>     - parallel_tuple_cost (default 1.0, maybe try 0.1)
>     - random_page_cost (as mentioned, consider raising this much higher, e.g. by a factor of 10), or (typically) seq_page_cost can possibly be much lower (0.1, 0.01) depending on your storage

I've tried various settings of these parameters now, and unfortunately the only one that alters the query plan is the last (random_page_cost), which, as far as I understand, also has the side effect of (almost) forcing sequential scans for most queries? Our storage is Google Cloud pd-ssd.
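For concreteness, the experiments look roughly like this, per session (a sketch; the query itself is elided):

```sql
-- Try the suggested knobs for the current session only.
SET work_mem = '2GB';
SET min_parallel_table_scan_size = 0;
SET parallel_setup_cost = 500;
SET parallel_tuple_cost = 0.1;
SET random_page_cost = 40;  -- default is 4.0; raised ~10x as suggested
-- then: EXPLAIN (ANALYZE, BUFFERS) <the join in question>;
```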

Thank you so much for your response; I'm looking forward to keeping the discussion going.


> I hope this helps to get to a parallel plan without setting `enable_nestloop = off`. EXPLAIN should already be enough to see the difference.

> Best,
> Sebastian

> --
> Sebastian Dressler, Solution Architect
> +49 30 994 0496 72 | sebastian@swarm64.com

 


Best,

Mats

