Planner misestimation for JOIN with VARCHAR - Mailing list pgsql-general

From Sebastian Dressler
Subject Planner misestimation for JOIN with VARCHAR
Date
Msg-id FCAF8126-4002-47E5-9ED6-1F04CDE6F18F@swarm64.com
Responses Re: Planner misestimation for JOIN with VARCHAR  (Michael Lewis <mlewis@entrata.com>)
List pgsql-general
Hello,

I have a set of tables which contain user data, and users can choose to have columns as constrained VARCHAR; the limit
is typically 100. While users can also choose from different types, quite often they go the VARCHAR route. Furthermore,
they can pick PKs almost freely. As a result, I quite often see tables with the following DDL:

    CREATE TABLE example_1(
        a VARCHAR(100)
      , b VARCHAR(100)
      , c VARCHAR(100)
      , payload TEXT
    );
    ALTER TABLE example_1 ADD PRIMARY KEY (a, b, c);

Due to processing, these tables sometimes need to be joined together, considering the complete PK. For instance, assume
example_1 and example_2 have the same structure as above. Then, when I do

    SELECT *
    FROM example_1 t1
    INNER JOIN example_2 t2 ON(
          t1.a = t2.a
      AND t1.b = t2.b
      AND t1.c = t2.c
    );

the planner will very likely estimate a single resulting row for this operation. For instance:

     Gather  (cost=1510826.53..3100992.19 rows=1 width=138)
       Workers Planned: 13
       ->  Parallel Hash Join  (cost=1510726.53..3100892.04 rows=1 width=138)
             Hash Cond: (((t1.a)::text = (t2.a)::text) AND ((t1.b)::text = (t2.b)::text) AND ((t1.c)::text = (t2.c)::text))
             ->  Parallel Seq Scan on example_1 t1  (cost=0.00..1351848.61 rows=7061241 width=69)
             ->  Parallel Hash  (cost=1351848.61..1351848.61 rows=7061241 width=69)
                   ->  Parallel Seq Scan on example_2 t2  (cost=0.00..1351848.61 rows=7061241 width=69)
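
If I read the planner's behavior correctly (this is my assumption, not verified), it treats the three equality clauses
as independent and multiplies their selectivities, so the combined estimate quickly rounds down to a single row. The
per-column inputs to that estimate can be checked in pg_stats; a minimal sketch, using the table and column names from
the example above:

    SELECT attname, n_distinct, null_frac
    FROM pg_stats
    WHERE tablename = 'example_1'
      AND attname IN ('a', 'b', 'c');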

This does not create a problem when joining just two tables on their own. However, with a more complex query there will
be more than one single-row estimate. Hence, I typically see a nested loop which eventually takes very long to process.

This runs on PG 12, and I ensured that the tables are analyzed; my default_statistics_target is 2500. However, it seems
that the more VARCHARs are involved in the JOIN, the worse the estimates become. Given the table definition above, I
wonder whether I have overlooked anything in terms of settings or additional indexes which could help here.
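
Would it, for instance, help to raise the per-column statistics targets even further before re-analyzing? Something
along these lines (the value 5000 is just an example):

    ALTER TABLE example_1 ALTER COLUMN a SET STATISTICS 5000;
    ALTER TABLE example_1 ALTER COLUMN b SET STATISTICS 5000;
    ALTER TABLE example_1 ALTER COLUMN c SET STATISTICS 5000;
    ANALYZE example_1;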

Things tried so far without any noticeable change:

- Add an index on top of the whole PK
- Add indexes onto other columns trying to help the JOIN
- Add additional statistics on two related columns (see the sketch after this list)
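
Regarding the last point, this is roughly the kind of statement I mean (the statistics name and the choice of columns
are illustrative):

    CREATE STATISTICS example_1_a_b_stats (ndistinct, dependencies)
        ON a, b FROM example_1;
    ANALYZE example_1;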

Another idea I had was to make use of generated columns, hashing the PK columns together into a BIGINT and solely using
this for the JOIN. However, this would not work when not all columns of the PK are used in the JOIN.


Thanks,
Sebastian

--

Sebastian Dressler, Solution Architect 
+49 30 994 0496 72 | sebastian@swarm64.com 

Swarm64 AS
Parkveien 41 B | 0258 Oslo | Norway
Registered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787
CEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck 

Swarm64 AS Zweigstelle Hive
Ullsteinstr. 120 | 12109 Berlin | Germany
Registered at Amtsgericht Charlottenburg - HRB 154382 B 

