Optimizing a huge_table/tiny_table join - Mailing list pgsql-performance

Subject Optimizing a huge_table/tiny_table join
Msg-id 200605250052.k4P0qrT11965@panix3.panix.com
Responses Re: Optimizing a huge_table/tiny_table join  ("Joshua D. Drake" <jd@commandprompt.com>)
          Re: Optimizing a huge_table/tiny_table join  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-performance



I want to optimize this simple join:

SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id )

huge_table has about 2.5 million records, can be assumed to be fixed,
and has the following index:

CREATE INDEX huge_table_index ON huge_table( UPPER( id ) );

...while tiny_table changes with each user request, and typically will
contain on the order of 100-1000 records.  For this analysis, I put
300 records in tiny_table, resulting in 505 records in the join.

I tried several approaches, listed below in order of increasing speed
of execution (the statements I used are sketched after the list):

1. executed as shown above, with enable_seqscan on: about 100 s.

2. executed as shown above, with enable_seqscan off: about 10 s.

3. executed with a LIMIT 6000 clause added to the SELECT statement, and
   enable_seqscan on: about 5 s.

4. executed with a LIMIT 600 clause added to the SELECT statement, and
   enable_seqscan on: less than 1 s.
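
For concreteness, approaches 2-4 were run roughly as follows (just a
sketch of the statements; the timing harness is omitted):

-- approach 2: discourage sequential scans for the whole session
SET enable_seqscan = off;
SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id );
SET enable_seqscan = on;

-- approaches 3 and 4: leave enable_seqscan on and cap the result set
SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id ) LIMIT 6000;
SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id ) LIMIT 600;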



Clearly, using LIMIT is the way to go.  Unfortunately I *do* want all
the records that would have been produced without the LIMIT clause,
and I don't have a formula for a limit that guarantees this.  I could
use a very large value (e.g. 20x the size of tiny_table, as in
approach 3 above), which would make hitting the limit very unlikely,
but unfortunately the query plan in that case is different from the
one chosen when the limit is just above the expected number of
results (approach 4 above).

The query plan for the fastest approach is this:

                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Limit  (cost=0.01..2338.75 rows=600 width=84)
   ->  Nested Loop  (cost=0.01..14766453.89 rows=3788315 width=84)
         ->  Seq Scan on tiny_table t  (cost=0.00..19676.00 rows=300 width=38)
         ->  Index Scan using huge_table_index on huge_table h  (cost=0.01..48871.80 rows=12628 width=46)
               Index Cond: (upper(("outer".id)::text) = upper((h.id)::text))



How can I *force* this query plan even with a higher limit value?

I found, by dumb trial and error, that in this case the switch happens
at LIMIT 5432, which, FWIW, is about 0.2% of the size of huge_table.
Is there a simpler way to determine this limit (hopefully
programmatically)?
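
The only alternative I can think of is to drop the LIMIT altogether
and push the planner with transaction-scoped switches, along these
lines (just a sketch; I have not verified that this actually
reproduces the nested-loop plan, and other enable_* switches may need
the same treatment):

BEGIN;
SET LOCAL enable_hashjoin = off;  -- nudge the planner away from the hash join
SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id );
COMMIT;

But that feels like a blunt instrument, hence the question above.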


Alternatively, I could set LIMIT to 2x the number of records in
tiny_table; if the number of records returned is *exactly* this
number, I would know that (most likely) some records were left out.
In that case, I could use the fact that, according to the query plan
above, tiny_table is scanned sequentially to infer which of its
records were disregarded when the limit was hit, and then repeat the
query with only those leftover records in tiny_table.
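
In pseudo-SQL, the flow would look roughly like this (the arithmetic
and the bookkeeping of leftover ids would live in the client;
tiny_table_leftover is a hypothetical table the client would
populate):

-- pass 1: n = current number of rows in tiny_table (here n = 300)
SELECT * FROM huge_table h, tiny_table t
 WHERE UPPER( h.id ) = UPPER( t.id )
 LIMIT 600;  -- i.e. 2 * n

-- if exactly 2 * n rows come back, assume the limit was hit and re-run
-- against only the tiny_table rows not yet seen in the output:
SELECT * FROM huge_table h, tiny_table_leftover t
 WHERE UPPER( h.id ) = UPPER( t.id );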

What's your opinion of this strategy?  Is there a good way to improve
it?

Many thanks in advance!

kj

PS:  FWIW, the query plan for the query with LIMIT 6000 is this:

                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Limit  (cost=19676.75..21327.99 rows=6000 width=84)
   ->  Hash Join  (cost=19676.75..1062244.81 rows=3788315 width=84)
         Hash Cond: (upper(("outer".id)::text) = upper(("inner".id)::text))
         ->  Seq Scan on huge_table h  (cost=0.00..51292.43 rows=2525543 width=46)
         ->  Hash  (cost=19676.00..19676.00 rows=300 width=38)
               ->  Seq Scan on tiny_table t  (cost=0.00..19676.00 rows=300 width=38)


