Re: Joining 2 tables with 300 million rows - Mailing list pgsql-performance

From: Manfred Koizar
Subject: Re: Joining 2 tables with 300 million rows
Msg-id: 0osrp1hs1s4vokl16l04u62n1pbsn8j77s@4ax.com
In response to: Joining 2 tables with 300 million rows (Amit V Shah <ashah@tagaudit.com>)
List: pgsql-performance
On Thu, 8 Dec 2005 11:59:24 -0500, Amit V Shah <ashah@tagaudit.com> wrote:
>  CONSTRAINT pk_runresult_has_catalogtable PRIMARY KEY
>(runresult_id_runresult, catalogtable_id_catalogtable, value)

>  ->  Index Scan using runresult_has_catalogtable_id_runresult on runresult_has_catalogtable runresult_has_catalogtable_1  (cost=0.00..76.65 rows=41 width=8) (actual time=0.015..0.017 rows=1 loops=30)
>        Index Cond: (runresult_has_catalogtable_1.runresult_id_runresult = "outer".runresult_id_runresult)
>        Filter: ((catalogtable_id_catalogtable = 54) AND (value = 1))

If I were the planner, I'd use the primary key index: it covers all
three columns referenced above (runresult_id_runresult,
catalogtable_id_catalogtable, value), so both the join condition and
the filter could be satisfied from the index, whereas the
single-column index handles only the join and leaves the filter to be
checked row by row.  You seem to have a redundant(?) index on
runresult_has_catalogtable(runresult_id_runresult).  Dropping it might
help, or it might make things much worse.  But at this stage this is
pure speculation.
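
If you want to see what that extra index actually covers before
touching anything, something like this should do (the index name is
taken from your plan above; treat the DROP as a sketch and try it on
a test copy first):

    -- List every index on the table, with its definition:
    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE tablename = 'runresult_has_catalogtable';

    -- If runresult_has_catalogtable_id_runresult covers only
    -- (runresult_id_runresult), it duplicates the leading column of
    -- the primary key and is a candidate for dropping:
    DROP INDEX runresult_has_catalogtable_id_runresult;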

Give us more information first.  Show us the complete definition
(including *all* indices) of all tables occurring in your query.  What
Postgres version is this?  And please post EXPLAIN ANALYSE output of a
*slow* query.
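
All of that can be pulled straight from psql; a minimal sketch, with
your own table names and the actual slow query substituted:

    -- Server version:
    SELECT version();

    -- Complete definition of a table, including all its indices
    -- (psql meta-command):
    \d runresult_has_catalogtable

    -- Timing of the real, slow query (placeholder query below):
    EXPLAIN ANALYSE
    SELECT ...;
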
Servus
 Manfred
