Tom:
Thank you for your response. The actual table will have 400 million
rows. The last time I created an index on an integer field on a table
that size, the index was too expensive for Postgres to use (the
planner estimated too high a cost for the index scan), so Postgres
reverted to a sequential scan. I would like to figure out a better way
of partitioning my index so that it still remains useful to Postgres.
I would appreciate any tips you can provide in this regard. Thanks.
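
To illustrate what I mean (the table name and the literal value below
are placeholders only; the real schema isn't shown in this thread),
this is the sort of comparison that shows the planner preferring the
sequential scan:

  -- See which plan the planner actually picks for the slow query:
  EXPLAIN ANALYZE
  SELECT count(*) FROM big_table WHERE ock = 2;

  -- Temporarily disable seq scans to see how the index path is costed:
  SET enable_seqscan = off;
  EXPLAIN ANALYZE
  SELECT count(*) FROM big_table WHERE ock = 2;
  RESET enable_seqscan;

If the forced index scan turns out to be faster than the sequential
scan, it may be the cost settings (e.g. random_page_cost,
effective_cache_size) that need adjusting rather than the index
itself.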
Saadat.
On 7/4/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "s anwar" <sanwar@gmail.com> writes:
> > The two queries below require radically different query times, 1600ms vs 10ms:
>
> Try not to be so fancy with a bunch of somewhat-overlapping partial indexes.
> The planner is not so smart as you, and will not always be able to prove
> to itself that it can use these indexes. A single, non-partial index on
> ock would perform at least as well as this hodgepodge.
>
> regards, tom lane
>
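
A minimal sketch of the single, non-partial index suggested above,
using a placeholder table name (only the column name ock appears in
the quoted message):

  CREATE INDEX big_table_ock_idx ON big_table (ock);

  -- Refresh planner statistics so the new index is costed against
  -- current data:
  ANALYZE big_table;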