Re: Dealing with big tables - Mailing list pgsql-performance

From Merlin Moncure
Subject Re: Dealing with big tables
Date
Msg-id b42b73150712031414y7b66a5cn5987e0a26e6b617@mail.gmail.com
In response to Re: Dealing with big tables  ("Mindaugas" <ml@kilimas.com>)
List pgsql-performance
On Dec 2, 2007 7:35 AM, Mindaugas <ml@kilimas.com> wrote:
>   I'd like this query to respond under a specific time limit. 5s now is OK but 50s later for 10000 rows is too slow.
>
>   Also off list I was pointed out about covering indexes in MySQL. But they are not supported in PostgreSQL, aren't
they?

The PostgreSQL architecture does not allow that...the MVCC visibility
information is stored in the heap tuple, not in the index, so the
index alone can't tell whether a row is visible.  An all-index access
strategy would only help anyway if all the information being looked up
was in the index...that probably isn't practical for really big tables
(I think said feature is overrated).  What would help, at least
temporarily, would be to CLUSTER the table on your index so at least
your lookups would have some locality.  Just be aware that CLUSTER is
a one-shot deal...if there are changes to the table it will gradually
fall out of order.  Otherwise it would be an index-organized table,
which is what everyone wants...some of the big commercial databases
have this.
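
For example, a rough sketch (the table and index names here are made
up, substitute your own; this is the CLUSTER syntax of the 8.x series):

    -- hypothetical names; adjust to your schema
    CREATE INDEX events_lookup_idx ON events (device_id, ts);
    CLUSTER events_lookup_idx ON events;  -- rewrites the table in index order
    ANALYZE events;                       -- refresh planner statistics afterwards

Note that CLUSTER takes an exclusive lock and rewrites the whole
table, so on a big table you'd want to schedule it for a quiet period.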

In any case, you should still at least post the EXPLAIN ANALYZE
output so we can make sure something funky isn't going on.  Barring
that, I would be looking at some type of caching to optimize around
the problem...or perhaps revisit the table design?
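
Something along these lines, with your actual query in place of this
stand-in, and paste the complete output (it shows estimated vs. actual
row counts and the time spent in each plan node):

    EXPLAIN ANALYZE
    SELECT *
      FROM events
     WHERE device_id = 12345
     ORDER BY ts DESC
     LIMIT 100;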

merlin
