Re: Asking advice on speeding up a big table - Mailing list pgsql-general

From hubert depesz lubaczewski
Subject Re: Asking advice on speeding up a big table
Date
Msg-id 9e4684ce0604110052h71314e84qf49f2b8260151321@mail.gmail.com
In response to Re: Asking advice on speeding up a big table  (felix@crowfix.com)
Responses Re: Asking advice on speeding up a big table  (felix@crowfix.com)
List pgsql-general
On 4/10/06, felix@crowfix.com <felix@crowfix.com> wrote:
It is, but it is only 32 msec because the query has already run and
cached the useful bits.  And since I have random values, as soon as I
look up some new values, they are cached and no longer new.


according to my experience i would vote for a too-slow filesystem
What I was hoping for was some general insight from the EXPLAIN
ANALYZE, that maybe extra or different indices would help, or if there
is some better method for finding one row from 100 million.  I realize
I am asking a vague question which probably can't be solved as
presented.

hmm .. perhaps you can try to denormalize the table, and then use multicolumn indices?
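a minimal sketch of what i mean (the real table and column names aren't in this thread, so these are made up): fold the looked-up columns into one flat table, then put a multicolumn index on the columns you search by, so a single index scan finds the row instead of several joins:

```sql
-- hypothetical flat table; "a" and "b" stand in for whatever
-- columns the 100M-row lookups actually filter on
CREATE TABLE vals_flat (
    a     integer NOT NULL,
    b     integer NOT NULL,
    value text
);

-- multicolumn index: one index scan can satisfy WHERE a = ... AND b = ...
CREATE INDEX vals_flat_a_b_idx ON vals_flat (a, b);

-- EXPLAIN ANALYZE should then show a single Index Scan on vals_flat_a_b_idx
EXPLAIN ANALYZE
SELECT value FROM vals_flat WHERE a = 42 AND b = 7;
```

note the column order matters: an index on (a, b) helps queries filtering on "a" alone or on both, but not on "b" alone.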

depesz
