Re: Asking advice on speeding up a big table - Mailing list pgsql-general

From felix@crowfix.com
Subject Re: Asking advice on speeding up a big table
Date
Msg-id 20060410212021.GA28712@crowfix.com
In response to Re: Asking advice on speeding up a big table  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Asking advice on speeding up a big table  ("hubert depesz lubaczewski" <depesz@gmail.com>)
List pgsql-general
On Mon, Apr 10, 2006 at 02:51:30AM -0400, Tom Lane wrote:
> felix@crowfix.com writes:
> > I have a simple benchmark which runs too slow on a 100M row table, and
> > I am not sure what my next step is to make it faster.
>
> The EXPLAIN ANALYZE you showed ran in 32 msec, which ought to be fast
> enough for anyone on that size table.  You need to show us data on the
> problem case ...

It is, but it is only 32 msec because the query has already run and
cached the useful bits.  And since I use random values, as soon as I
look up some new values, they are cached and no longer new.

What I was hoping for was some general insight from the EXPLAIN
ANALYZE output: whether extra or different indices would help, or
whether there is some better method for finding one row out of 100
million.  I realize I am asking a vague question which probably can't
be solved as presented.
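(For what it's worth, the kind of thing I've been trying looks roughly
like the following -- table and column names are made up here, since I
haven't posted the schema in this message, and the timing caveat still
applies: once a value has been looked up, its pages are cached.)

```sql
-- Hypothetical 100M-row table keyed by a text value.  A btree index
-- on the looked-up column is what lets a single-row fetch use an
-- index scan instead of a sequential scan.
CREATE INDEX idx_bigtable_val ON bigtable (val);

-- Refresh planner statistics so the index is considered.
ANALYZE bigtable;

-- Time the lookup with a value that has never been queried before,
-- so the numbers reflect actual disk reads rather than pages already
-- sitting in shared_buffers or the OS cache.
EXPLAIN ANALYZE
SELECT * FROM bigtable WHERE val = 'some-new-random-value';
```

The only reliable way I know to measure truly cold performance is to
restart the server (and flush the OS cache) between runs; repeating
the same value just measures the cache.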

--
            ... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
     Felix Finch: scarecrow repairman & rocket surgeon / felix@crowfix.com
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o
