Re: speeding up a query on a large table - Mailing list pgsql-general

From Mike Rylander
Subject Re: speeding up a query on a large table
Date
Msg-id b918cf3d05081714557259d7eb@mail.gmail.com
In response to Re: speeding up a query on a large table  (Manfred Koizar <mkoi-pg@aon.at>)
Responses Re: speeding up a query on a large table  (Kevin Murphy <murphy@genome.chop.edu>)
List pgsql-general
On 8/17/05, Manfred Koizar <mkoi-pg@aon.at> wrote:
> On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy
> <murphy@genome.chop.edu> wrote:
> > and because the number of possible search terms is so large, it
> >would be nice if the entire index could somehow be preloaded into memory
> >and encouraged to stay there.
>
> Postgres does not have such a feature and I wouldn't recommend to mess
> around inside Postgres.  You could try to copy the relevant index
> file(s) to /dev/null to populate the OS cache ...

That actually works fine.  When I had big problems with a large GiST
index, I just used cat to dump it to /dev/null and the OS grabbed it.
Of course, that was on Linux, so YMMV.
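For anyone wanting to try this, here is a minimal sketch of the trick on a
Linux box.  The database name, index name, data directory, and the OID
values are made-up placeholders, not from this thread; the catalog queries
use only pg_database and pg_class, which exist on the server versions
discussed here (newer releases also offer pg_relation_filepath() to get
the path in one step).

```shell
# Sketch: warm the OS page cache for an index by reading its file(s).
# All names and numbers below are hypothetical placeholders.

# 1. Find the on-disk name of the index:
#    psql -d mydb -Atc "SELECT oid FROM pg_database WHERE datname = 'mydb'"
#    psql -d mydb -Atc "SELECT relfilenode FROM pg_class WHERE relname = 'my_gist_idx'"

DATADIR=/var/lib/postgresql/data   # assumption: your data directory
DBOID=16384                        # from the pg_database query above
FILENODE=16723                     # from the pg_class query above

# 2. Read the file, plus any 1 GB segment files (.1, .2, ...),
#    so the kernel keeps the pages cached.  The glob matches nothing
#    if the placeholders are wrong, hence the guard.
for f in "$DATADIR/base/$DBOID/$FILENODE"*; do
    [ -e "$f" ] && cat "$f" > /dev/null
done
```

Note this only populates the OS cache, not PostgreSQL's shared buffers,
and the kernel is still free to evict those pages later under memory
pressure, so the effect is best-effort.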

--
Mike Rylander
mrylander@gmail.com
GPLS -- PINES Development
Database Developer
http://open-ils.org
