Re: speeding up a query on a large table - Mailing list pgsql-general

From Kevin Murphy
Subject Re: speeding up a query on a large table
Date
Msg-id 4303EBC8.2090503@genome.chop.edu
In response to Re: speeding up a query on a large table  (Mike Rylander <mrylander@gmail.com>)
List pgsql-general
Mike Rylander wrote:

> On 8/17/05, Manfred Koizar <mkoi-pg@aon.at> wrote:
>
>> On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy
>> <murphy@genome.chop.edu> wrote:
>>
>>> and because the number of possible search terms is so large, it
>>> would be nice if the entire index could somehow be preloaded into memory
>>> and encouraged to stay there.
>>
>> You could try to copy the relevant index
>> file(s) to /dev/null to populate the OS cache ...
>
> That actually works fine.  When I had big problems with a large GiST
> index I just used cat to dump it at /dev/null and the OS grabbed it.
> Of course, that was on linux so YMMV.
Thanks, Manfred & Mike.  That is a very nice solution.  And just for the
sake of the archive: you can find the data file name(s) for an index or
table by looking up pg_class.relfilenode where pg_class.relname is the
name of the relation, then running, e.g.: sudo -u postgres find
/usr/local/pgsql/data -name "somerelfilenode*".
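The whole recipe can be sketched as follows ("mydb" and "myindex" are placeholder names, 12345 stands in for whatever relfilenode step 1 returns, and the paths assume a default /usr/local/pgsql/data layout):

```shell
# 1. Look up the on-disk file name of the index (or table):
psql -d mydb -Atc \
    "SELECT relfilenode FROM pg_class WHERE relname = 'myindex';"
# suppose this prints 12345

# 2. Locate the file(s).  Relations larger than 1 GB are split into
#    segments named 12345, 12345.1, 12345.2, ...
sudo -u postgres find /usr/local/pgsql/data -name "12345*"

# 3. Read the file(s) to pull their pages into the OS cache:
sudo -u postgres sh -c \
    'cat /usr/local/pgsql/data/base/*/12345* > /dev/null'
```

Note that this only warms the OS page cache, not PostgreSQL's shared buffers, and the kernel is free to evict those pages again under memory pressure.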

-Kevin Murphy


