Re: Is there a way to run tables in RAM? - Mailing list pgsql-general

From Merlin Moncure
Subject Re: Is there a way to run tables in RAM?
Date
Msg-id b42b73150607140829i32fe87f1p4cbd80034edd089@mail.gmail.com
In response to Re: Is there a way to run tables in RAM?  ("Karen Hill" <karen_hill22@yahoo.com>)
List pgsql-general
On 13 Jul 2006 14:32:42 -0700, Karen Hill <karen_hill22@yahoo.com> wrote:
>
> Roy Souther wrote:
> > I would like to know if there is any way to move a section of some tables
> > into RAM to work on them.
> >
> > I have a large table, about 700MB or so and growing. I also have a bizarre
> > collection of queries that run hundreds of queries on a small section of
> > this table. These queries only look at about 100 or so records at a time
> > and run hundreds of queries on the data looking for patterns. This
> > causes the program to run very slowly because of hard drive access time.
> > Sometimes it needs to write changes back to the records it is working
> > with.

> If you are using linux, create a ramdisk and then add a Postgresql
> tablespace to that.
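
A minimal sketch of that suggestion (the mount point, tablespace name, and
table name are placeholders; note that anything on a tmpfs mount is lost
at reboot or crash):

    -- first, as root:  mount -t tmpfs -o size=1G tmpfs /mnt/pg_ram
    --                  chown postgres:postgres /mnt/pg_ram
    CREATE TABLESPACE ram_space LOCATION '/mnt/pg_ram';
    ALTER TABLE big_table SET TABLESPACE ram_space;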

I don't think this will help much.  While the ramdisk might be better
than the o/s file cache, it just limits the o/s's ability to give memory
to other things.  Any modern o/s essentially has a giant ram disk that
runs all the time: it dynamically resizes it depending on what is going
on, keeps frequently used portions of a file in ram, and pushes less
frequently used portions out to disk to free up memory for sorting, etc.

If fast write access is needed (no syncs), just create a temp table.
Otherwise, just let the operating system do its thing.  If the table is
thrashing, you have two choices: optimize the database to be more
cache friendly, or buy more ram.
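
A minimal sketch of the temp table route (table and column names here are
made up); temp tables are session-local and not WAL-logged, so writes to
them skip the sync overhead:

    -- pull the ~100-row working set into a temp table
    CREATE TEMP TABLE work_chunk AS
        SELECT * FROM big_table WHERE group_id = 42;
    -- run the pattern-searching queries against work_chunk, then write
    -- any changes back to big_table in one pass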

merlin
