Thread: Is there a way to run tables in RAM?
I have a large table, about 700 MB and growing. I also have a bizarre collection of queries that run hundreds of times against a small section of this table. They only look at about 100 or so records at a time, searching for patterns, and the program runs very slowly because of hard drive access time. Sometimes it needs to write changes back to the records it is working with.
Is there any way I can move a few hundred records of the table into RAM and work on them there? It would be much faster.
Is there any way to create a temporary table that will only exist in RAM and never be written to the hard drive? Or do temporary tables already do that?
If I can use a temporary table this way, is there any simple way to merge the changes back into the original table?
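For reference, a minimal sketch of the temp-table-and-merge-back approach. The table and column names (`big_table`, `id`, `val`, `batch_id`) are made up, since the real schema isn't shown in the thread. Note that temp tables are not WAL-logged, but they are not guaranteed to stay in RAM either: they can spill to disk once they exceed `temp_buffers`.

```sql
-- Pull the working set into a temp table (private to this session,
-- never WAL-logged, dropped automatically at session end).
CREATE TEMP TABLE work_set AS
    SELECT * FROM big_table WHERE batch_id = 42;

-- ... run the pattern-hunting queries against work_set here ...

-- Merge any changed rows back into the original table.
UPDATE big_table b
   SET val = w.val
  FROM work_set w
 WHERE b.id = w.id;

DROP TABLE work_set;
```

Raising `temp_buffers` in the session before creating the temp table makes it more likely the whole working set stays memory-resident.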
On Thu, Jul 13, 2006 at 11:44:43AM -0600, Roy Souther wrote:
> I would like to know if there is anyway to move a section of some tables
> into RAM to work on them. [...]

Increase your shared buffers so it can hold all the data at once. Even then I'm surprised, because any excess data should have been buffered by the OS. Have you looked at what the traffic actually is that is being generated?

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org>  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
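For a 700 MB table, "increase your shared buffers" would mean something like the following in postgresql.conf. The value is purely illustrative, and in the 8.1-era servers discussed here `shared_buffers` is an integer count of 8 kB buffers and requires a server restart to change:

```
# postgresql.conf -- illustrative value, tune to the machine's RAM
shared_buffers = 100000     # 100000 x 8 kB = ~780 MB, enough for the 700 MB table
```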
> Is there anyway to create a temporary table that will only exist in RAM
> and not be written to the hard drive? Or do temporary tables already do
> that?

That is what temp tables do... but you should only use them for throwaway data :)

Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
On 7/13/06, Roy Souther <roy@silicontao.com> wrote:
> I would like to know if there is anyway to move a section of some tables
> into RAM to work on them.

The operating system should cache the table if there is memory available. While the disk cache is somewhat slower than a raw memory read, this way you still have memory free for lots of other things, like sorting.

merlin
I can’t say for certain from the detail you’ve given, but partial indexes may be an acceptable solution to your problem.
http://www.postgresql.org/docs/8.1/interactive/indexes-partial.html
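To make the suggestion concrete: if the pattern-hunting queries always target the same slice of the table, a partial index covers just those rows, so it stays small enough to remain cached. The names and predicate below are hypothetical, since the actual WHERE clauses aren't shown:

```sql
-- Index only the hot slice of the table; queries whose WHERE clause
-- implies batch_id = 42 can use this small, cache-friendly index.
CREATE INDEX big_table_active_idx
    ON big_table (record_id)
    WHERE batch_id = 42;
```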
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Roy Souther
Sent: Thursday, July 13, 2006 12:45 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Is there a way to run tables in RAM?
I would like to know if there is anyway to move a section of some tables into RAM to work on them. [...]
Roy Souther wrote:
> Is there anyway that I can move a few hundred records of the table into
> RAM and work on it there, it would be much faster. [...]

If you are using Linux, create a ramdisk and then add a PostgreSQL tablespace to that.

regards,
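A sketch of that ramdisk-tablespace idea, with illustrative paths and sizes. The big caveat: anything placed on tmpfs is lost on reboot or crash, so only rebuildable scratch data belongs there.

```shell
# As root: create a RAM-backed filesystem owned by the postgres user.
mkdir -p /mnt/pgram
mount -t tmpfs -o size=1G,uid=postgres,gid=postgres tmpfs /mnt/pgram

# Then, as a database superuser in psql (tablespaces exist since 8.0):
#   CREATE TABLESPACE ramspace LOCATION '/mnt/pgram';
#   CREATE TABLE scratch (...) TABLESPACE ramspace;
```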
Roy Souther wrote:
> Is there anyway that I can move a few hundred records of the table into
> RAM and work on it there, it would be much faster. [...]

What if you COPY OUT, sort(1) by the most common fields in the WHERE clauses, and COPY IN the data? That way, you get locality of data, and so the desired data is more likely to be in the OS block cache?

--
Ron Johnson, Jr.
Jefferson LA USA
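A tiny standalone demo of the clustering effect Ron describes, using a made-up two-column CSV in place of a real COPY dump. Sorting the dump on the column the queries filter on leaves matching rows physically adjacent, so a range of them fits in far fewer disk blocks:

```shell
# Stand-in for "COPY big_table TO 'dump.csv' ..." output.
printf 'c,3\na,1\nc,9\na,2\nb,5\n' > dump.csv

# Sort on the first (filter) column; rows with equal keys end up adjacent.
sort -t, -k1,1 dump.csv > clustered.csv

cat clustered.csv
```

PostgreSQL can also do this reordering in place: the 8.1-era syntax is `CLUSTER indexname ON tablename`, which rewrites the table in the order of the given index.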
On 13 Jul 2006 14:32:42 -0700, Karen Hill <karen_hill22@yahoo.com> wrote:
> If you are using linux, create a ramdisk and then add a Postgresql
> tablespace to that.

I don't think this will help much. While the ramdisk might be better than the OS file cache, it just limits the OS's ability to give memory to other things. Any modern OS essentially has a giant RAM disk running all the time. It dynamically resizes depending on what is going on, keeps frequently used portions of files in RAM, and drops less frequently used portions to free up memory for sorting, etc.

If fast write access is needed (no syncs), just create a temp table. Otherwise, let the operating system do its thing. If the table is thrashing, you have two choices: optimize the database to be more cache friendly, or buy more RAM.

merlin