Re: Is There Any Way .... - Mailing list pgsql-performance

From Douglas J. Trainor
Subject Re: Is There Any Way ....
Msg-id 8bafaa62907deae2460ba2ecbb022741@transborder.net
In response to Re: Is There Any Way ....  (Ron Peacetree <rjpeace@earthlink.net>)
List pgsql-performance
Ron Peacetree sounds like someone talking out of his _AZZ_.
He can save his unreferenced flapdoodle for his SQL Server
clients.  Maybe he will post references so that we may all
learn at the feet of Master Peacetree.  :-)

     douglas

On Oct 4, 2005, at 7:33 PM, Ron Peacetree wrote:

> pg is _very_ stupid about caching.  Almost all of the caching is left
> to the OS, and it's that way by design (as post after post by TL has
> pointed out).
>
> That means pg has almost no ability to take application domain
> specific knowledge into account when deciding what to cache.
> There are plenty of papers on caching out there showing that
> context-dependent knowledge leads to more effective caching
> algorithms than context-independent ones are capable of.
>
> (Which means said design choice is a Mistake, but unfortunately
> one with too much inertia behind it currently to change easily.)
>
> Under these circumstances, it is quite possible that an expert class
> human could optimize memory usage better than the OS + pg.
>
> If one is _sure_ they know what they are doing, I'd suggest using
> tmpfs or the equivalent for critical read-only tables.  For "hot"
> tables that are rarely written to and where data loss would not be
> a disaster, "tmpfs" can be combined with an asynchronous writer
> process that pushes updates to HD.  Just remember that a power hit
> means any updates not yet pushed to disk are lost.
>
> The (much) more expensive alternative is to buy SSD(s) and put
> the critical tables on it at load time.
>
> Ron
>
>
> -----Original Message-----
> From: "Jim C. Nasby" <jnasby@pervasive.com>
> Sent: Oct 4, 2005 4:57 PM
> To: Stefan Weiss <spaceman@foo.at>
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Is There Any Way ....
>
> On Tue, Oct 04, 2005 at 12:31:42PM +0200, Stefan Weiss wrote:
>> On 2005-09-30 01:21, Lane Van Ingen wrote:
>>>   (3) Assure that a disk-based table is always in memory (outside of
>>>       keeping it in memory buffers as a result of frequent activity
>>>       which would prevent LRU operations from taking it out) ?
>>
>> I was wondering about this too. IMO it would be useful to have a way
>> to tell PG that some tables were needed frequently, and should be
>> cached if possible. This would allow application developers to
>> consider joins with these tables as "cheap", even when querying on
>> columns that are not indexed. I'm thinking about smallish tables like
>> users, groups, *types, etc which would be needed every 2-3 queries,
>> but might be swept out of RAM by one large query in between. Keeping
>> a table like "users" on a RAM fs would not be an option, because the
>> information is not volatile.
>
> Why do you think you'll know better than the database how frequently
> something is used? At best, your guess will be correct and PostgreSQL
> (or the kernel) will keep the table in memory. Or, your guess is wrong
> and you end up wasting memory that could have been used for something
> else.
>
> It would probably be better if you describe why you want to force this
> table (or tables) into memory, so we can point you at more appropriate
> solutions.

