Re: Working on huge RAM based datasets - Mailing list pgsql-performance

From Merlin Moncure
Subject Re: Working on huge RAM based datasets
Date
Msg-id 6EE64EF3AB31D5448D0007DD34EEB34101AECA@Herge.rcsinc.local
In response to Working on huge RAM based datasets  ("Andy Ballingall" <andy_ballingall@bigfoot.com>)
List pgsql-performance
Andy wrote:
> Whether the OS caches the data or PG does, you still want it cached.
> If your sorting backends gobble up the pages that otherwise would be
> filled with the database buffers, then your postmaster will crawl, as
> it'll *really* have to wait for stuff from disk. In my scenario, you'd
> spec the machine so that there would be plenty of memory for
> *everything*.

That's the whole point: memory is a limited resource.  If pg is
crawling, then the problem is simple: you need more memory.  The real
question is: is it postgresql's responsibility to manage that resource?
Pg is a data management tool, not a memory management tool.  The same
'let's manage everything' argument also frequently gets brought up wrt
file i/o, because people assume the o/s sucks at file management.  In
reality, operating systems are quite good at it, and by going through
the generic interface the administrator is free to choose the file
system that best suits the needs of the application.
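This delegation shows up directly in the tuning knobs.  A sketch of a
postgresql.conf fragment from that era (values are illustrative
assumptions for a box with plenty of RAM, not tuning advice):

```ini
# Hypothetical postgresql.conf fragment -- numbers are examples only.

shared_buffers = 8192           # PG's own buffer pool (in 8 kB pages) stays
                                # modest; repeated reads are still served from
                                # the OS page cache
effective_cache_size = 131072   # a *hint* to the planner (in 8 kB pages) about
                                # how much the OS cache is likely to hold --
                                # this setting allocates no memory itself
sort_mem = 65536                # per-sort working memory in kB (renamed
                                # work_mem in later releases); this is what
                                # the "sorting backends" above consume
```

The point of the split is that postgres only *describes* the OS cache
via effective_cache_size rather than trying to own all that memory
itself.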

At some point, hard disks will be replaced by solid state memory
technologies...do you really want to recode your memory manager when
this happens because all your old assumptions are no longer correct?

Merlin
