Re: Redundant file server for postgres - Mailing list pgsql-general

From Scott Marlowe
Subject Re: Redundant file server for postgres
Msg-id dcc563d10803161220s7dcc97f3n796abfe7e0737960@mail.gmail.com
In response to Re: Redundant file server for postgres  (Karl Denninger <karl@denninger.net>)
List pgsql-general
On Sun, Mar 16, 2008 at 1:02 PM, Karl Denninger <karl@denninger.net> wrote:
>
>  The key issue on RAM is not whether the database will fit into RAM (for
>  all but the most trivial applications, it will not)

I would argue that many applications where the data fits into memory
are not trivial, especially if we're talking about the working set.
If you operate on a 1 Gig working set out of a terabyte-range
reporting database, then that working set fits into (or damned well
should :) ) memory.

Also, many applications with small datasets can be quite complex, like
control systems.  The actual amount of data might be 100 Meg, but the
throughput might be very high and require a battery-backed cache
because of all the writes going in.

So there are plenty of times your data will fit in memory.
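
As a rough sanity check (this is just a sketch against the standard
stats views; what counts as a "good" number depends on your workload),
the buffer cache hit ratio gives you an idea of whether that working
set really is being served from memory:

    -- Fraction of block reads served from shared_buffers for this database;
    -- close to 1.0 suggests the working set is living in memory.
    SELECT datname,
           blks_hit::float / NULLIF(blks_hit + blks_read, 0) AS cache_hit_ratio
    FROM pg_stat_database
    WHERE datname = current_database();

Bear in mind that only counts shared_buffers hits; the OS cache soaks
up a fair bit of the rest.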

>  It is whether the key INDICES will fit into RAM.  If they will, then you
>  get a HUGE win in performance.

When they don't, you often need to start looking at some form of
partitioning if you want to keep good performance.  By partitioning
I don't just mean using inherited tables; it could also include things
like horizontal partitioning of the data across different pg servers.
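
For example (table and partition names here are made up, just to
illustrate the idea), you can first check how big the indexes on a
table actually are, and if they clearly won't fit in RAM, the old
inheritance-style partitioning looks roughly like this:

    -- How big are the indexes on a given table?
    SELECT indexrelname,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_stat_user_indexes
    WHERE relname = 'measurements';

    -- Parent table plus one child per month (hypothetical names):
    CREATE TABLE measurements (
        id         bigint,
        logged_at  timestamp NOT NULL,
        reading    numeric
    );

    CREATE TABLE measurements_2008_03 (
        CHECK (logged_at >= DATE '2008-03-01' AND logged_at < DATE '2008-04-01')
    ) INHERITS (measurements);

You still need a rule or trigger to route inserts into the right
child, and with constraint_exclusion turned on the planner can skip
children whose CHECK constraints rule them out.  The same basic idea
extends to splitting the data across separate pg servers, which is
the horizontal partitioning I mentioned above.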

Note that I'm not disagreeing with everything you said, just a slight
clarification on data sets that do / don't fit into memory.
