Re: WAL archiving to network drive - Mailing list pgsql-general

From Glen Parker
Subject Re: WAL archiving to network drive
Msg-id 48AE0F14.5060505@nwlink.com
In response to Re: WAL archiving to network drive  (Greg Smith <gsmith@gregsmith.com>)
Responses Re: WAL archiving to network drive
List pgsql-general
Greg Smith wrote:
> On Wed, 20 Aug 2008, Glen Parker wrote:
> The database will continue accumulating WAL segments it can't recycle if
> the archiver keeps failing, which can cause the size of the pg_xlog
> directory (often mounted into a separate, smaller partition or disk) to
> increase dramatically.  You do not want to be the guy who caused the
> database to go down because the xlog disk filled after some network
> mount flaked out.  I've seen that way too many times in WAN environments
> where the remote location was unreachable for days, due to natural
> disaster for example, and since under normal operation pg_xlog never got
> very big it wasn't sized for that.
>
> It will also slow things down a bit under heavy write loads, as every
> segment change will result in creating a new segment file rather than
> re-using an old one.

So you advocate archiving the WAL files from a small xlog volume to a
larger local volume.  Why not just make the xlog volume large enough to
handle overruns, since you obviously have the space?  The cost of copying
each WAL file from one place to another on the local machine FAR outweighs
the extra overhead created when WAL files must be created rather than
recycled.
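
For illustration, a direct-copy archive_command doesn't have to be more
than a short script like the sketch below (Python; the mount point,
paths, and script name are invented, not my actual setup):

#!/usr/bin/env python
# Rough sketch of a direct-copy archive_command.  Paths and the mount
# point are invented for illustration.  Configured roughly as:
#   archive_command = '/usr/local/bin/archive_wal.py "%p" "%f"'
import os, shutil, sys

ARCHIVE_DIR = "/mnt/wal_archive"   # assumed network mount

def main():
    src, name = sys.argv[1], sys.argv[2]   # %p (path) and %f (file name)
    dest = os.path.join(ARCHIVE_DIR, name)
    if os.path.exists(dest):
        sys.exit(1)                # refuse to overwrite an archived segment
    shutil.copy2(src, dest + ".tmp")
    os.rename(dest + ".tmp", dest) # only expose complete copies

if __name__ == "__main__":
    try:
        main()
    except Exception:
        # Any failure (mount gone, disk full) exits nonzero, so Postgres
        # keeps the segment in pg_xlog and retries later.
        sys.exit(1)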

Also, you mention days of downtime, natural disasters, and a WAN.  My
DBMS and archive machines are in the same room.  If I had to deal with
different locations, I'd build more safety into the system.  In fact, in
a way, I have.  My WALs are archived immediately to another machine,
where they are (hours later) sent to tape in batches, and the tapes are
then carried off-site; emulating, to some extent, your decoupled system.
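
The later tape step can be as simple as a cron job on the archive
machine that sweeps the older segments and writes them out in one
batch; a rough sketch along those lines (device and directory names
invented, not my actual script):

#!/usr/bin/env python
# Rough sketch of the batched tape step on the archive machine, run
# from cron every few hours.  Device and directory names are invented.
import glob, os, subprocess, sys, time

INCOMING = "/srv/wal_incoming"     # where freshly archived segments land
TAPE_DEV = "/dev/nst0"             # non-rewinding tape device (assumed)
MIN_AGE  = 4 * 3600                # only sweep segments older than 4 hours

def main():
    now = time.time()
    batch = sorted(f for f in glob.glob(os.path.join(INCOMING, "*"))
                   if now - os.path.getmtime(f) > MIN_AGE)
    if not batch:
        return
    # Write the whole batch as one tar archive; keep the files on disk
    # unless the tape write reported success.
    if subprocess.call(["tar", "-cf", TAPE_DEV] + batch) != 0:
        sys.exit(1)
    for f in batch:
        os.remove(f)

if __name__ == "__main__":
    main()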


> OK, maybe you're smarter than that and used a separate script.  DBAs are
> also not happy changing a script that gets called by the database every
> couple of minutes, and as soon as there's more than one piece involved
> it can be difficult to do an atomic update of said script.


Yes, I'm smarter than that, and I'm also the DBA, so I don't mind much ;-)
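
For what it's worth, swapping a new copy of the archive script into
place doesn't have to be fragile either: stage the new file on the same
filesystem and rename() it over the old one, which is atomic.  A rough
sketch, with invented paths:

#!/usr/bin/env python
# Sketch: swap a new version of the archive script into place.  A
# rename() within one filesystem is atomic, so the server only ever
# runs either the old script or the complete new one.  Paths invented.
import os, shutil

NEW  = "/usr/local/src/archive_wal.py"        # freshly tested copy
LIVE = "/usr/local/bin/archive_wal.py"        # what archive_command calls

shutil.copy2(NEW, LIVE + ".tmp")   # stage on the same filesystem as LIVE
os.chmod(LIVE + ".tmp", 0o755)
os.rename(LIVE + ".tmp", LIVE)     # atomic swap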


-Glen



