Thanks for the tips Ragnar... although I wish you would explain more about
FIFO. The only thing that rings a bell here is First In, First Out from the
inventory chapter of my accounting textbook...
----- Original Message -----
From: "Ragnar Kjørstad" <postgres@ragnark.vestdata.no>
To: "root" <root@dennis.veritime.com>
Cc: <pgsql-admin@postgresql.org>
Sent: Thursday, November 23, 2000 8:28 AM
Subject: Re: [ADMIN] Redundant databases/real-time backup
> On Thu, Nov 16, 2000 at 11:14:15AM -0500, root wrote:
> > It is necessary to create/alter the postgresql startup script. I have
> > included a copy of mine. The database to be mirrored must start up with
> > logging enabled:
> >
> > su -l postgres -c '/usr/bin/postmaster -i -D/home/postgres/data >/home/postgres/data/query_log 2>&1 &'
>
> But why do you do 2>&1 in your startup script? That sends everything on
> stderr into the query_log along with stdout - I don't think that is a good
> idea.
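
If the point is simply to keep error output out of the query log, one
alternative might be to give stderr its own file; the error_log name below is
only a suggestion:

su -l postgres -c '/usr/bin/postmaster -i -D/home/postgres/data >/home/postgres/data/query_log 2>/home/postgres/data/error_log &'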
>
> > Other points:
> > The query_log can get large rather quickly. You cannot simply issue a
> > rm -rf query_log, touch query_log and chmod. Even with the appropriate
> > permissions the daemon will not write to a new file; for some reason you
> > must restart postgres using the startup script. Perhaps one of the
> > developers has an answer to this problem.....
>
> This is because the postgres daemon keeps the file open, and will keep
> on writing to the old file even after you delete it. Most daemons will
> reopen their log-files if you send them a HUP signal, so a possible
> solution is:
>
> mv logfile logfile.1
> kill -HUP <pid>
> process logfile.1
> rm logfile.1
>
> The order is important to make sure you don't have a race-condition.
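
Spelled out, and with the paths and the pid lookup only as placeholders (and
assuming the postmaster really does reopen its log on HUP, which is only
offered above as a "possible solution"), that sequence might look like:

LOG=/home/postgres/data/query_log
mv $LOG $LOG.1                          # rename first, so no entries are lost
kill -HUP `head -1 /home/postgres/data/postmaster.pid`   # ask the daemon to reopen its log
# ... process $LOG.1 here, e.g. replay it against the mirror ...
rm $LOG.1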
>
> A different alternative is to use a FIFO instead of a normal file, and
> process it continuously instead of in batch jobs.
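
For what it's worth, FIFO here means a named pipe created with mkfifo, not the
accounting term: a reader can consume the writer's output line by line as it is
produced, instead of letting it pile up in a file. A very rough sketch, with
the paths and the processing step purely illustrative:

mkfifo /home/postgres/data/query_log        # the "log file" is now a pipe
# start the postmaster exactly as before, redirecting into the pipe,
# then keep a reader attached to it:
while read line; do
    echo "$line" >> /somewhere/replay.sql   # or feed it straight to the mirror
done < /home/postgres/data/query_log

Note that if no reader is attached, the postmaster's writes to the pipe will
block (or fail with a broken pipe), so a real setup needs a bit more care than
this.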
>
>
>
>
> A totally different approach would be to have a "sql-proxy" relay all
> requests to two (or more) different servers to always keep them in sync.
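
No concrete sql-proxy is named here, so the crudest stand-in is simply to apply
the same statements to both servers yourself; db1, db2 and statements.sql below
are purely hypothetical:

# a real proxy would sit between the clients and the servers and duplicate
# each statement as it arrives; this only shows the "same SQL to both" idea
psql -h db1 -f statements.sql mydb
psql -h db2 -f statements.sql mydb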
>
>
> --
> Ragnar Kjørstad
>