"Jeff Eckermann" <jeff_eckermann@yahoo.com> wrote in message
news:20040311233802.42300.qmail@web20802.mail.yahoo.com...
> Why? In particular, why do you want to concurrently
> run local and server versions of the database?
>
> No need, if you are using bound forms. On the one
> hand you will always have the primary key value handy,
> which may help you avoid some error cases. But then
> you need a network round trip to get it, for every
> insert. I doubt that it's worth the trouble.
>
Thanks for the help; I'll use bound forms and pass-through queries.
The replication question is probably best restated as a backup question: we
will run pg only on the main (and only) server, which gets a nightly DAT
backup. pg would also be available on the Windows boxes for temporary use in
case of hardware failure, my thought being to keep the previous evening's
pg_dump on hand if needed. That should be sufficient for our small office.
pg_dump piped through gzip could be run each night as a cron job (sketched
below), with the output then burned to DVD with BackupEdge. BackupEdge would
recognize the .gz file as already compressed and perform bit-level
verification without further compression, so the archive would remain
readable with standard tar.
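The cron entry itself should be a one-liner, something like the following in
the postgres user's crontab (the database name, paths, and schedule here are
invented for the example; the full path to pg_dump is spelled out because
cron runs with a minimal PATH):

    # run nightly at 1:30 AM from the postgres user's crontab
    30 1 * * * /usr/local/pgsql/bin/pg_dump officedb | gzip > /var/backup/officedb.sql.gz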
The resulting tar archive could be copied to one of the Windows boxes, the
pg_dump file extracted with the Cygwin gzip and tar utilities, and then
loaded into pg for emergency use.
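From a Cygwin shell, the recovery steps would be roughly these (archive and
database names again invented; since the dump is plain SQL, it restores
through psql rather than pg_restore):

    tar -xf backup.tar                # pull the dump out of the archive
    gunzip officedb.sql.gz            # decompress to plain SQL
    createdb officedb                 # the dump restores into an empty database
    psql -d officedb -f officedb.sql  # replay the SQL to rebuild the data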
Another option would be to copy the gzipped pg_dump file over the network,
using a scheduled Windows Backup job or a simple batch file copy with
Windows Task Scheduler. The pg_dump file would be written to a directory on
the server accessible via Samba.
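The batch file would be little more than a single copy command; assuming the
share and file names below (which are only placeholders), it could be saved
as a .bat and run by a nightly Task Scheduler job:

    rem pull last night's dump from the server's Samba share
    copy /Y \\server\backup\officedb.sql.gz C:\pgbackup\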
I'm inclined to do both, since each is very easy to set up. Any thoughts?
Thanks,
David P. Lurie