Re: pg_restore - Mailing list pgsql-general

From Mike Orr
Subject Re: pg_restore
Msg-id BANLkTi=koiyFRm+4p9AQ7Tunyvow=iyaRw@mail.gmail.com
In response to Re: pg_restore  (Merlin Moncure <mmoncure@gmail.com>)
List pgsql-general
On Fri, Apr 1, 2011 at 2:39 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> On Wed, Mar 30, 2011 at 3:56 PM, Mike Orr <sluggoster@gmail.com> wrote:
>> I'm converting a MySQL webapp to PostgreSQL. I have a backup server
>> which is refreshed twice daily with mysqldump/mysql and has a
>> continuously-running copy of the webapp. I want to replicate this with
>> pg_dump/pg_restore.  Ideally I'd like to restore just a few tables,
>> without stopping the webapp and closing/reopening its connections. Is
>> this possible with pg_restore?
>>
>> MySQL simply locks the tables, drops/recreates them, loads the data,
>> and unlocks the tables. Other connections have to wait but they don't
>> have to be closed/reopened. The PostgreSQL manual recommends restoring
>> into an empty database using template0, which would require first
>> closing the other connections and then dropping the database. It would
>> also take unnecessary time to recreate the database and tables that
>> aren't changing. So I'm wondering if there's a less obtrusive way to
>> refresh the data.
>>
>> The tables to be updated do have 1:many relationships. How would I
>> restore all of them at once? List them all as '-t' args to the same
>> pg_restore command?
>
> pg_dump and pg_restore can do it.  pg_dump -Fc is just like a regular
> dump, but you can pull out specific tables by name.  Or you can just
> take a regular backup and pg_dump out specific tables with the
> proper options set in a small script.
>
> Because pg is transactional, you can do this any time of the day
> without blocking users (although there may be a small performance
> hit).
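
Thanks. For reference, that selective pg_restore route would look
roughly like this for my tables (the database name is a placeholder,
and I haven't tested whether multiple -t switches work in a single
pg_restore call):

"""
# custom-format dump; individual tables can be pulled out of it later
pg_dump -Fc hotline > /tmp/hotline.dump

# restore just the data for the tables that change (they would need to
# be truncated first; run once per table if multiple -t isn't supported)
pg_restore --data-only -t incidents -t entries -t locator -t events \
    -d hotline /tmp/hotline.dump
"""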

I'm tentatively going with a psql script:

"""
\set QUIET 1
\timing off
BEGIN;
TRUNCATE incidents, entries, locator, events;  -- one statement so the FK-linked tables are emptied together
\i /tmp/sync-pg/hotline.sql
\i /tmp/sync-pg/locator.sql
\i /tmp/sync-pg/events.sql
COMMIT;
ANALYZE;
"""

And a shell script that can do the saving, transfer, and loading in
discrete parts, so I can keep the same script on both hosts and run the
parts individually for testing or together for production:

"""
usage: daily-pg [-d] [-t HOSTNAME] [-l] [-k] [-h]

Synchronize Postgres data to the backup server. Options:
  -d: Dump the data to /tmp/sync-pg
  -t HOSTNAME: Rsync the data to the specified host
  -l: Load the data from /tmp/sync-pg
  -k: Keep the dump directory after loading (otherwise delete it)
  -h: Print this help message and exit

If no options, do nothing.
"""

Dumping also compresses the files, and loading uncompresses them if
they're compressed.
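
Roughly the shape of it so far (database name, table-to-file grouping,
and the load-script path are placeholders, and I haven't hardened the
error handling yet):

"""
#!/bin/sh
# daily-pg -- sketch; DB, file grouping, and LOADSCRIPT are placeholders
set -e
DUMPDIR=/tmp/sync-pg
DB=hotline
LOADSCRIPT=/usr/local/etc/sync-pg.psql

dump() {
    mkdir -p "$DUMPDIR"
    # data-only dumps, one file per table group, then compress
    pg_dump --data-only -t incidents -t entries "$DB" > "$DUMPDIR/hotline.sql"
    pg_dump --data-only -t locator "$DB" > "$DUMPDIR/locator.sql"
    pg_dump --data-only -t events "$DB" > "$DUMPDIR/events.sql"
    gzip -f "$DUMPDIR"/*.sql
}

transfer() {
    rsync -a --delete "$DUMPDIR/" "$1:$DUMPDIR/"
}

load() {
    # uncompress if the files arrived gzipped
    gunzip -f "$DUMPDIR"/*.sql.gz 2>/dev/null || true
    psql -q --set ON_ERROR_STOP=1 -f "$LOADSCRIPT" "$DB"
    if [ -z "$KEEP" ]; then rm -rf "$DUMPDIR"; fi
}

DO_DUMP= DO_LOAD= KEEP= TARGET=
while getopts dt:lkh opt; do
    case $opt in
        d) DO_DUMP=1 ;;
        t) TARGET=$OPTARG ;;
        l) DO_LOAD=1 ;;
        k) KEEP=1 ;;
        h) echo "usage: daily-pg [-d] [-t HOSTNAME] [-l] [-k] [-h]"; exit 0 ;;
    esac
done

if [ -n "$DO_DUMP" ]; then dump; fi
if [ -n "$TARGET" ]; then transfer "$TARGET"; fi
if [ -n "$DO_LOAD" ]; then load; fi
"""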

--
Mike Orr <sluggoster@gmail.com>
