Backup/Restore of single table in multi TB database

From: "John Smith"
Hi,

I have a large database (multiple TBs) where I'd like to be able to do
a backup/restore of just a particular table (call it foo).  Because
the database is large, the time for a full backup would be
prohibitive.  Also, whatever backup mechanism we do use needs to keep
the system online (i.e., users must still be allowed to update table
foo while we're taking the backup).

After reading the documentation, it seems like the following might
work.  Suppose the database has two tables foo and bar, and we're only
interested in backing up table foo:

1. Call pg_start_backup

2. Use the pg_class table in the catalog to get the data file names
for tables foo and bar.

3. Copy the system files and the data file for foo.  Skip the data file for bar.

4. Call pg_stop_backup()

5. Copy WAL files generated between 1. and 4. to another location.

Later, if we want to restore the database somewhere with just table
foo, we just use postgres's normal recovery mechanism and point it at
the files we backed up in 2. and the WAL files from 5.
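
Concretely, I'm imagining something along these lines (the label, paths,
database name, and OIDs below are placeholders, and I've left out details
like tablespaces and config files):

  # 1. put the cluster into backup mode
  psql -c "SELECT pg_start_backup('foo_backup');" mydb

  # 2. find foo's data file, relative to $PGDATA/base/<database oid>/
  psql -t -c "SELECT relfilenode FROM pg_class WHERE relname = 'foo';" mydb

  # 3. copy the shared/system files and foo's segment(s); skip bar's
  cp -r $PGDATA/global /backup/
  cp $PGDATA/base/<db_oid>/<foo_relfilenode>* /backup/base/<db_oid>/

  # 4. take the cluster out of backup mode
  psql -c "SELECT pg_stop_backup();" mydb

  # 5. copy the WAL segments generated between steps 1 and 4
  cp /path/to/wal_archive/* /backup/wal/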

Does anyone see a problem with this approach (e.g., correctness,
performance, etc.)?  Or is there perhaps an alternative approach using
some other postgresql mechanism that I'm not aware of?

Thanks!
- John

Re: Backup/Restore of single table in multi TB database

From: "David Wilson"
On Wed, May 7, 2008 at 4:02 PM, John Smith <sodgodofall@gmail.com> wrote:

>  Does anyone see a problem with this approach (e.g., correctness,
>  performance, etc.)?  Or is there perhaps an alternative approach using
>  some other postgresql mechanism that I'm not aware of?

Did you already look at and reject pg_dump for some reason? You can
restrict it to specific tables to dump, and it can work concurrently
with a running system. Your database is large, but how large are the
individual tables you're interested in backing up? pg_dump will be
slower than a file copy, but may be sufficient for your purpose and
will have guaranteed correctness.
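
For example, something like this (table and database names here are just
placeholders) dumps that one table while the server stays up, and the
result can be loaded elsewhere with psql:

  pg_dump -t foo -f foo.sql mydb
  psql -f foo.sql some_other_db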

I'm fairly certain that you have to be very careful about doing simple
file copies while the system is running, as the files may end up out
of sync based on when each individual one is copied. I haven't done it
myself, but I do know that there are a lot of caveats that someone
with more experience doing that type of backup can hopefully point you
to.

--
- David T. Wilson
david.t.wilson@gmail.com

Re: Backup/Restore of single table in multi TB database

From: "Joshua D. Drake"
On Wed, 7 May 2008 13:02:57 -0700
"John Smith" <sodgodofall@gmail.com> wrote:

> Hi,
>
> I have a large database (multiple TBs) where I'd like to be able to do
> a backup/restore of just a particular table (call it foo).  Because
> the database is large, the time for a full backup would be
> prohibitive.  Also, whatever backup mechanism we do use needs to keep
> the system online (i.e., users must still be allowed to update table
> foo while we're taking the backup).

> Does anyone see a problem with this approach (e.g., correctness,
> performance, etc.)?  Or is there perhaps an alternative approach using
> some other postgresql mechanism that I'm not aware of?

Why are you not just using pg_dump -t? Are you saying that a pg_dump
backup of the single table takes too long? Perhaps you could use Slony
with table sets?
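
For instance (table and database names are placeholders), a custom-format
dump keeps things to the one table and lets you restore it selectively
later:

  pg_dump -Fc -t foo -f foo.dump mydb
  pg_restore -t foo -d some_other_db foo.dump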

Joshua D. Drake



--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




Re: Backup/Restore of single table in multi TB database

From: "Joshua D. Drake"
On Wed, 7 May 2008 16:09:45 -0400
"David Wilson" <david.t.wilson@gmail.com> wrote:

> I'm fairly certain that you have to be very careful about doing simple
> file copies while the system is running, as the files may end up out
> of sync based on when each individual one is copied. I haven't done it
> myself, but I do know that there are a lot of caveats that someone
> with more experience doing that type of backup can hopefully point you
> to.

Besides the fact that it seems to be a fairly hacky thing to do... it
is going to be fragile. Consider:

(serverA) create table foo();
(serverB) create table foo();

(serverA) Insert stuff;
(serverA) Alter table foo add column;

Oops...

(serverA) alter table foo drop column;

You now have a different version of the files on serverA than on
serverB, regardless of the table name.

Joshua D. Drake




--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




Re: Backup/Restore of single table in multi TB database

From: Simon Riggs
On Wed, 2008-05-07 at 13:02 -0700, John Smith wrote:

> I have a large database (multiple TBs) where I'd like to be able to do
> a backup/restore of just a particular table (call it foo).  Because
> the database is large, the time for a full backup would be
> prohibitive.  Also, whatever backup mechanism we do use needs to keep
> the system online (i.e., users must still be allowed to update table
> foo while we're taking the backup).

Have a look at pg_snapclone. It's specifically designed to significantly
improve dump times for very large objects.

http://pgfoundry.org/projects/snapclone/

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


Re: Backup/Restore of single table in multi TB database

From: Tom Lane
"John Smith" <sodgodofall@gmail.com> writes:
> After reading the documentation, it seems like the following might
> work.  Suppose the database has two tables foo and bar, and we're only
> interested in backing up table foo:

> 1. Call pg_start_backup

> 2. Use the pg_class table in the catalog to get the data file names
> for tables foo and bar.

> 3. Copy the system files and the data file for foo.  Skip the data file for bar.

> 4. Call pg_stop_backup()

> 5. Copy WAL files generated between 1. and 4. to another location.

> Later, if we want to restore the database somewhere with just table
> foo, we just use postgres's normal recovery mechanism and point it at
> the files we backed up in 2. and the WAL files from 5.

> Does anyone see a problem with this approach

Yes: it will not work, not even a little bit, because the WAL files will
contain updates for all the tables.  You can't just not have the tables
there during restore.

Why are you not using pg_dump?

            regards, tom lane

Re: Backup/Restore of single table in multi TB database

From: "John Smith"
Hi Tom,

Actually, I forgot to mention one more detail in my original post.
For the table that we're looking to back up, we also want to be able to
do incremental backups.  pg_dump will cause the entire table to be
dumped out each time it is invoked.

With the pg_{start,stop}_backup approach, incremental backups could be
implemented by, for example, rsync'ing the data files and then applying
the incremental WAL.  So if table foo didn't change very much since the
first backup, we would only need to rsync a small amount of data plus
the WAL files to get an incremental backup for table foo.
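
Roughly, between the pg_start_backup() and pg_stop_backup() calls
(paths, host, and names are placeholders):

  # only the changed parts of foo's segments get re-sent
  rsync -a --inplace $PGDATA/base/<db_oid>/<foo_relfilenode>* backuphost:/backup/base/<db_oid>/
  # then ship the WAL generated during the backup window
  rsync -a /path/to/wal_archive/ backuphost:/backup/wal/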

Besides picking up data on unwanted tables from the WAL (e.g., bar
would appear in our recovered database even though we only wanted
foo), do you see any other problems with this pg_{start,stop}_backup
approach?  Admittedly, it does seem a bit hacky.

Thanks,
- John

On Wed, May 7, 2008 at 2:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "John Smith" <sodgodofall@gmail.com> writes:
>  > After reading the documentation, it seems like the following might
>  > work.  Suppose the database has two tables foo and bar, and we're only
>  > interested in backing up table foo:
>
>  > 1. Call pg_start_backup
>
>  > 2. Use the pg_class table in the catalog to get the data file names
>  > for tables foo and bar.
>
>  > 3. Copy the system files and the data file for foo.  Skip the data file for bar.
>
>  > 4. Call pg_stop_backup()
>
>  > 5. Copy WAL files generated between 1. and 4. to another location.
>
>  > Later, if we want to restore the database somewhere with just table
>  > foo, we just use postgres's normal recovery mechanism and point it at
>  > the files we backed up in 2. and the WAL files from 5.
>
>  > Does anyone see a problem with this approach
>
>  Yes: it will not work, not even a little bit, because the WAL files will
>  contain updates for all the tables.  You can't just not have the tables
>  there during restore.
>
>  Why are you not using pg_dump?
>
>                         regards, tom lane
>

Re: Backup/Restore of single table in multi TB database

From: Simon Riggs
On Wed, 2008-05-07 at 15:24 -0700, John Smith wrote:

> Actually, I forgot to mention one more detail in my original post.
> For the table that we're looking to back up, we also want to be able to
> do incremental backups.  pg_dump will cause the entire table to be
> dumped out each time it is invoked.
>
> With the pg_{start,stop}_backup approach, incremental backups could be
> implemented by, for example, rsync'ing the data files and then applying
> the incremental WAL.  So if table foo didn't change very much since the
> first backup, we would only need to rsync a small amount of data plus
> the WAL files to get an incremental backup for table foo.
>
> Besides picking up data on unwanted tables from the WAL (e.g., bar
> would appear in our recovered database even though we only wanted
> foo), do you see any other problems with this pg_{start,stop}_backup
> approach?  Admittedly, it does seem a bit hacky.

You wouldn't be the first to ask to restore only a single table.

I can produce a custom version that does that if you like, though I'm
not sure that feature would be accepted into the main code.

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


Ubuntu question

From: Q Master
Hello,

I had PostgreSQL 7.4 on Ubuntu, and over a year ago I moved to 8.2.
Until now I was backing up my DB via pgAdmin remotely from Windows, but
now I want to do it from the Ubuntu server.

When I run pg_dump, it says that the database is 8.2 but the tool is
7.4. My question is: where in the world is the pg_dump for 8.2? I
can't find it.

pg_dump and pg_dumpall are both in /usr/bin, but where are the 8.2 ones?

TIA,
Q



Re: Ubuntu question

From: Martijn van Oosterhout
On Thu, May 08, 2008 at 01:52:17AM -0500, Q Master wrote:
> I had PostgreSQL 7.4 on Ubuntu, and over a year ago I moved to 8.2.
> Until now I was backing up my DB via pgAdmin remotely from Windows, but
> now I want to do it from the Ubuntu server.

I suggest looking at the README.Debian for PostgreSQL; it contains
important information you need to understand how multiple concurrently
installed versions work.

> When I run pg_dump, it says that the database is 8.2 but the tool is
> 7.4. My question is: where in the world is the pg_dump for 8.2? I
> can't find it.
>
> pg_dump and pg_dumpall are both in /usr/bin, but where are the 8.2 ones?

First, check what you have installed with pg_lsclusters (this will give
you the port number). Normally you can specify the cluster directly to
pg_dump, but if you want the actual binary, go to:

/usr/lib/postgresql/<version>/bin/pg_dump.
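
For example (the cluster name, port, and database name below are just
guesses at your setup):

  pg_lsclusters                                # lists versions, cluster names and ports
  pg_dump --cluster 8.2/main mydb > mydb.sql   # via the Debian/Ubuntu wrapper
  /usr/lib/postgresql/8.2/bin/pg_dump -p <port> mydb > mydb.sql   # or the 8.2 binary directly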

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while
> boarding. Thank you for flying nlogn airlines.


Re: Ubuntu question

From: Justin

Q Master wrote:
> Hello,
>
> I had PostgreSQL 7.4 on Ubuntu, and over a year ago I moved to 8.2.
> Until now I was backing up my DB via pgAdmin remotely from Windows, but
> now I want to do it from the Ubuntu server.
>
> When I run pg_dump, it says that the database is 8.2 but the tool is
> 7.4. My question is: where in the world is the pg_dump for 8.2? I
> can't find it.
>
> pg_dump and pg_dumpall are both in /usr/bin, but where are the 8.2 ones?
You need to download the pgcontrib package from the Ubuntu package
site. I use the GNOME package manager on Ubuntu to handle this; it also
automatically handles any updates that apply.

>
> TIA,
> Q
>
>
>





Re: Backup/Restore of single table in multi TB database

From: Francisco Reyes
Simon Riggs wrote:
> Have a look at pg_snapclone. It's specifically designed to significantly
> improve dump times for very large objects.
>
> http://pgfoundry.org/projects/snapclone/
>
Also, in case the original poster is not aware, pg_dump can back up a
single table: just add -t <table name>.



Does pg_snapclone work mostly on large rows, or will it also be faster
than pg_dump for narrow tables?

Re: Backup/Restore of single table in multi TB database

From: Simon Riggs
On Fri, 2008-07-18 at 20:25 -0400, Francisco Reyes wrote:

> Does pg_snapclone work mostly on large rows, or will it also be faster
> than pg_dump for narrow tables?

It allows you to run your dump in multiple pieces. That's got nothing to
do with narrow or wide.

--
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support