On Wed, 7 May 2008 13:02:57 -0700
"John Smith" <sodgodofall@gmail.com> wrote:
> Hi,
>
> I have a large database (multiple TBs) where I'd like to be able to do
> a backup/restore of just a particular table (call it foo). Because
> the database is large, the time for a full backup would be
> prohibitive. Also, whatever backup mechanism we do use needs to keep
> the system online (i.e., users must still be allowed to update table
> foo while we're taking the backup).
>
> Does anyone see a problem with this approach (e.g., correctness,
> performance, etc.)? Or is there perhaps an alternative approach using
> some other postgresql mechanism that I'm not aware of?
Why are you not just using pg_dump -t? Are you saying that a pg_dump of
the single table takes too long? Perhaps you could use Slony with table
sets?
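
For what it's worth, a minimal sketch of the pg_dump route (assuming a
database named "mydb" here; pg_dump reads from a single MVCC snapshot,
so users can keep updating foo while the dump runs):

    # dump only table foo, custom format so it can be restored selectively
    pg_dump -Fc -t foo mydb > foo.dump

    # later, restore just that table into the database
    pg_restore -d mydb -t foo foo.dump

Whether that is fast enough on a multi-TB system depends mostly on the
size of foo itself, not the whole database, since -t touches only the
named table.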
Joshua D. Drake
--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate