Thread: Backup strategies
I'm running a medium-traffic Web site that has been up for a few years, and which regularly uses about four PostgreSQL databases. I'm currently running 8.2, although I'm planning to upgrade to 8.3 in the coming week or two, in part because of the problems I'm having. The databases consume a combined total of 35 GB.

Like a good boy, I've been backing the system up overnight, when we have less traffic, since the site began to run. I use pg_dump, saving both schemas and data for a full restore in case of failure. pg_dump typically executes from another machine on the local network; if it would help to run pg_dump locally, I'm certainly open to doing that.

Over the last month or two, database performance has become increasingly problematic during the hours that I run pg_dump. Moreover, the database has grown to the point where it takes a good number of hours to dump everything to disk. This ends up interfering with our users on the East Coast of the United States when they access our site early in the morning.

One possible solution is for me to back up our main database more often and our development database less often. But given the growth in our traffic (about double what it was 12 months ago), I have to assume this isn't a long-term solution. I'm also considering moving our oldest data into a separate database (sort of a data warehouse), so that the production database becomes smaller, and thus easier to back up.

But before I do any of these things, I want to hear what others have discovered in terms of high-performance backups. Is there a way to stop pg_dump from locking up the database so much? Is there a knob I can turn to do a low-priority backup while the live site is running? Is there a superior backup strategy to pg_dump every 24 hours?

Thanks in advance for any advice you can offer!

Reuven

--
Reuven M. Lerner -- Web development, consulting, and training
Mobile: +972-54-496-8405 * US phone: 847-230-9795
Skype/AIM: reuvenlerner
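[For reference, a minimal sketch of a lower-impact nightly dump. The host, user, database, and path names are placeholders. pg_dump's custom format (-Fc) compresses as it writes, which cuts both disk and network bytes; nice/ionice lower the priority of the dump process on the machine running it, though note they do nothing about the read load on the database server itself:]

    # Run at low CPU/IO priority on the backup host; -Fc writes
    # pg_dump's compressed custom format, restorable with pg_restore.
    nice -n 19 ionice -c3 \
        pg_dump -h dbhost -U backup_user -Fc -Z 6 maindb \
        > /backups/maindb-$(date +%Y%m%d).dump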
>>> "Reuven M. Lerner" <reuven@lerner.co.il> wrote: > Is there a superior backup strategy than pg_dump > every 24 hours? You should probably consider: http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html -Kevin
Reuven M. Lerner wrote:

> But before I do any of these things, I want to hear what others have
> discovered in terms of high-performance backups. Is there a way to stop
> pg_dump from locking up the database so much? Is there a knob I can
> turn to do a low-priority backup while the live site is running? Is
> there a superior backup strategy to pg_dump every 24 hours?

If you are sysadmin-minded and your operating system & file system support snapshots, an easy solution (and the one I use) is to create a read-only snapshot of the file system containing the (binary) database files and back that up. The approach has some benefits:

* It won't interfere with "normal" database operations (no locking; though I'm not sure that locking is your problem here, as pgsql uses MVCC)
* It works at disk speeds instead of converting data back to SQL for storage
* Restoring the database works automagically - no need to import the data back from SQL
* It's convenient to back up snapshots with the usual file system backup utilities. Tar works fine.

It also has some significant disadvantages:

* The binary database representation is usually much larger than the SQL text one (because of indexes and internal structures). OTOH you can easily use tar with gzip to compress it on the fly.
* Technically, the snapshot you're taking represents a corrupted database, which is repaired automatically when it's restored. It's as if you pulled the plug on the server while it was working - PostgreSQL will repair itself.
* You cannot restore the database to a different version of PostgreSQL. The same rules apply as if upgrading - for example, you can run data from 8.3.0 on 8.3.3, but not from 8.2.0 on 8.3.0.

Warning: DO NOT do on-the-fly binary backups without snapshots. Archiving the database directory with tar on a regular file system, while the server is running, will result in an archive that most likely won't work when restored.
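[A sketch of that snapshot approach on Linux with LVM; the volume group, volume names, sizes, and paths are all hypothetical, and the snapshot must atomically cover everything the cluster uses - $PGDATA, pg_xlog, and any tablespaces:]

    # Create a snapshot of the volume holding $PGDATA; the size is the
    # copy-on-write budget for changes made while the snapshot exists.
    lvcreate --snapshot --size 5G --name pgsnap /dev/vg0/pgdata

    # Mount the frozen image read-only and archive it; gzip shrinks
    # the binary representation considerably.
    mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
    tar czf /backups/pgdata-$(date +%Y%m%d).tar.gz -C /mnt/pgsnap .

    # Clean up promptly; snapshots degrade write performance while
    # they exist.
    umount /mnt/pgsnap
    lvremove -f /dev/vg0/pgsnap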
Ivan Voras wrote:

> Warning: DO NOT do on-the-fly binary backups without snapshots.
> Archiving the database directory with tar on a regular file system,
> while the server is running, will result in an archive that most likely
> won't work when restored.

You can do non-snapshot-based filesystem level backups with pg_start_backup() and pg_stop_backup() as part of a PITR setup. See:

http://www.postgresql.org/docs/8.3/static/continuous-archiving.html

That's the setup I use, with a full backup taken weekly and WAL files archived from then until the next full backup. There is always at least one full backup at any time in case a backup fails, and I can roll back in time for a minimum of a week if anything goes wrong.

I also include plain SQL dumps from pg_dump in the nightly disaster recovery backups.

--
Craig Ringer
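[A minimal sketch of such a weekly base backup, assuming WAL archiving is already configured and that the cluster lives in /var/lib/pgsql/data (both placeholders):]

    # Tell the server a base backup is starting; the label is arbitrary.
    psql -U postgres -c "SELECT pg_start_backup('weekly');"

    # Copy the cluster directory. Files may change while tar reads
    # them -- that is expected and harmless in this mode. pg_xlog can
    # be skipped because the archived WAL replaces it on restore.
    tar czf /backups/base-$(date +%Y%m%d).tar.gz \
        --exclude=pg_xlog /var/lib/pgsql/data

    # Mark the backup as finished so the end-of-backup WAL is archived.
    psql -U postgres -c "SELECT pg_stop_backup();"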
Ivan Voras wrote:

> Warning: DO NOT do on-the-fly binary backups without snapshots.
> Archiving the database directory with tar on a regular file system,
> while the server is running, will result in an archive that most likely
> won't work when restored.

Even if you do a "pg_start_backup/pg_stop_backup" as specified here:

http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html
=> Making a Base Backup

??

It worked when I tested it, but I may just have been darn lucky.

--
Jesper
Jesper Krogh wrote:

> Ivan Voras wrote:
>> Warning: DO NOT do on-the-fly binary backups without snapshots.
>> Archiving the database directory with tar on a regular file system,
>> while the server is running, will result in an archive that most
>> likely won't work when restored.
>
> Even if you do a "pg_start_backup/pg_stop_backup" as specified here:
>
> http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html
> => Making a Base Backup
>
> ??
>
> It worked when I tested it, but I may just have been darn lucky.

No, it should be OK - I just didn't catch up with the times :) At least that's my interpretation of this paragraph in the documentation:

"""Some backup tools that you might wish to use emit warnings or errors if the files they are trying to copy change while the copy proceeds. This situation is normal, and not an error, when taking a base backup of an active database; so you need to ensure that you can distinguish complaints of this sort from real errors..."""

It looks like PostgreSQL freezes the state of the "data" directory in this case (and new data goes only to the transaction log - pg_xlog), which effectively creates an application-level snapshot. Good to know.
Ivan Voras wrote:

> Jesper Krogh wrote:
>> [...]
>> It worked when I tested it, but I may just have been darn lucky.
>
> No, it should be OK - I just didn't catch up with the times :) At least
> that's my interpretation of this paragraph in the documentation:
>
> """Some backup tools that you might wish to use emit warnings or errors
> if the files they are trying to copy change while the copy proceeds.
> This situation is normal, and not an error, when taking a base backup
> of an active database; so you need to ensure that you can distinguish
> complaints of this sort from real errors..."""
>
> It looks like PostgreSQL freezes the state of the "data" directory in
> this case (and new data goes only to the transaction log - pg_xlog),
> which effectively creates an application-level snapshot. Good to know.

Nope. Even files in the data directory change. That's why the documentation warns against tools that emit errors for files that change during the copy.

--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com
Guillaume Lelarge wrote:

> Ivan Voras wrote:
>> It looks like PostgreSQL freezes the state of the "data" directory in
>> this case (and new data goes only to the transaction log - pg_xlog),
>> which effectively creates an application-level snapshot. Good to know.
>
> Nope. Even files in the data directory change. That's why the
> documentation warns against tools that emit errors for files that
> change during the copy.

OK, thanks. This is a bit off-topic, but if it's not how I imagine it, then how is it implemented?
On Wed, 15 Oct 2008, Ivan Voras wrote:

>> Nope. Even files in the data directory change. That's why the
>> documentation warns against tools that emit errors for files that
>> change during the copy.
>
> OK, thanks. This is a bit off-topic, but if it's not how I imagine it,
> then how is it implemented?

The files may change, but it doesn't matter, because there is enough information in the xlog to correct it all.

Matthew

--
Of course it's your fault. Everything here's your fault - it says so in your contract. - Quark
Matthew Wakeling wrote:

> On Wed, 15 Oct 2008, Ivan Voras wrote:
>>> Nope. Even files in the data directory change. That's why the
>>> documentation warns against tools that emit errors for files that
>>> change during the copy.
>>
>> OK, thanks. This is a bit off-topic, but if it's not how I imagine it,
>> then how is it implemented?
>
> The files may change, but it doesn't matter, because there is enough
> information in the xlog to correct it all.

I'm thinking about these paragraphs in the documentation:

"""
Be certain that your backup dump includes all of the files underneath the database cluster directory (e.g., /usr/local/pgsql/data). If you are using tablespaces that do not reside underneath this directory, be careful to include them as well (and be sure that your backup dump archives symbolic links as links, otherwise the restore will mess up your tablespaces).

You can, however, omit from the backup dump the files within the pg_xlog/ subdirectory of the cluster directory. This slight complication is worthwhile because it reduces the risk of mistakes when restoring. This is easy to arrange if pg_xlog/ is a symbolic link pointing to someplace outside the cluster directory, which is a common setup anyway for performance reasons.
"""

So, pg_start_backup() freezes the data at the time it's called, but data and xlog still change, in a different way that's safe to back up? Why not run with pg_start_backup() always enabled?
2008/10/15 Ivan Voras <ivoras@freebsd.org>:

> [...]
>
> So, pg_start_backup() freezes the data at the time it's called, but
> data and xlog still change, in a different way that's safe to back up?
> Why not run with pg_start_backup() always enabled?

Because nothing would get vacuumed and your data would just grow and grow.

Your data is held at the point in time when you typed pg_start_backup, so when you restore, your data is back at that point. If you need to go forward, you need the xlog (hence point-in-time backup...).

This is all part of the MVCC feature that PostgreSQL has. PostgreSQL never deletes anything until nothing can read it anymore, so if you vacuum during a backup it will only delete stuff that was finished with before the backup started. If you don't do a pg_start_backup first, you don't have this promise that vacuum will not remove something you need. (Oh, I think checkpoints might come into this as well, but I'm not sure how.)

Or at least that's my understanding...

So if your base backup takes a while, I would advise running vacuum afterwards. But then, if you're running autovacuum, there is probably very little need to worry.

Peter Childs
2008/10/15 Aidan Van Dyk <aidan@highrise.ca>:

> * Ivan Voras <ivoras@freebsd.org> [081015 10:05]:
>
>> So, pg_start_backup() freezes the data at the time it's called, but
>> data and xlog still change, in a different way that's safe to back up?
>> Why not run with pg_start_backup() always enabled?
>
> I think you're missing the whole point of pg_start_backup().
> pg_start_backup() is *part* of a full PITR/backup run, i.e. you use it
> when you have an archive command working as well. It's *not* meant to
> just allow you to do a filesystem copy inside a running data directory.

Possibly - that's why I'm sticking to this thread :) My context is doing full filesystem-only copies/backups of the database (xlogs & all) - is pg_start_backup() applicable?
Ivan Voras <ivoras@freebsd.org> writes:

> Matthew Wakeling wrote:
>> The files may change, but it doesn't matter, because there is enough
>> information in the xlog to correct it all.
>
> I'm thinking about these paragraphs in the documentation:
>
>> You can, however, omit from the backup dump the files within the
>> pg_xlog/ subdirectory of the cluster directory.

The assumption is that a PITR backup configuration provides a separate pathway for the xlog files to get to the slave database.

			regards, tom lane
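[That separate pathway is wired up at restore time in recovery.conf; a minimal sketch, reusing the placeholder archive path from earlier in the thread:]

    # recovery.conf (placed in the restored data directory) -- tells
    # the server how to fetch archived WAL segments during recovery.
    restore_command = 'cp /mnt/archive/wal/%f %p'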
On Wed, Oct 15, 2008 at 8:58 AM, Ivan Voras <ivoras@freebsd.org> wrote:

> 2008/10/15 Aidan Van Dyk <aidan@highrise.ca>:
>> I think you're missing the whole point of pg_start_backup().
>> pg_start_backup() is *part* of a full PITR/backup run, i.e. you use it
>> when you have an archive command working as well. It's *not* meant to
>> just allow you to do a filesystem copy inside a running data directory.
>
> Possibly - that's why I'm sticking to this thread :) My context is
> doing full filesystem-only copies/backups of the database (xlogs &
> all) - is pg_start_backup() applicable?

Just an FYI, there are some issues with using filesystems that support snapshots, depending on the OS / filesystem. For instance, LVM, which Linux uses to provide snapshots, has issues with write barriers and also has a maximum throughput of about 300 MB/second. It's all a trade-off, but I don't run my db files on LVM because of those two problems.
On Wed, 2008-10-15 at 16:05 +0200, Ivan Voras wrote:

> So, pg_start_backup() freezes the data at the time it's called, but
> data and xlog still change, in a different way that's safe to back up?

No, that's not how it works. pg_start_backup() records the point that we must roll forward from. There is no freezing.

> Why not run with pg_start_backup() always enabled?

It's not a mode that can be enabled/disabled. It's a starting point. You should run pg_start_backup() each time you run a backup, just like the fine manual describes.

Check your backups...

--
Simon Riggs
www.2ndQuadrant.com
PostgreSQL Training, Services and Support
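[One cheap sanity check along those lines, for the pg_dump side of things (the filename is a placeholder): pg_restore can list a custom-format dump's table of contents without touching any database, which at least catches truncated or unreadable archives:]

    # A corrupt or truncated custom-format dump makes pg_restore fail
    # here; a real test restore into a scratch database is still the
    # only full verification.
    pg_restore --list /backups/maindb-20081015.dump > /dev/null \
        && echo "dump looks structurally sound"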