RE: Backup/dump of huge tables and performance - Mailing list pgsql-general

From: Matthew
Subject: RE: Backup/dump of huge tables and performance
Date:
Msg-id: 183FA749499ED311B6550000F87E206C0C92AD@SRV
In response to: Backup/dump of huge tables and performance (brianb-pggeneral@edsamail.com)
List: pgsql-general
Have you tried playing with the priority of the process?  If you gave it
a high nice value, that would lower its impact on the other processes and
not drag down the system as much.
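
Something along these lines (just a rough sketch -- the database name,
dump path, and backend PID below are placeholders, and keep in mind that
nice only affects CPU scheduling, not I/O directly):

    # Start the dump at the lowest CPU priority:
    nice -n 19 pg_dump bigdb > /backups/bigdb.dump

    # Or, if the dump is already running, renice the backend serving it
    # (find its PID with ps first):
    renice 19 -p <backend-pid>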

my 2¢

Matt

> -----Original Message-----
> From:    Philip Warner [SMTP:pjw@rhyme.com.au]
> Sent:    Friday, July 28, 2000 5:51 AM
> To:    brianb-pggeneral@edsamail.com
> Cc:    pgsql-general@postgresql.org; paul@edsamail.com
> Subject:    Re: [GENERAL] Backup/dump of huge tables and performance
>
> At 10:11 28/07/00 GMT, brianb-pggeneral@edsamail.com wrote:
> >
> >Philip Warner writes:
> >> At 02:26 28/07/00 GMT, brianb-pggeneral@edsamail.com wrote:
> >> >1. pg_dump or COPY take up a lot of I/O resources. That's not
> >> >surprising considering the size of the tables, but the impact on
> >> >the overall production system's performance is not acceptable.
> >>
> >> Firstly, you are not using '--inserts', I hope. It is very slow for
> >> both backup & restore. Also, do you know if pg_dump is the IO
> >> bottleneck, or the backend?
> >
> >Nope, I don't use --inserts. The bottleneck seems to be at the
> >backend, since it also happens with COPY.
>
> I'm not sure you can conclude that, but you are probably right. If
> possible, can you use whatever tools you have to see which process
> *is* doing the I/O? Just to be sure.
>
>
> >> Other than that, I'm not sure what can be done about it - the I/O
> >> has to be done some time. Possibly causing pg_dump to (optionally)
> >> pause between records, but that seems like a bad idea, especially
> >> with 10s of millions of records.
> >
> >> You could always do a file-based backup of the database, and
> >> restore it somewhere else, and drop the tables you don't need. Not
> >> very elegant, I realize.
> >
> >What file-based backup are you referring to? From reading previous
> >posts, I'd inferred that I couldn't just copy my /var/lib/pgsql over
> >to another server with Postgres and start postmaster.
>
> You probably can't/shouldn't. To make it work, I suspect you'd have
> to kick all the users off the DB, which is not going to help.
>
>
> >>
> >> I'm not sure what should be changed in pg_dump; delaying between
> >> records seems like a bad idea since it does the dump in a single
> >> TX, and besides, sleeping while a TX is open seems evil to me.
> >
> >I suppose it's a special case, then. I just need to move the table,
> >and I'd rather have the I/O load spread over a longer period of time
> >to soften the impact on the rest of the system.
>
> It might bear thinking about... especially if anyone else has an
> opinion on the options.
>
>
>
>
>
> ----------------------------------------------------------------
> Philip Warner                    |     __---_____
> Albatross Consulting Pty. Ltd.   |----/       -  \
> (A.C.N. 008 659 498)             |          /(@)   ______---_
> Tel: (+61) 0500 83 82 81         |                 _________  \
> Fax: (+61) 0500 83 82 82         |                 ___________ |
> Http://www.rhyme.com.au          |                /           \|
>                                  |    --________--
> PGP key available upon request,  |  /
> and from pgp5.ai.mit.edu:11371   |/
