Thread: pg_dump & performance degradation

pg_dump & performance degradation

From
Philip Warner
Date:
Brian Baquiran in the [GENERAL] list recently asked if it was possible to
'throttle-down' pg_dump so that it did not cause an IO bottleneck when
copying large tables.

Can anyone see a reason not to pause periodically?

The only problem I have with pausing is that pg_dump runs in a single
transaction, and I have an aversion to keeping TXs open too long; but this
is borne of experience with other databases, and may not be relevant to PG.

If it is deemed acceptable, can anyone offer a sensible scheme for pausing?

eg. Allow the user to specify an active:sleep ratio, then after every 'get'
on the COPY command, see how much time has elapsed since it last slept, and
if more than, say, 100ms, then sleep for an amount of time based on the
user's choice.

Finally, can anyone point me to the most portable subsecond timer routines?


----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.C.N. 008 659 498)             |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


Re: pg_dump & performance degradation

From
Tom Lane
Date:
Philip Warner <pjw@rhyme.com.au> writes:
> Brian Baquiran in the [GENERAL] list recently asked if it was possible to
> 'throttle-down' pg_dump so that it did not cause an IO bottleneck when
> copying large tables.

> Can anyone see a reason not to pause periodically?

Because it'd slow things down?

As long as the default behavior is "no pauses", I have no strong
objection.

> Finally, can anyone point me to the most portable subsecond timer routines?

You do not want a timer routine, you want a delay.  I think using a
dummy select() with a timeout parameter might be the most portable way.
Anyway we've used it for a long time --- see the spinlock backoff code
in s_lock.c.
        regards, tom lane
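For reference, the dummy-select() delay Tom describes (the same trick as the spinlock backoff code in s_lock.c) can be sketched roughly like this; the helper name is mine, not anything in the tree:

```c
#include <stddef.h>
#include <sys/select.h>

/* Sub-second sleep via a dummy select(): with no file descriptors to
 * watch, select() simply blocks until its timeout expires.  This is
 * fairly portable across Unixes and avoids signals entirely. */
static void
sleep_usec(long usec)
{
    struct timeval delay;

    delay.tv_sec = usec / 1000000L;
    delay.tv_usec = usec % 1000000L;
    (void) select(0, NULL, NULL, NULL, &delay);
}
```

On some platforms the effective resolution is coarser than a microsecond (often a clock tick, ~10ms), which matters for a tight COPY loop.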


Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 12:22 28/07/00 -0400, Tom Lane wrote:
>Philip Warner <pjw@rhyme.com.au> writes:
>> Brian Baquiran in the [GENERAL] list recently asked if it was possible to
>> 'throttle-down' pg_dump so that it did not cause an IO bottleneck when
>> copying large tables.
>
>> Can anyone see a reason not to pause periodically?
>
>Because it'd slow things down?

Cute.


>> Finally, can anyone point me to the most portable subsecond timer routines?
>
>You do not want a timer routine, you want a delay.  I think using a
>dummy select() with a timeout parameter might be the most portable way.
>Anyway we've used it for a long time --- see the spinlock backoff code
>in s_lock.c.

Well...pg_dump sits in a loop reading COPY output; my hope was to see how
long the copy took, and then wait an appropriate amount of time. The dummy
select works nicely as a sleep call, but I can't really tell how long to
sleep without a sub-second timer, or something that tells me the time
between two calls.

Would there be a portability problem with using setitimer, pause, & sigaction?




Re: pg_dump & performance degradation

From
Tom Lane
Date:
Philip Warner <pjw@rhyme.com.au> writes:
>> You do not want a timer routine, you want a delay.  I think using a
>> dummy select() with a timeout parameter might be the most portable way.
>> Anyway we've used it for a long time --- see the spinlock backoff code
>> in s_lock.c.

> Well...pg_dump sits in a loop reading COPY output; my hope was to see how
> long the copy took, and then wait an appropriate amount of time. The dummy
> select works nicely as a sleep call, but I can't really tell how long to
> sleep without a sub-second timer, or something that tells me the time
> between two calls.

Seems like just delaying for a user-specifiable number of microseconds
between blocks or lines of COPY output would get the job done.  I'm not
clear what the reason is for needing to measure anything --- the user is
going to be tweaking the parameter anyway to arrive at what he feels is
an acceptable overall system load from the backup operation, so how are
you making his life easier by varying the delay?

> Would there be a portability problem with using setitimer, pause, &
> sigaction?

Signal behavior is not very portable, and I'd counsel against
introducing any new portability risks for what's fundamentally a pretty
third-order feature.  (AFAIR no one's ever asked for this before, so...)
We do have an existing dependency on gettimeofday() in postgres.c's
ShowUsage(), so if you really feel a compulsion to measure then that's
what to use.  I don't see what it's buying you though.
        regards, tom lane
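Measuring an interval with gettimeofday(), the call ShowUsage() already depends on, is just arithmetic on two struct timevals. A minimal sketch (the helper name is hypothetical):

```c
#include <sys/time.h>

/* Microseconds elapsed between two gettimeofday() samples. */
static long
elapsed_usec(const struct timeval *start, const struct timeval *stop)
{
    return (stop->tv_sec - start->tv_sec) * 1000000L
         + (stop->tv_usec - start->tv_usec);
}
```

In pg_dump's COPY loop one would sample before and after each read and feed the difference into whatever sleep policy is chosen.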


Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 13:36 28/07/00 -0400, Tom Lane wrote:
>Philip Warner <pjw@rhyme.com.au> writes:
>> Well...pg_dump sits in a loop reading COPY output; my hope was to see how
>> long the copy took, and then wait an appropriate amount of time. The dummy
>> select works nicely as a sleep call, but I can't really tell how long to
>> sleep without a sub-second timer, or something that tells me the time
>> between two calls.
>
>Seems like just delaying for a user-specifiable number of microseconds
>between blocks or lines of COPY output would get the job done. 

You're probably right; and if I can't trust setitimer, sigaction and pause,
then I guess I have no choice.


> I'm not
>clear what the reason is for needing to measure anything --- the user is
>going to be tweaking the parameter anyway to arrive at what he feels is
>an acceptable overall system load from the backup operation, so how are
>you making his life easier by varying the delay?
...
>We do have an existing dependency on gettimeofday() in postgres.c's
>ShowUsage(), so if you really feel a compulsion to measure then that's
>what to use.  I don't see what it's buying you though.

The plan was for the user to specify a single number that was the ratio of
time spent sleeping to the time spent 'working' (ie. reading COPY lines).

In the ordinary case this value would be 0 (no sleep), and for a very low
load model it might be as high as 10 - for every 100ms spent working it
spends 1000ms sleeping.

This was intended to handle the arbitrary speed variations that occur when
reading, eg, large toasted rows and reading lots of small normal rows. A
simple 'wait 200ms' model would be fine for the former, but way too long
for the latter.

>(AFAIR no one's ever asked for this before, so...)

Like most of these things (at least for me), it is personally relevant: I
also experience severe performance degradation during backups.

I'll look at gettimeofday...




Re: pg_dump & performance degradation

From
Tom Lane
Date:
Philip Warner <pjw@rhyme.com.au> writes:
> The plan was for the user to specify a single number that was the ratio of
> time spent sleeping to the time spent 'working' (ie. reading COPY lines).

> In the ordinary case this value would be 0 (no sleep), and for a very low
> load model it might be as high as 10 - for every 100ms spent working it
> spends 1000ms sleeping.

> This was intended to handle the arbitrary speed variations that occur when
> reading, eg, large toasted rows and reading lots of small normal rows.

But ... but ... you have no idea at all how much time the backend has
expended to provide you with those rows, nor how much of the elapsed
time was used up by unrelated processes.  It's pointless to suppose
that you are regulating system load this way --- and I maintain that
system load is what the dbadmin would really like to regulate.

You may as well keep it simple and not introduce unpredictable
dependencies into the behavior of the feature.
        regards, tom lane


Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 00:57 29/07/00 -0400, Tom Lane wrote:
>Philip Warner <pjw@rhyme.com.au> writes:
>> The plan was for the user to specify a single number that was the ratio of
>> time spent sleeping to the time spent 'working' (ie. reading COPY lines).
>
>> In the ordinary case this value would be 0 (no sleep), and for a very low
>> load model it might be as high as 10 - for every 100ms spent working it
>> spends 1000ms sleeping.
>
>> This was intended to handle the arbitrary speed variations that occur when
>> reading, eg, large toasted rows and reading lots of small normal rows.
>
>But ... but ... you have no idea at all how much time the backend has
>expended to provide you with those rows, nor how much of the elapsed
>time was used up by unrelated processes.

True & true.

But where the time was used is less important to me; if it was used by PG,
or by another process, then it still means that there was a consumer I
was fighting. All I am trying to do is prevent the consumption of all
available resources by pg_dump. I realize that this is totally opposed to
the notion of a good scheduler, but it does produce good results for me:
when I put delays in pg_dump, I couldn't really tell (from the system
performance) that backup was running.


> It's pointless to suppose
>that you are regulating system load this way --- 

That is true, but what I am regulating is consumption of available
resources (except in the case of delays caused by excessive lock contention).

For the most part, my backups go to 100%CPU, and huge numbers of I/Os. Most
importantly this affects web server response times as well as the time
taken for 'production' database queries (usually via the web).

With a backup process that sleeps, I get free CPU time & I/Os for
opportunistic processes (web servers & db queries), and a backup that takes
more time. This seems like a Good Thing.

Since backups on VMS never cause this sort of problem, I assume I am just
battling the Linux scheduler, rather than a deficiency in Postgres. Maybe
things would be different if I could set the priority on the backend from
the client...that might bear thinking about, but for R/W transactions it
would be a disaster to allow setting of priorities of backend processes.


>and I maintain that
>system load is what the dbadmin would really like to regulate.

In my case, because the scheduler does not cope well at 100% load, I think
I need to keep some resources in reserve. But I agree in principle.


>You may as well keep it simple and not introduce unpredictable
>dependencies into the behavior of the feature.

This is certainly still an option; I might base the choice on some
empirical tests. I get very different results between a large table with
many columns and a large table with a small number of columns. I'll have to
keep investigating the causes.





Re: pg_dump & performance degradation

From
Don Baccus
Date:
At 02:14 PM 7/29/00 +1000, Philip Warner wrote:

>>(AFAIR no one's ever asked for this before, so...)
>
>like most of these things (at least for me), it is personally relevant: I
>also experience severe peformance degradation during backups.

I can't think of any Unix utility that does this, offhand.

"nice" doesn't help at all when you try it?



- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest Rare Bird Alert Service
  and other goodies at http://donb.photo.net.


Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 05:28 29/07/00 -0700, Don Baccus wrote:
>At 02:14 PM 7/29/00 +1000, Philip Warner wrote:
>
>>>(AFAIR no one's ever asked for this before, so...)
>>
>>like most of these things (at least for me), it is personally relevant: I
>>also experience severe peformance degradation during backups.
>
>I can't think of any Unix utility that does this, offhand.
>
>"nice" doesn't help at all when you try it?

Only marginally; and what I really need to do is 'nice' the backend, and
when I do that it still helps only a little - even if I drop it to the
lowest priority. I think a process with only a little CPU can still do a
lot of I/O requests, and perhaps it is getting swapped back in to service
the requests. This is just guesswork.






Re: pg_dump & performance degradation

From
Tom Lane
Date:
Philip Warner <pjw@rhyme.com.au> writes:
>> "nice" doesn't help at all when you try it?

> Only marginally; and what I really need to do is 'nice' the backend, and
> when I do that it still only helps only a little - even if I drop it to the
> lowest priority. I think a process with only a little CPU can still do a
> lot of I/O requests, and perhaps it is getting swapped back in to service
> the requests. This is just guesswork.

I think that's true --- on most Unixes, 'nice' level only affects CPU
scheduling not I/O scheduling.

It would be a bad idea to nice down a backend anyway, if the intent is
to speed up other backends.  The Unix scheduler has no idea about
application-level locking, so you'll get priority-inversion problems:
once the nice'd backend has acquired any sort of lock, other backends
that may be waiting for that lock are at the mercy of the low priority
setting.  In effect, your entire database setup may be running at the
nice'd priority relative to anything else on the system.

I think Philip's idea of adding some delays into pg_dump is a reasonable
answer.  I'm just recommending a KISS approach to implementing the
delay, in the absence of evidence that a more complex mechanism will
actually buy anything...
        regards, tom lane


Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 11:34 29/07/00 -0400, Tom Lane wrote:
>
>I think Philip's idea of adding some delays into pg_dump is a reasonable
>answer.  I'm just recommending a KISS approach to implementing the
>delay, in the absence of evidence that a more complex mechanism will
>actually buy anything...
>

The results of some experiments:

Unfortunately I have been unable to devise a proper test of the 'fixed'
sleep method: the COPY loop is sufficiently tight that on a fast machine
even a small delay inside *each* iteration makes for a very slow process
(on my machine, I think the smallest allowed delay is 10ms, and the COPY
loop runs in about 100-300 usec intervals while the COPY buffer is being
dumped). As a result I have had to activate the sleep code only when the
time since it last slept is > 100ms. Which means that the 'throttle time'
specified by the user is effectively a ratio anyway, and I have implemented
it as such.

Basically, the user can specify a number of ms to wait per second of
'running'. This ratio is checked on each iteration, and when the amount of
time to sleep exceeds 100ms, the sleep call is made, and the timer is reset.

eg. 
   pg_dump dbname -T1000 --tab=big-table > /dev/null

will rest for an average of 1 second for each second running (during COPY);
the actual 'sleeps' will occur every 100ms or so, and last for 100ms.

   pg_dump dbname -T30000 --tab=big-table > /dev/null

will rest for an average of 30 seconds for each second running (during
COPY); the actual 'sleeps' will occur every 3ms or so and last for 100ms.

   pg_dump dbname -T500 --tab=big-table > /dev/null

will rest for an average of 0.5 seconds for each second running (during
COPY); the actual 'sleeps' will occur every 200ms or so and last for 100ms.

etc.
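The accounting described above can be sketched as follows. This is a rough reconstruction, not the actual patch: it assumes -T is milliseconds of sleep owed per second of work, and all helper names are mine:

```c
#include <stddef.h>
#include <sys/select.h>
#include <sys/time.h>

/* How many ms of sleep we "owe" for the work done since work_start,
 * given a throttle of sleep-ms per second of work (the -T value). */
static double
sleep_owed_ms(const struct timeval *work_start, double throttle_ms)
{
    struct timeval now;
    double worked_ms;

    gettimeofday(&now, NULL);
    worked_ms = (now.tv_sec - work_start->tv_sec) * 1000.0
              + (now.tv_usec - work_start->tv_usec) / 1000.0;
    return worked_ms * throttle_ms / 1000.0;
}

/* Called on each COPY iteration; only once the debt tops 100ms do we
 * actually sleep (via dummy select()) and reset the work timer, so the
 * loop itself stays tight. */
static void
maybe_throttle(struct timeval *work_start, double throttle_ms)
{
    double owed = sleep_owed_ms(work_start, throttle_ms);

    if (owed > 100.0)
    {
        struct timeval delay;

        delay.tv_sec = (long) (owed / 1000.0);
        delay.tv_usec = (long) ((owed - delay.tv_sec * 1000.0) * 1000.0);
        (void) select(0, NULL, NULL, NULL, &delay);
        gettimeofday(work_start, NULL);     /* start a fresh work interval */
    }
}
```

With -T1000 the debt reaches 100ms after ~100ms of work (sleep every 100ms for 100ms); with -T30000 it reaches 100ms after ~3ms of work, matching the figures above.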

This is actually more complex than I had hoped (originally I planned to
just do a simple ratio), but experimentation on a *very* slow machine (P90)
and a fast-ish one (PIII 550) showed that the ratio needed to achieve 50%
CPU utilization (on an unloaded machine) varied from as high as 30:1 down
to 0.3:1 (the CPU boost from IO on a P90 is enormous - postmaster runs at
90% CPU until a ratio of about 15:1).

These times are obviously highly subjective, and the only real conclusions I
can draw from them are:

- It's too hard to predict (as Tom suggested)
- It's important to allow for a very tight loop in coding the 'sleep' code.

This is disappointing in the sense that I had hoped to get a
one-number-suits-all-tables-at-a-given-time-on-a-given-machine solution,
but experiments with tables with lots of columns and tables with large
toasted values reveal quite a wide variation even with this model.

Unless someone has a further suggestion, I'll just clean it up and submit
it...




Re: pg_dump & performance degradation

From
Bruce Momjian
Date:
I do not think it is a good idea to add some sleep-ability to pg_dump
unless we can get a general agreement that this is a valuable feature.


> At 13:36 28/07/00 -0400, Tom Lane wrote:
> >Philip Warner <pjw@rhyme.com.au> writes:
> >> Well...pg_dump sits in a loop reading COPY output; my hope was to see how
> >> long the copy took, and then wait an appropriate amount of time. The dummy
> >> select works nicely as a sleep call, but I can't really tell how long to
> >> sleep without a sub-second timer, or something that tells me the time
> >> between two calls.
> >
> >Seems like just delaying for a user-specifiable number of microseconds
> >between blocks or lines of COPY output would get the job done. 
> 
> You're probably right; and if I can't trust setitimer, sigaction and pause,
> then I guess I have no choice.
> 
> 
> > I'm not
> >clear what the reason is for needing to measure anything --- the user is
> >going to be tweaking the parameter anyway to arrive at what he feels is
> >an acceptable overall system load from the backup operation, so how are
> >you making his life easier by varying the delay?
> ...
> >We do have an existing dependency on gettimeofday() in postgres.c's
> >ShowUsage(), so if you really feel a compulsion to measure then that's
> >what to use.  I don't see what it's buying you though.
> 
> The plan was for the user to specify a single number that was the ratio of
> time spent sleeping to the time spent 'working' (ie. reading COPY lines).
> 
> In the ordinary case this value would be 0 (no sleep), and for a very low
> load model it might be as high as 10 - for every 100ms spent working it
> spends 1000ms sleeping.
> 
> This was intended to handle the arbitrary speed variations that occur when
> reading, eg, large toasted rows and reading lots of small normal rows. A
> simple 'wait 200ms' model would be fine for the former, but way too long
> for the latter.
> 
> >(AFAIR no one's ever asked for this before, so...)
> 
> like most of these things (at least for me), it is personally relevant: I
> also experience severe peformance degradation during backups.
> 
> I'll look at gettimeofday...
> 
> 


--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: pg_dump & performance degradation

From
Bruce Momjian
Date:
> It would be a bad idea to nice down a backend anyway, if the intent is
> to speed up other backends.  The Unix scheduler has no idea about
> application-level locking, so you'll get priority-inversion problems:
> once the nice'd backend has acquired any sort of lock, other backends
> that may be waiting for that lock are at the mercy of the low priority
> setting.  In effect, your entire database setup may be running at the
> nice'd priority relative to anything else on the system.
> 
> I think Philip's idea of adding some delays into pg_dump is a reasonable
> answer.  I'm just recommending a KISS approach to implementing the
> delay, in the absence of evidence that a more complex mechanism will
> actually buy anything...

I am worried about feature creep here.  Does any other database
implement this?  I can accept it as a config.h flag, but it seems
publishing it as a pg_dump flag is just way too complicated for users.



Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 21:48 31/07/00 -0400, Bruce Momjian wrote:
>> It would be a bad idea to nice down a backend anyway, if the intent is
>> to speed up other backends.  The Unix scheduler has no idea about
>> application-level locking, so you'll get priority-inversion problems:
>> once the nice'd backend has acquired any sort of lock, other backends
>> that may be waiting for that lock are at the mercy of the low priority
>> setting.  In effect, your entire database setup may be running at the
>> nice'd priority relative to anything else on the system.
>> 
>> I think Philip's idea of adding some delays into pg_dump is a reasonable
>> answer.  I'm just recommending a KISS approach to implementing the
>> delay, in the absence of evidence that a more complex mechanism will
>> actually buy anything...
>
>I am worried about feature creep here.

I agree; it's definitely a non-critical feature. But then, it is only 80
lines of code in one place (including 28 non-code lines). I am not totally
happy with the results it produces, so I have no objection to removing it
all. I just need some more general feedback...


>I can accept it as a config.h flag, 

You mean stick it in a bunch of ifdefs? What is the gain there?


>but it seems
>publishing it as a pg_dump flag is just way too complicated for users.

I've missed something, obviously. What is the problem here?




Re: pg_dump & performance degradation

From
Bruce Momjian
Date:
> >> I think Philip's idea of adding some delays into pg_dump is a reasonable
> >> answer.  I'm just recommending a KISS approach to implementing the
> >> delay, in the absence of evidence that a more complex mechanism will
> >> actually buy anything...
> >
> >I am worried about feature creep here.
> 
> I agree; it's definitely a non-critical feature. But then, it is only 80
> lines of code in one place (including 28 non-code lines). I am not totally
> happy with the results it produces, so I have no objection to removing it
> all. I just need some more general feedback...
> 
> 
> >I can accept it as a config.h flag, 
> 
> You mean stick it in a bunch of ifdefs? What is the gain there?
> 
> 
> >but it seems
> >publishing it as a pg_dump flag is just way too complicated for users.
> 
> I've missed something, obviously. What is the problem here?

I am more concerned with giving people a pg_dump option of questionable
value.  I don't have problems adding it to the C code because it may be
of use to some people.



Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 23:41 31/07/00 -0400, Bruce Momjian wrote:
>> 
>> I've missed something, obviously. What is the problem here?
>
>I am more concerned with giving people a pg_dump option of questionable
>value.

And, sadly, I think questionable effectiveness...


>I don't have problems adding it to the C code because it may be
>of use to some people.

It's pretty easy code to write; I'm actually more inclined to remove it and
avoid the clutter. I'll probably put a comment in the code as to how to do
it, and what the pitfalls are - then I/we don't need to maintain any more
unused code.




Re: pg_dump & performance degradation

From
Don Baccus
Date:
At 11:41 PM 7/31/00 -0400, Bruce Momjian wrote:
>I am more concerned with giving people a pg_dump option of questionable
>value.  I don't have problems adding it to the C code because it may be
>of use to some people.

I'm uncomfortable because I know of no other *nix utility that has such
options.

If they exist, can someone enumerate them?

It's hard to see why PG is such a special case in this regard, requiring
such an overbearingly groty kludge.






Re: pg_dump & performance degradation

From
Don Baccus
Date:
At 01:06 PM 8/1/00 +1000, Philip Warner wrote:

>I agree; it's definitely a non-critical feature. But then, it is only 80
>lines of code in one place (including 28 non-code lines). I am not totally
>happy with the results it produces, so I have no objection to removing it
>all. I just need some more general feedback...

Have you tried pg_dump on a multi-processor machine, which most serious
database-backed websites run on these days?   Do you see the same performance
degradation?  My site runs on a dual P450 with RAID 1 LVD disks, and cost
me exactly $2100 to build (would've been less if I'd laid off the extra
cooling fans!)

If not, the target audience has shrunk even further.   Any way you cut it,
backups want to be scheduled for low-volume hours and even internationally
popular sites don't have steady volume for each hour of the 24 hour day.

(After all, some hours of the day that correspond to the normal "traffic
jam" hours fall mostly in areas like the south Pacific, where few people
live; even if each and every one is an avid user, the load they put on
popular sites is bound to be low compared to, say, the eastern seaboard
or Europe.)

If your site's so popular that it has to be perky 24 hours a day for a
rabid base of worldwide fans, then you can either:

1. afford to invest in hardware (if not, you're not asking enough of your
   fans)
2. afford to invest in other improvements in your website.

Doing such unheard of things to a utility (again, is there any Unix or NT
or VMS etc utility that has such a user option?) makes me think you're 
finger-pointing at the wrong part of your service.

But, then again, I'm the kinda guy that looks for simple solutions to simple
problems...





Re: pg_dump & performance degradation

From
Bruce Momjian
Date:
> At 11:41 PM 7/31/00 -0400, Bruce Momjian wrote:
> >I am more concerned with giving people a pg_dump option of questionable
> >value.  I don't have problems adding it to the C code because it may be
> >of use to some people.
> 
> I'm uncomfortable because I know of no other *nix utility that has such
> options.
> 
> If they exist, can someone enumerate them?
> 
> It's hard to see why PG is such a special case in this regard, requiring
> such an overbearingly groty kludge.

Yes, agreed.  I read a nice article about Mozilla feature-creep, and how
it killed them:
http://www.suck.com/daily/2000/07/31/

We have been pretty good about keeping ourselves lean and directed.



Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 21:12 31/07/00 -0700, Don Baccus wrote:
>At 01:06 PM 8/1/00 +1000, Philip Warner wrote:
>
>>I agree; it's definitely a non-critical feature. But then, it is only 80
>>lines of code in one place (including 28 non-code lines). I am not totally
>>happy with the results it produces, so I have no objection to removing it
>>all. I just need some more general feedback...
>
>Have you tried pg_dump on a multi-processor machine, which most serious
>database-backed websites run on these days?   Do you see the same performance
>degradation?  My site runs on a dual P450 with RAID 1 LVD disks, and cost
>me exactly $2100 to build (would've been less if I'd laid off the extra
>cooling fans!)

The original request came from a person with a "4-CPU Xeon with 2GB of
RAM", but the "solution" does not seem to work for them (I think), so it's
probably a waste of time.


>Doing such unheard of things to a utility (again, is there any Unix or NT
>or VMS etc utility that has such a user option?) makes me think you're 
>finger-pointing at the wrong part of your service.

I agree; I think in an earlier post I did say that I was trying to 'fix' an
OS issue in the application, which is really not a good thing to do...but
there *is* a problem. Just no solution, apparently.


>But, then again, I'm the kinda guy that looks for simple solutions to simple
>problems...

It was simple when I started. Honest...



----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.C.N. 008 659 498)             |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 21:15 31/07/00 -0700, Don Baccus wrote:
>At 11:41 PM 7/31/00 -0400, Bruce Momjian wrote:
>>I am more concerned with giving people a pg_dump option of questionable
>>value.  I don't have problems adding it to the C code because it may be
>>of use to some people.
>
>I'm uncomfortable because I know of no other *nix utility that has such
>options.
>
>If they exist, can someone enumerate them?
>
>It's hard to see why PG is such a special case in this regard, requiring
>such an overbearingly groty kludge.

Don't beat about the bush. What are you trying to say, here? ;-}.




Re: pg_dump & performance degradation

From
Chris Bitmead
Date:
Is this the sort of problem that nice() might solve, or not?

Bruce Momjian wrote:
> 
> > >> I think Philip's idea of adding some delays into pg_dump is a reasonable
> > >> answer.  I'm just recommending a KISS approach to implementing the
> > >> delay, in the absence of evidence that a more complex mechanism will
> > >> actually buy anything...
> > >
> > >I am worried about feature creep here.
> >
> > I agree; it's definitely a non-critical feature. But then, it is only 80
> > lines of code in one place (including 28 non-code lines). I am not totally
> > happy with the results it produces, so I have no objection to removing it
> > all. I just need some more general feedback...
> >
> >
> > >I can accept it as a config.h flag,
> >
> > You mean stick it in a bunch of ifdefs? What is the gain there?
> >
> >
> > >but it seems
> > >publishing it as a pg_dump flag is just way too complicated for users.
> >
> > I've missed something, obviously. What is the problem here?
> 
> I am more concerned with giving people a pg_dump option of questionable
> value.  I don't have problems adding it to the C code because it may be
> of use to some people.
> 
> --
>   Bruce Momjian                        |  http://candle.pha.pa.us
>   pgman@candle.pha.pa.us               |  (610) 853-3000
>   +  If your life is a hard drive,     |  830 Blythe Avenue
>   +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 14:38 1/08/00 +1000, Chris Bitmead wrote:
>
>Is this the sort of problem that nice() might solve, or not?
>

Doesn't seem to; the problem is the IO, I think. Besides, it's a bad idea
to change priorities on backend processes (which is where most of the work
is done).




Re: pg_dump & performance degradation

From
Thomas Swan
Date:
My $.02 ...
 :)

Anyway, the command-line option makes the most sense.  An arbitrary or relative sleep timer specified as an option is an ideal way to go about it.  I really liked the
pg_dump -T {pick a number, any number} idea.  Maybe a -d for delay or something...

Note:  A command-line option is not too difficult for someone to use.  (I saw someone mention this.)  If you're using a GUI and rely on it, get someone to add a check box for the option.  Otherwise, if you're actually using a command-line utility, you've got the hardest part licked...  (It's a nice FAQ item: why is my db so slow when doing a backup? ... how can I fix this? ...)

This seems a little rhetorical in some respects... I would want a backup or dump of the DB to hit as quickly as possible... (scheduled for a low load time).  But if, hypothetically, the load is constant, then a more relaxed backup might be in order...

All in all, hard-coding the delay would aggravate me and quite a few others; can't you hear it: "Why the *#!##@$ is this taking so long?"

-
- Thomas Swan                                   
- Graduate Student  - Computer Science
- The University of Mississippi
-
- "People can be categorized into two fundamental
- groups, those that divide people into two groups
- and those that don't."

Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 01:49 1/08/00 -0500, Thomas Swan wrote: 

> All in all, hard coding in the delay, would aggravate me and quite a few
> others; can't you hear it, "Why the *#!##@$ is this taking so long?"

Just in case there is a misunderstanding, this was never, ever, part of
the plan or implementation. The 'throttle' code was driven by a command
option...




Re: pg_dump & performance degradation

From
Philip Warner
Date:
At 06:05 1/08/00 -0700, Don Baccus wrote:
>At 02:31 PM 8/1/00 +1000, Philip Warner wrote:
>
>>>Have you tried pg_dump on a multi-processor machine, which most serious
>>>database-backed websites run on these days?   Do you see the same
>performance
>>>degradation?  My site runs on a dual P450 with RAID 1 LVD disks, and cost
>>>me exactly $2100 to build (would've been less if I'd laid off the extra
>>>cooling fans!)
>>
>>The original request came from a person with a "4-CPU Xeon with 2GB of
>>RAM", but the "solution" does not seem to work for them (I think), so it's
>>probably a waste of time.
>
>It seems really strange that pg_dump could suck the guts out of a
>four-processor
>machine.  What kind of device were they backing up to?   Disk?

I *think* you missed the point; the problem is that the CPU is perfectly
adequate to "suck the guts" out of his disks. If it was just a CPU issue,
then 'nice' would work.

The throttle stuff is there to slow down pg_dump and thereby reduce IO
load. AFAIK, there is no supported way of limiting IO demand in Unix. But
I'd love to be corrected. My meagre PIII 550 is perfectly able to simulate
the problem (with just a million rows) - and the code seems to help. But
not apparently for the 4-CPU Xeon (although I don't think he has tested the
most recent code).

But, given the negative response to the code (and the fact that it seems to
only be somewhat effective), I'll probably be ditching it.


----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.C.N. 008 659 498)             |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|                                |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


Re: pg_dump & performance degradation

From
Bruce Momjian
Date:
> 
> Is this the sort of problem that nice() might solve, or not?

No.  Nice only handles CPU scheduling, not I/O.  In fact, most kernels
give I/O-bound processes higher priority because they are using valuable
shared resources while doing the I/O, so the kernel wants them to finish
as quickly as possible.

> 
> Bruce Momjian wrote:
> > 
> > > >> I think Philip's idea of adding some delays into pg_dump is a reasonable
> > > >> answer.  I'm just recommending a KISS approach to implementing the
> > > >> delay, in the absence of evidence that a more complex mechanism will
> > > >> actually buy anything...
> > > >
> > > >I am worried about feature creep here.
> 




Re: pg_dump & performance degradation

From
Thomas Swan
Date:
> > All in all, hard coding in the delay, would aggravate me and quite a
> > few others; can't you hear it, "Why the *#!##@$ is this taking so long?"
>
> Just in case there is a misunderstanding, this was never, ever, part of
> the plan or implementation. The 'throttle' code was driven by a command
> option...

It was... my apologies...

Thomas

Re: pg_dump & performance degradation

From
Don Baccus
Date:
At 02:31 PM 8/1/00 +1000, Philip Warner wrote:

>>Have you tried pg_dump on a multi-processor machine, which most serious
>>database-backed websites run on these days?   Do you see the same
performance
>>degradation?  My site runs on a dual P450 with RAID 1 LVD disks, and cost
>>me exactly $2100 to build (would've been less if I'd laid off the extra
>>cooling fans!)
>
>The original request came from a person with a "4-CPU Xeon with 2GB of
>RAM", but the "solution" does not seem to work for them (I think), so it's
>probably a waste of time.

It seems really strange that pg_dump could suck the guts out of a
four-processor
machine.  What kind of device were they backing up to?   Disk?




