Thread: nice'ing the postgres COPY backend process to make pg_dumps run more "softly"

nice'ing the postgres COPY backend process to make pg_dumps run more "softly"

From: Aleksey Tsalolikhin
Hi.  When pg_dump runs, our application becomes inoperative (too
slow).  I was going to ask if nice'ing the postgres backend process
that handles the COPY would help, but I just realized that pg_dump
probably takes out locks when it runs, and nice'ing it would just
make it run longer...

However, the man page says "pg_dump does not block other users
accessing the database (readers or writers)."  But if we run a
pg_dump, the phone starts ringing and users complain that the web
app is not working.

Would appreciate some pointers to help me reconcile these two
apparently contradictory facts.

Best,
-at

Re: nice'ing the postgres COPY backend process to make pg_dumps run more "softly"

From: Kiriakos Georgiou
Try this:   http://klicman.org/throttle/
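
For example (just a sketch -- I'm going from memory on throttle's
flags, so check its man page; the database name and path here are
made up):

    pg_dump mydb | throttle -m 2 | gzip > /backups/mydb.sql.gz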

Kiriakos


On Mar 19, 2012, at 12:06 AM, Aleksey Tsalolikhin wrote:

> Hi.  When pg_dump runs, our application becomes inoperative (too
> slow)....


Re: nice'ing the postgres COPY backend process to make pg_dumps run more "softly"

From: Guillaume Lelarge
On Sun, 2012-03-18 at 21:06 -0700, Aleksey Tsalolikhin wrote:
> Hi.  When pg_dump runs, our application becomes inoperative (too
>> slow)....

It depends on what your app is doing. pg_dump doesn't block any usual
use of the database: DML statements are all accepted. But you cannot
drop a table that pg_dump must save, nor can you change its
definition. So there are some DDL commands you cannot use during a
dump.

Other than this, your users shouldn't be blocked. If it happens again,
you should look at pg_locks and pg_stat_activity to understand what's
going on.
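
For example, something like this (a rough sketch; note that before
9.2, pg_stat_activity's columns are procpid and current_query rather
than pid and query):

    SELECT l.pid, l.locktype, l.mode, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE NOT l.granted;

Any rows that come back are sessions waiting on a lock, along with
what they are trying to run.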


--
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com


Re: nice'ing the postgres COPY backend process to make pg_dumps run more "softly"

From: Steve

On 03/19/2012 01:51 PM, Guillaume Lelarge wrote:
> On Sun, 2012-03-18 at 21:06 -0700, Aleksey Tsalolikhin wrote:
>> Hi.  When pg_dump runs, our application becomes inoperative (too
>> slow)....
> It depends on what your app is doing. pg_dump doesn't block any
> usual use of the database: DML statements are all accepted. But you
> cannot drop a table that pg_dump must save, nor can you change its
> definition. So there are some DDL commands you cannot use during a
> dump....
>
Dumping may not technically block access but it *does*, of course,
consume resources.

Most obviously, it requires reading all table data in its entirety.
This causes competition for disk access and may temporarily push
active data out of the cache.

You also have to write the data somewhere. If it is on the same drive
as your database, you will have write competition. If it is on
another machine, it will use network resources.

If you are compressing the data, either externally or by using a
compressed dump format, you will need extra CPU on whatever machine
is doing the actual compression.
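
For example, running the dump from a separate box keeps both the
compression CPU and the output writes off the database server (the
host and user names here are made up):

    # run on the backup machine, not on the database server
    pg_dump -h db-server -U backup_user mydb | gzip > mydb.sql.gz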

To assist, we need more info. Tell us the database size, some details
about your dump process (same or different machine, compression,
etc.), how long your dumps take to run, how many backends are
typically running and how many you reach during a dump, whether or
not any web processes alter tables, and any other info you think may
be of use.
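
For the size and backend counts, a couple of one-liners (the count
includes the session running it):

    -- overall database size
    SELECT pg_size_pretty(pg_database_size(current_database()));
    -- backends currently connected
    SELECT count(*) FROM pg_stat_activity;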

Cheers,
Steve




Re: nice'ing the postgres COPY backend process to make pg_dumps run more "softly"

From: merlin

On Mon, Mar 19, 2012 at 3:51 PM, Guillaume Lelarge
<guillaume@lelarge.info> wrote:
> On Sun, 2012-03-18 at 21:06 -0700, Aleksey Tsalolikhin wrote:
>> Hi.  When pg_dump runs, our application becomes inoperative (too
>> slow)....
>
> It depends on what your app is doing. pg_dump doesn't block any
> usual use of the database: DML statements are all accepted. But you
> cannot drop a table that pg_dump must save, nor can you change its
> definition. So there are some DDL commands you cannot use during a
> dump.
> Other than this, your users shouldn't be blocked. If it happens again,
> you should look at pg_locks and pg_stat_activity to understand what's
> going on.

This.  pg_dump essentially runs a read-only, database-wide
transaction that touches all objects.  That allows all updates,
selects, deletes, etc., but it will block, say, a CREATE INDEX or
ALTER TABLE.  Now suppose some other transaction fires off one of
those statements and has already touched a customer record... bam.
Classic priority inversion -- slowing down pg_dump will only make the
problem worse.
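
You can see the inversion in miniature with three sessions (the table
name is made up; session 1 stands in for pg_dump):

    -- session 1: pg_dump's stand-in; takes ACCESS SHARE and holds it
    BEGIN;
    SELECT count(*) FROM customer;   -- lock is kept until COMMIT

    -- session 2: ALTER TABLE needs ACCESS EXCLUSIVE, so it queues
    -- behind session 1
    ALTER TABLE customer ADD COLUMN note text;

    -- session 3: an ordinary read now queues behind session 2's
    -- exclusive request -- this is what your web users feel
    SELECT * FROM customer WHERE id = 42;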

So the very first thing to determine is whether you are seeing
generalized load issues (we don't have the info for that) or locking
issues, because the solution will be entirely different depending on
which it is.  The most likely case is that you're having iowait
issues, and moving the dump to another machine will fix the problem.
The next most likely case is CPU (courtesy of zlib) -- also easily
fixed.  But you have to know before changing things.  So:

What's the general load (from top) during pg_dump "bad" times?
pg_locks, etc., as Guillaume asked
iowait (a couple of commands for this below)

etc.
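
For instance (a sketch; iostat comes from the sysstat package -- run
these on the database server while the dump is going):

    top             # watch the %wa (iowait) figure in the CPU line
    iostat -x 5     # extended per-device I/O stats every 5 seconds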

merlin