Re: patch for parallel pg_dump - Mailing list pgsql-hackers

From Robert Haas
Subject Re: patch for parallel pg_dump
Msg-id CA+TgmoaBbtaiQLmjgDqy=9aJJOFyA6Ugt2BY-B5ds2BuZ_pr_A@mail.gmail.com
In response to Re: patch for parallel pg_dump  (Joachim Wieland <joe@mcknight.de>)
List pgsql-hackers
On Wed, Mar 28, 2012 at 9:54 PM, Joachim Wieland <joe@mcknight.de> wrote:
> On Wed, Mar 28, 2012 at 1:46 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> I'm wondering if we really need this much complexity around shutting
>> down workers.  I'm not sure I understand why we need both a "hard" and
>> a "soft" method of shutting them down.  At least on non-Windows
>> systems, it seems like it would be entirely sufficient to just send a
>> SIGTERM when you want them to die.  They don't even need to catch it;
>> they can just die.
>
> At least on my Linux test system, even if all pg_dump processes are
> gone, the server happily continues sending data. When I strace an
> individual backend process, I see a lot of writes failing with
> "Broken pipe" (EPIPE), but that doesn't stop it from writing out the
> whole table to a closed file descriptor. This is a 9.0-latest server.

Wow, yuck.  At least now I understand why you're doing it like that.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
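
[Editor's note: the failure mode Joachim describes can be sketched outside PostgreSQL. This is a hypothetical Python illustration, not code from the thread: once SIGPIPE is ignored (as libpq-style code typically arranges), each write to a pipe whose read end is closed fails with EPIPE, but nothing forces a writer that swallows the error to stop, so it can "send" an entire table into a dead file descriptor.]

```python
import os
import signal

# Ignore SIGPIPE so a write to a closed pipe raises BrokenPipeError
# (EPIPE) instead of killing the process with a signal.
signal.signal(signal.SIGPIPE, signal.SIG_IGN)

r, w = os.pipe()
os.close(r)  # simulate the pg_dump worker having gone away

failed_writes = 0
for _ in range(3):
    try:
        os.write(w, b"row data\n")
    except BrokenPipeError:
        # EPIPE is reported on every write, but a writer that ignores
        # it just keeps going -- mirroring the backend behavior above.
        failed_writes += 1

os.close(w)
print(failed_writes)  # -> 3
```

(The behavior is Unix-specific; on Windows there is no SIGPIPE, which is part of why the patch needed a separate shutdown path there.)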

