Re: patch for parallel pg_dump - Mailing list pgsql-hackers

From Joachim Wieland
Subject Re: patch for parallel pg_dump
Date
Msg-id CACw0+12Hc36DhOyq87i881fcDB1e5Mck2xVt16TsTUM2vCMhSA@mail.gmail.com
In response to Re: patch for parallel pg_dump  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: patch for parallel pg_dump
List pgsql-hackers
On Wed, Mar 14, 2012 at 2:02 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> I think we should somehow unify both functions; the code is not very
>> consistent in this respect, and it also calls exit_horribly() when it
>> has AH available. See for example pg_backup_tar.c.
>
> I think we should get rid of die_horribly(), and instead arrange
> to always clean up AH via an on_exit_nicely hook.

Attached is a patch that gets rid of die_horribly().

For the parallel case, it maintains an array with as many elements as
there are worker processes. When a worker starts, it records its Pid
(or ThreadId) and its ArchiveHandle (AH) in that array. The exit
handler in a process can then find its own ArchiveHandle by comparing
its own Pid against the entries in the array.
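
Roughly, the lookup works like this (just a sketch to illustrate the
idea; the names WorkerSlot, worker_slots and archive_close_connection
are placeholders, not necessarily what the attached patch uses, and
only the Unix/getpid() case is shown):

    #include <sys/types.h>
    #include <unistd.h>
    #include <stddef.h>

    /* stand-in for the real ArchiveHandle from pg_backup_archiver.h */
    typedef struct ArchiveHandle ArchiveHandle;

    /* one slot per worker: its Pid (or ThreadId) and its ArchiveHandle */
    typedef struct WorkerSlot
    {
        pid_t          pid;
        ArchiveHandle *AH;
    } WorkerSlot;

    static WorkerSlot *worker_slots;    /* one element per worker process */
    static int         num_workers;

    /*
     * Exit handler registered in every process (e.g. via on_exit_nicely).
     * Each process scans the array for the entry whose pid matches its
     * own and cleans up only that ArchiveHandle.
     */
    static void
    archive_close_connection(int code, void *arg)
    {
        pid_t       my_pid = getpid();
        int         i;

        for (i = 0; i < num_workers; i++)
        {
            if (worker_slots[i].pid == my_pid && worker_slots[i].AH != NULL)
            {
                /* close this worker's connection and release its AH here */
                break;
            }
        }
    }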

