Re: crashes due to setting max_parallel_workers=0 - Mailing list pgsql-hackers

From Robert Haas
Subject Re: crashes due to setting max_parallel_workers=0
Date
Msg-id CA+TgmoavXPCAHhg0TsuLrH4_meeZxp2Rw7jTmBBnw9D2pfHi2g@mail.gmail.com
In response to crashes due to setting max_parallel_workers=0  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Responses Re: crashes due to setting max_parallel_workers=0  (David Rowley <david.rowley@2ndquadrant.com>)
List pgsql-hackers
On Sat, Mar 25, 2017 at 12:18 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
> About the original issue reported by Tomas, I did more debugging and
> found that the problem was gather_merge_clear_slots() not returning
> a cleared slot when nreaders is zero (i.e. nworkers_launched = 0).
> Because of that, the scan continued even after all the tuples were
> exhausted, and then ended up with a server crash in
> gather_merge_getnext(). In the patch I also added an Assert into
> gather_merge_getnext() that the index must be less than nreaders + 1
> (the leader).

Well, you and David Rowley seem to disagree on what the fix is here.
His patches posted upthread do A, and yours do B, and from a quick
look those things are not just different ways of spelling the same
underlying fix, but actually directly conflicting ideas about what the
fix should be.  Any chance you can review his patches, and maybe he
can review yours, and we could try to agree on a consensus position?
:-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


