Re: Parallel Seq Scan - Mailing list pgsql-hackers

From Robert Haas
Subject Re: Parallel Seq Scan
Date
Msg-id CA+TgmoYUmyWW-qwCt_dUHW6-jjjAA2k+DzuB4Ydn1MoH4ccfTQ@mail.gmail.com
In response to Re: Parallel Seq Scan  (Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>)
Responses Re: Parallel Seq Scan
List pgsql-hackers
On Wed, Apr 8, 2015 at 3:38 AM, Amit Langote
<Langote_Amit_f8@lab.ntt.co.jp> wrote:
> On 08-04-2015 PM 12:46, Amit Kapila wrote:
>> Going forward, I think we can improve the same if we decide not to shutdown
>> parallel workers till postmaster shutdown once they are started and
>> then just allocate them during executor-start phase.
>
> I wonder if it makes sense to invent the notion of a global pool of workers
> with configurable number of workers that are created at postmaster start and
> destroyed at shutdown and requested for use when a query uses parallelizable
> nodes.

Short answer: Yes, but not for the first version of this feature.

Longer answer: We can't actually very reasonably have a "global" pool
of workers so long as we retain the restriction that a backend
connected to one database cannot subsequently disconnect from it and
connect to some other database instead.  However, it's certainly a
good idea to reuse the same workers for subsequent operations on the
same database, especially if they are also by the same user.  At the
very minimum, it would be good to reuse the same workers for
subsequent operations within the same query, instead of destroying the
old ones and creating new ones.  Notwithstanding the obvious value of
all of these ideas, I don't think we should do any of them for the
first version of this feature.  This is too big a thing to get perfect
on the first try.
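The reuse idea above can be sketched in pseudocode: because a worker that has connected to one database cannot later switch to another, any pool has to be keyed at least by database (and preferably by user). The following is a minimal illustrative sketch, not PostgreSQL code; the names `Worker` and `WorkerPool` are invented for illustration.

```python
class Worker:
    def __init__(self, database, user):
        # A background worker connected to a database cannot later
        # switch to another one, so the binding is fixed at creation.
        self.database = database
        self.user = user

class WorkerPool:
    def __init__(self):
        # (database, user) -> list of idle workers with that binding
        self._idle = {}

    def acquire(self, database, user):
        # Reuse an idle worker already bound to this database and user
        # if one exists; otherwise start a fresh one.
        key = (database, user)
        idle = self._idle.get(key)
        if idle:
            return idle.pop()
        return Worker(database, user)

    def release(self, worker):
        # Return the worker to the pool instead of destroying it, so a
        # later query against the same database/user can reuse it.
        key = (worker.database, worker.user)
        self._idle.setdefault(key, []).append(worker)
```

A query against the same database and user gets the cached worker back; a query against a different database forces a new worker, which is exactly why a truly "global" pool is not possible under the current restriction.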

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company