
From: John Gorman
Subject: Re: Parallel Seq Scan
Msg-id: CALkS6B_HBPPzSWuUQsS_S=OD-WtkRc9j2C+LubgDqJ05gigrug@mail.gmail.com
In response to: Re: Parallel Seq Scan (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers


On Sun, Jan 11, 2015 at 6:00 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sun, Jan 11, 2015 at 6:01 AM, Stephen Frost <sfrost@snowman.net> wrote:
>> So, for my 2c, I've long expected us to parallelize at the relation-file
>> level for these kinds of operations.  This goes back to my other
>> thoughts on how we should be thinking about parallelizing inbound data
>> for bulk data loads but it seems appropriate to consider it here also.
>> One of the issues there is that 1G still feels like an awful lot for a
>> minimum work size for each worker and it would mean we don't parallelize
>> for relations less than that size.
>
> Yes, I think that's a killer objection.

One approach that has worked well for me is to break big jobs into much smaller, bite-sized tasks. Each task is small enough to complete quickly.

We add the tasks to a task queue and spawn a generic worker pool that works through the queued items (a rough sketch follows the list below).

This approach solves a lot of problems:

- Small to medium jobs can be parallelized efficiently.
- No need to split big jobs perfectly.
- We never end up waiting for one worker to chug through a huge task while the other workers sit idle.
- Each worker's memory footprint is tiny, so we can afford many of them.
- Worker pool management is a well known problem.
- Worker spawn time disappears as a cost factor.
- The worker pool becomes a shared resource that can be managed and reported on, and its behavior becomes considerably more predictable.
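
To make that concrete, here is a minimal sketch of the pattern: a shared
queue of small block-range tasks drained by a fixed pool of workers. This is
illustration only, not how it would actually be wired into the backend -- in
PostgreSQL the workers would be background worker processes coordinating
through shared memory rather than pthreads, and all the names here
(block_task, worker_main, NUM_TASKS, ...) are invented for the example.

/*
 * Illustrative sketch only: a fixed worker pool draining a queue of
 * bite-sized tasks.  pthreads are used just to keep the example
 * self-contained; real PostgreSQL workers are separate processes.
 */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS     4
#define NUM_TASKS       64      /* many small tasks, not a few huge ones */
#define BLOCKS_PER_TASK 16

typedef struct
{
    int     first_block;        /* a small range of heap blocks to scan */
    int     last_block;
} block_task;

static block_task tasks[NUM_TASKS];
static int next_task = 0;       /* index of the next unclaimed task */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each worker claims one small task at a time until the queue is empty. */
static void *
worker_main(void *arg)
{
    long        id = (long) arg;

    for (;;)
    {
        block_task *t;

        pthread_mutex_lock(&queue_lock);
        if (next_task >= NUM_TASKS)
        {
            pthread_mutex_unlock(&queue_lock);
            break;              /* queue drained; this worker is done */
        }
        t = &tasks[next_task++];
        pthread_mutex_unlock(&queue_lock);

        /* "Process" the task; a real worker would scan these blocks. */
        printf("worker %ld: blocks %d..%d\n",
               id, t->first_block, t->last_block);
    }
    return NULL;
}

int
main(void)
{
    pthread_t   workers[NUM_WORKERS];

    /* Break one big job into many bite-sized block ranges. */
    for (int i = 0; i < NUM_TASKS; i++)
    {
        tasks[i].first_block = i * BLOCKS_PER_TASK;
        tasks[i].last_block = (i + 1) * BLOCKS_PER_TASK - 1;
    }

    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker_main, (void *) i);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}

The point of the claim-one-task-at-a-time loop is that the workers
load-balance themselves: no worker can get stuck with a disproportionate
share of the job, so the slowest straggler finishes at most one small task
after everyone else.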
