Re: Parallel tuplesort, partitioning, merging, and the future - Mailing list pgsql-hackers

From Claudio Freire
Subject Re: Parallel tuplesort, partitioning, merging, and the future
Date
Msg-id CAGTBQpZbv_3mNgxsKtrRk_xCUc5Yj-b=S0XvRVX9pxeCANB_kg@mail.gmail.com
In response to Parallel tuplesort, partitioning, merging, and the future  (Peter Geoghegan <pg@heroku.com>)
Responses Re: Parallel tuplesort, partitioning, merging, and the future
List pgsql-hackers
On Mon, Aug 8, 2016 at 4:44 PM, Peter Geoghegan <pg@heroku.com> wrote:
> The basic idea I have in mind is that we create runs in workers in the
> same way that the parallel CREATE INDEX patch does (one output run per
> worker). However, rather than merging in the leader, we use a
> splitting algorithm to determine partition boundaries on-the-fly. The
> logical tape stuff then does a series of binary searches to find those
> exact split points within each worker's "final" tape. Each worker
> reports the boundary points of its original materialized output run in
> shared memory. Then, the leader instructs workers to "redistribute"
> slices of their final runs among each other, by changing the tapeset
> metadata to reflect that each worker has nworker input tapes with
> redrawn offsets into a unified BufFile. Workers immediately begin
> their own private on-the-fly merges.

I think it's a great design, but for that, per-worker final tapes have
to always be random-access.

I'm not hugely familiar with the code, but IIUC there's some penalty
to making them random-access, right?
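For intuition, the quoted scheme could be sketched roughly as below. This is a toy Python sketch, not PostgreSQL code: the splitter choice here is a naive sample of the combined data, whereas the proposal determines partition boundaries on-the-fly, and real workers would operate on logical tapes in a shared BufFile rather than in-memory lists.

```python
# Toy sketch of the partition-and-redistribute merge described above:
# each worker produces one sorted run; the leader picks partition
# boundaries; each worker binary-searches every run for its boundary
# offsets (the "series of binary searches" on each final tape) and then
# privately merges its nworker input slices, so the concatenation of
# all workers' outputs is globally sorted.
import bisect
import heapq

def partition_merge(runs, nworkers):
    # Leader: choose nworkers-1 splitters. Here we naively take
    # quantiles of the concatenated input; the actual proposal would
    # derive boundaries without materializing everything centrally.
    allvals = sorted(v for run in runs for v in run)
    splitters = [allvals[len(allvals) * i // nworkers]
                 for i in range(1, nworkers)]

    outputs = []
    for i in range(nworkers):
        lo = splitters[i - 1] if i > 0 else None
        hi = splitters[i] if i < nworkers - 1 else None
        # Binary-search each sorted run for this worker's slice,
        # standing in for redrawn offsets into a unified BufFile.
        slices = []
        for run in runs:
            start = bisect.bisect_left(run, lo) if lo is not None else 0
            stop = bisect.bisect_left(run, hi) if hi is not None else len(run)
            slices.append(run[start:stop])
        # Worker i's private on-the-fly merge of its input slices.
        outputs.append(list(heapq.merge(*slices)))
    return outputs

runs = [sorted([5, 1, 9, 3]), sorted([2, 8, 4, 7]), sorted([6, 0, 11, 10])]
parts = partition_merge(runs, 3)
merged = [v for part in parts for v in part]
assert merged == sorted(v for run in runs for v in run)
```

Note that each worker's output partition is entirely disjoint from the others', which is what lets the final results be consumed in worker order with no leader-side merge step.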


