Re: Merge algorithms for large numbers of "tapes" - Mailing list pgsql-hackers

From: Jim C. Nasby
Subject: Re: Merge algorithms for large numbers of "tapes"
Date: 2006-03-08 17:49:04
Msg-id: 20060308174904.GD45250@pervasive.com
In response to: Re: Merge algorithms for large numbers of "tapes" (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Merge algorithms for large numbers of "tapes"
           Re: Merge algorithms for large numbers of "tapes"
List: pgsql-hackers
On Wed, Mar 08, 2006 at 11:20:50AM -0500, Tom Lane wrote:
> "Jim C. Nasby" <jnasby@pervasive.com> writes:
> > If we do have to fail to disk, cut back to 128MB, because having 8x that
> > certainly won't make the sort run anywhere close to 8x faster.
> 
> Not sure that follows.  In particular, the entire point of the recent
> changes has been to extend the range in which we can use a single merge
> pass --- that is, write the data once as N sorted runs, then merge them
> in a single read pass.  As soon as you have to do an actual merge-back-
> to-disk pass, your total I/O volume doubles, so there is definitely a
> considerable gain if that can be avoided.  And a larger work_mem
> translates directly to fewer/longer sorted runs.

But do fewer/longer sorted runs translate into not having to merge back
to disk? I thought that was controlled by whether we had to be able to
rewind the result set.
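
FWIW, here's a back-of-the-envelope model of the I/O math you describe,
as I understand it (a rough sketch with made-up numbers, not anything
from tuplesort.c; it assumes fixed-size initial runs and a fixed merge
fanout):

    import math

    # Sketch: count how many times the data set crosses the disk
    # during an external merge sort, assuming each initial run is
    # roughly work_mem long.
    def io_passes(data_mb, work_mem_mb, fanout):
        runs = math.ceil(data_mb / work_mem_mb)  # initial sorted runs
        if runs <= 1:
            return 0                             # fits in memory, never spills
        levels = 0                               # merge passes over the data
        while runs > 1:
            runs = math.ceil(runs / fanout)
            levels += 1
        # Run formation writes everything once, the final merge reads
        # it once, and each intermediate level adds a full read + write.
        return 2 * levels

    # 8 GB sort, 1 GB work_mem, 64-way merge: 8 runs, one merge pass,
    # so the data hits disk twice (write the runs, read them back).
    print(io_passes(8192, 1024, 64))   # -> 2

    # Same sort at 128 MB work_mem with a 6-way merge: 64 runs need
    # three merge levels, so total I/O volume triples to 6 passes.
    print(io_passes(8192, 128, 6))     # -> 6

If that model is right, extra work_mem only buys anything once the run
count would otherwise exceed the merge fanout; below that threshold
every setting gets the same two passes.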
-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

