Re: Tuning Question sort_mem vs pgsql_tmp - Mailing list pgsql-general

From Greg Stark
Subject Re: Tuning Question sort_mem vs pgsql_tmp
Date
Msg-id 87ptq75lls.fsf@stark.dyndns.tv
In response to Re: Tuning Question sort_mem vs pgsql_tmp  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Tom Lane <tgl@sss.pgh.pa.us> writes:

> Greg Stark <gsstark@mit.edu> writes:
> > Does sort_mem have to be larger than the corresponding pgsql_tmp area that
> > would be used if postgres runs out of sort_mem?
>
> Probably.  At least in recent versions, the "do we still fit in
> sort_mem" logic tries to account for palloc overhead and alignment
> padding, neither of which are present in the on-disk representation
> of the same tuples.  So data unloaded to disk should be more compact
> than it was in memory.  You didn't say what you were sorting, but
> if it's narrow rows (like maybe just an int or two) the overhead
> could easily be more than the actual data size.

Thank you. 64M turns out to be enough after all; 48M just wasn't big enough. At
64M I no longer see any usage of pgsql_tmp. The largest on-disk sort was
35,020,800 bytes, and since the in-memory footprint must fall somewhere between
48M (50,331,648 bytes) and 64M (67,108,864 bytes), that translates to roughly a
44%-92% space overhead.
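For concreteness, something like the following is all that's involved (table
and column names here are made up; sort_mem is set in kilobytes, and the temp
files typically land under $PGDATA/base/<dboid>/pgsql_tmp):

    -- Raise the per-sort memory limit for this session only (value is in KB,
    -- so 65536 = 64M).
    SET sort_mem = 65536;

    -- Re-run the sort-heavy query (hypothetical names).  If the sort now fits
    -- in memory, no new temp files should appear under
    -- $PGDATA/base/<dboid>/pgsql_tmp while it runs.
    SELECT a, b, x, tag FROM example_rows ORDER BY x;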

It turns out it was the same data structure as in my earlier message, which
puts it at 53-byte records in practice: two integers, a float, and a varchar
of up to 12 characters.
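Schematically, that row layout is something like this (hypothetical names, and
assuming the float is a float8):

    CREATE TABLE example_rows (
        a   integer,        -- first integer
        b   integer,        -- second integer
        x   float8,         -- the float (assuming 8 bytes)
        tag varchar(12)     -- up to 12 characters
    );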

--
greg
