Tom Lane <tgl@sss.pgh.pa.us> writes:
> Greg Stark <gsstark@mit.edu> writes:
> > Does sort_mem have to be larger than the corresponding pgsql_tmp area that
> > would be used if postgres runs out of sort_mem?
>
> Probably. At least in recent versions, the "do we still fit in
> sort_mem" logic tries to account for palloc overhead and alignment
> padding, neither of which are present in the on-disk representation
> of the same tuples. So data unloaded to disk should be more compact
> than it was in memory. You didn't say what you were sorting, but
> if it's narrow rows (like maybe just an int or two) the overhead
> could easily be more than the actual data size.
Thank you. 64M turns out to be enough after all; 48M just wasn't big enough. At
64M I don't see any further usage of pgsql_tmp. The largest on-disk sort was
35,020,800 bytes, and since the in-memory size must fall somewhere between 48M
and 64M, that translates to a 44%-92% space overhead.
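In case it helps, here's the arithmetic behind those figures (just a rough
sketch; it assumes the real in-memory size falls between the 48M setting that
spilled and the 64M one that didn't):

    # Rough bounds on the in-memory vs. on-disk overhead for this sort.
    on_disk = 35020800               # largest pgsql_tmp sort file, in bytes
    for mem_mb in (48, 64):          # spilled at 48M, fit at 64M
        mem = mem_mb * 1024 * 1024
        overhead = 100.0 * (mem - on_disk) / on_disk
        print("%dM sort_mem -> %.0f%% overhead" % (mem_mb, overhead))
    # prints roughly 44% and 92%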
It turns out it was the same data structure as in my earlier message, which
puts it at 53-byte records in practice: two integers, a float, and a varchar of
up to 12 characters.
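Putting the two together (again just a back-of-envelope sketch, using the same
48M-64M bracketing as above), that implies roughly 76-102 bytes per record in
memory, i.e. something like 23-49 bytes of per-tuple overhead on top of the 53
bytes that hit disk:

    # Implied in-memory footprint per record, from the 53-byte on-disk size
    # and the 44%-92% overhead range computed above.
    record_on_disk = 53
    for overhead in (0.44, 0.92):
        print("%.0f bytes per record in memory" % (record_on_disk * (1 + overhead)))
    # roughly 76 and 102 bytes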
--
greg