Re: [PERFORMANCE] work_mem vs temp files issue - Mailing list pgsql-performance

From Jaime Casanova
Subject Re: [PERFORMANCE] work_mem vs temp files issue
Date
Msg-id 3073cc9b1001122231m1a15d187lae2a8096a813361b@mail.gmail.com
In response to Re: [PERFORMANCE] work_mem vs temp files issue  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: [PERFORMANCE] work_mem vs temp files issue  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: [PERFORMANCE] work_mem vs temp files issue  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-performance
On Mon, Jan 11, 2010 at 3:18 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Hmm.  Not clear where the temp files are coming from, but it's *not* the
> sort --- the "internal sort ended" line shows that that sort never went
> to disk.  What kind of plan is feeding the sort node?
>

some time ago, you said:
"""
It might be useful to turn on trace_sort to see if the small files
are coming from sorts.  If they're from hashes I'm afraid there's
no handy instrumentation ...
"""

and that is clearly what was bothering me... because most of the temp files
are coming from hashes...
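(For anyone following along: a minimal sketch of the session settings involved, assuming a stock build where TRACE_SORT is compiled in, which it is by default, and a server new enough to have log_temp_files, which appeared in 8.3:)

```sql
-- enable sort tracing for this session so sort activity shows up in the log
SET trace_sort = on;
-- log every temp file created, regardless of size (value is a threshold in kB)
SET log_temp_files = 0;
```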

why don't we show some of that info in EXPLAIN? For example, we could
show the memory used, no? Or whether the hash went to disk... If I remove
the #ifdef HJDEBUG, it seems we even know how many batches the hash
used...

the reason I say "most of the temp files" is that when I removed the
#ifdef HJDEBUG it said I was using 10 batches in total, but there
were 14 temp files created (I guess we use 1 file per batch, no?)

"""
nbatch = 1, nbuckets = 1024
nbatch = 1, nbuckets = 1024
nbatch = 8, nbuckets = 2048
"""

--
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157
