"Todd A. Cook" <tcook@blackducksoftware.com> writes:
> Tom Lane wrote:
>> I'm not seeing the very long CPU-bound commit phase that Todd is seeing.
> The commit looks CPU-bound when I let the residual I/O from the function
> execution die out before I issue the commit.

Well, mine is CPU-bound too; it's just much shorter relative to the
function execution time than you're showing. My test showed about
a 9x ratio at 10000 truncates and a 30x ratio at 30000; you've got
numbers in the range of 2x to 4x. So something is behaving
differently between your machines and mine.

I took a look through the CVS history and verified that there were
no post-8.4 commits that looked like they'd affect performance in
this area. So I think it's got to be a platform difference, not a
PG version difference. In particular I think we are probably looking
at a filesystem issue: how fast can you delete 10000 or 30000 files?
(I'm testing an ext3 filesystem on a plain ol' consumer-grade drive
that is probably lying about write complete, so I'd not be surprised
if deletions go a lot faster than they "ought to" ... except that
the disk issue shouldn't affect things if it's CPU-bound anyway.)
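
One way to sanity-check that hypothesis independently of Postgres is
to time raw unlink() calls directly. Something like the standalone
sketch below would do (hypothetical harness, not Postgres code; the
filename pattern and count are arbitrary):

/*
 * Create NFILES empty files in the current directory, then time
 * unlink() on all of them, approximating the per-relfilenode
 * deletion work that happens at commit.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define NFILES 10000

int
main(void)
{
    char        name[64];
    struct timeval start, end;
    int         i;

    for (i = 0; i < NFILES; i++)
    {
        int         fd;

        snprintf(name, sizeof(name), "junkfile_%d", i);
        fd = open(name, O_CREAT | O_WRONLY, 0600);
        if (fd < 0)
        {
            perror("open");
            return 1;
        }
        close(fd);
    }

    gettimeofday(&start, NULL);
    for (i = 0; i < NFILES; i++)
    {
        snprintf(name, sizeof(name), "junkfile_%d", i);
        if (unlink(name) != 0)
        {
            perror("unlink");
            return 1;
        }
    }
    gettimeofday(&end, NULL);

    printf("deleted %d files in %.3f s\n", NFILES,
           (end.tv_sec - start.tv_sec) +
           (end.tv_usec - start.tv_usec) / 1e6);
    return 0;
}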

As I said, my inclination for improving this area, if someone wanted
to work on it, would be to find a way to do truncate-in-place on
temp tables. ISTM that in the case you're showing --- truncate that's
not within a subtransaction, on a table that's drop-on-commit anyway
--- we should not need to keep around the pre-truncation data. So we
could just do ftruncate instead of creating a new file, and we'd not
need a new copy of the pg_class row either. So that should make both
the function time and the commit time a lot better. But I'm not sure
if the use-case is popular enough to deserve such a hack.
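
At the filesystem level, that fast path is nothing more than
ftruncate(2) on the existing file. A minimal standalone sketch of the
operation (illustrative only, not backend code; the filename is made
up):

/*
 * ftruncate(2) discards the file's contents while keeping the same
 * file, so there's nothing extra to unlink at commit, unlike the
 * create-a-new-relfilenode path.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
    int         fd = open("demo_rel", O_CREAT | O_RDWR, 0600);

    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    /* Write some pre-truncation data. */
    if (write(fd, "old contents\n", 13) != 13)
    {
        perror("write");
        return 1;
    }

    /* Truncate in place: same inode, same name, length zero. */
    if (ftruncate(fd, 0) != 0)
    {
        perror("ftruncate");
        return 1;
    }

    close(fd);
    unlink("demo_rel");     /* clean up the demo file */
    return 0;
}
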
regards, tom lane