Re: slow commits with heavy temp table usage in 8.4.0 - Mailing list pgsql-hackers

From: Alex Hunsaker
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date:
Msg-id: 34d269d40908061113j5c883d91s5633a35e1e38bf87@mail.gmail.com
In response to: Re: slow commits with heavy temp table usage in 8.4.0 ("Todd A. Cook" <tcook@blackducksoftware.com>)
Responses: Re: slow commits with heavy temp table usage in 8.4.0 ("Todd A. Cook" <tcook@blackducksoftware.com>)
List: pgsql-hackers
On Thu, Aug 6, 2009 at 11:32, Todd A. Cook <tcook@blackducksoftware.com> wrote:
> Tom Lane wrote:
>>
>> I took a look through the CVS history and verified that there were
>> no post-8.4 commits that looked like they'd affect performance in
>> this area.  So I think it's got to be a platform difference not a
>> PG version difference.  In particular I think we are probably looking
>> at a filesystem issue: how fast can you delete [...] 30000 files.
>
> I'm still on Fedora 7, so maybe this will be motivation to upgrade.
>
> FYI, on my 8.2.13 system, the test created 30001 files which were all
> deleted during the commit.  On my 8.4.0 system, the test created 60001
> files, of which 30000 were deleted at commit and 30001 disappeared
> later (presumably during a checkpoint?).

Smells like the FSM?  With the number of files doubled, maybe something
simple like turning on dir_index (if you're on ext3) would help?
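
A minimal sketch (assuming Python and an arbitrary scratch directory, not
$PGDATA itself) of measuring the thing Tom points at above, i.e. how fast
the filesystem can unlink ~30000 files sitting in one directory:

    import os
    import tempfile
    import time

    N_FILES = 30000  # roughly the file counts mentioned above
    workdir = tempfile.mkdtemp(prefix="unlink_test_")  # throwaway scratch dir

    # Create N empty files, standing in for the relation files a backend
    # has to unlink at commit time.
    for i in range(N_FILES):
        open(os.path.join(workdir, "f%d" % i), "w").close()

    start = time.time()
    for i in range(N_FILES):
        os.unlink(os.path.join(workdir, "f%d" % i))
    elapsed = time.time() - start

    os.rmdir(workdir)
    print("unlinked %d files in %.1f seconds" % (N_FILES, elapsed))

Running that on both boxes (ideally on the same filesystem that holds
$PGDATA), with and without dir_index, should show whether this is a
filesystem difference rather than a PG one.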


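And a rough sketch of the kind of test Todd describes (assuming psycopg2,
a throwaway database named "test", and that the original test looked
roughly like this): create tens of thousands of ON COMMIT DROP temp tables
in one transaction and time the commit, which is where the temp relation
files (plus the extra per-table files Todd observed on 8.4) get unlinked:

    import time
    import psycopg2

    N_TABLES = 30000  # only roughly matches the file counts mentioned above

    conn = psycopg2.connect("dbname=test")  # hypothetical connection string
    cur = conn.cursor()

    # Note: holding this many table locks in one transaction may require
    # raising max_locks_per_transaction.
    for i in range(N_TABLES):
        cur.execute("CREATE TEMP TABLE tmp_%d (id int) ON COMMIT DROP" % i)

    start = time.time()
    conn.commit()  # the step reported slow: the temp tables are dropped here
    print("commit took %.1f seconds" % (time.time() - start))

    conn.close()

Comparing that number on 8.2 and 8.4, before and after turning on
dir_index, would separate the filesystem effect from the extra files.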