
From: Tom Lane
Subject: Re: LWLock contention: I think I understand the problem
Date:
Msg-id: 29890.1010367425@sss.pgh.pa.us
In response to: Re: LWLock contention: I think I understand the problem (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: LWLock contention: I think I understand the problem (Hannu Krosing <hannu@krosing.net>)
List: pgsql-hackers
Hannu Krosing <hannu@krosing.net> writes:
>>> Should this not be 'vacuum full' ?
>> 
>> Don't see why I should expend the extra time to do a vacuum full.
>> The point here is just to ensure a comparable starting state for all
>> the runs.

> Ok. I thought that you would also want to compare performance for different 
> concurrency levels where the number of dead tuples matters more as shown by
> the attached graph. It is for Dual PIII 800 on RH 7.2 with IDE hdd, scale 5,
> 1-25 concurrent backends and 10000 trx per run

VACUUM and VACUUM FULL will provide the same starting state as far as
number of dead tuples goes: none.  So that doesn't explain the
difference you see.  My guess is that VACUUM FULL looks better because
all the new tuples will get added at the end of their tables; possibly
that improves I/O locality to some extent.  After a plain VACUUM the
system will tend to allow each backend to drop new tuples into a
different page of a relation, at least until the partially-empty pages
all fill up.
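
If anyone wants to see that placement difference directly, here is a minimal
sketch (illustrative only, not taken from these runs; it assumes a scratch
table "t" and uses generate_series, so it wants a current server). ctid
exposes each row's (block, offset), which shows where a fresh insert lands
after each kind of vacuum:

    create table t (id int, pad text);
    insert into t select g, repeat('x', 100) from generate_series(1, 10000) g;
    delete from t where id % 2 = 0;    -- leave free space scattered through the table

    vacuum t;                          -- plain VACUUM: freed space is kept in place
    insert into t values (0, 'new');
    select ctid from t where id = 0;   -- typically reuses an early, partly-empty page

    vacuum full t;                     -- rewrites the table compactly
    insert into t values (-1, 'new');
    select ctid from t where id = -1;  -- typically lands on the last page

That end-of-table placement is what would give VACUUM FULL the better I/O
locality guessed at above.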

What -B setting were you using?

        regards, tom lane

