Re: LWLock contention: I think I understand the problem

From Hannu Krosing
Subject Re: LWLock contention: I think I understand the problem
Date
Msg-id 1010358727.10359.5.camel@rh72.home.ee
In response to Re: LWLock contention: I think I understand the problem  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Mon, 2002-01-07 at 06:37, Tom Lane wrote:
> Hannu Krosing <hannu@krosing.net> writes:
> > > > Should this not be 'vacuum full' ?
> > >
> > > Don't see why I should expend the extra time to do a vacuum full.
> > > The point here is just to ensure a comparable starting state for all
> > > the runs.
>
> > Ok. I thought that you would also want to compare performance for different
> > concurrency levels where the number of dead tuples matters more as shown by
> > the attached graph. It is for Dual PIII 800 on RH 7.2 with IDE hdd, scale 5,
> > 1-25 concurrent backends and 10000 trx per run
>
> VACUUM and VACUUM FULL will provide the same starting state as far as
> number of dead tuples goes: none.

I misinterpreted the fact that the new VACUUM will skip locked pages -
there are none if VACUUM is run on its own.
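
For reference, the two cleanup commands being compared between runs
(a minimal sketch; 'accounts' is the largest of the tables pgbench
creates):

    -- plain VACUUM: marks dead tuple space reusable in place
    VACUUM accounts;

    -- VACUUM FULL: additionally compacts the table, so new tuples
    -- get appended at its end
    VACUUM FULL accounts;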

> So that doesn't explain the
> difference you see.  My guess is that VACUUM FULL looks better because
> all the new tuples will get added at the end of their tables; possibly
> that improves I/O locality to some extent.  After a plain VACUUM the
> system will tend to allow each backend to drop new tuples into a
> different page of a relation, at least until the partially-empty pages
> all fill up.
>
> What -B setting were you using?

I had the following in postgresql.conf:

shared_buffers = 4096
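
(shared_buffers sets the same thing as the postmaster's -B switch:
the number of shared buffers, each 8 kB with the default block size.
So this is equivalent to starting the server with

    postmaster -B 4096    # 4096 buffers * 8 kB = 32 MB of buffer space

assuming the default block size.)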


I attach a similar run, only with scale 50, from my desktop computer
(uniprocessor Athlon 850 MHz, Red Hat 7.1).

BTW, both were running unpatched PostgreSQL 7.2b4.
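
For concreteness, each run has roughly this shape - a sketch, not the
exact script used; the database name 'bench' and the client-count steps
are illustrative, with -t chosen so that clients * transactions comes
to about 10000 per run:

    pgbench -i -s 50 bench                  # initialize at scale factor 50
    for c in 1 5 10 15 20 25; do
        psql -c 'VACUUM' bench              # comparable starting state per run
        pgbench -c $c -t $((10000 / c)) bench
    done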

--------------
Hannu


