Re: LWLock contention: I think I understand the problem - Mailing list pgsql-hackers

From Hannu Krosing
Subject Re: LWLock contention: I think I understand the problem
Date
Msg-id 1010356360.10359.3.camel@rh72.home.ee
In response to Re: LWLock contention: I think I understand the problem  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Sun, 2002-01-06 at 02:44, Tom Lane wrote:
> Hannu Krosing <hannu@tm.ee> writes:
> > Could you rerun some of the tests on the same hardware but with a
> > uniprocessor kernel?
>
> I don't have root on that machine, but will see what I can arrange next
> week.
>
> > There were some reports about very poor insert performance on 4-way vs
> > 1-way machines.
>
> IIRC, that was fixed for 7.2.  (As far as I can tell from profiling,
> contention for the shared free-space-map is a complete nonissue, at
> least in this test.  That was something I was a tad worried about
> when I wrote the FSM code, but the tactic of locally caching a current
> insertion page seems to have sidestepped the problem nicely.)
>
> >> psql -c 'vacuum' $DB
> >>
> > Should this not be 'vacuum full' ?
>
> Don't see why I should expend the extra time to do a vacuum full.
> The point here is just to ensure a comparable starting state for all
> the runs.

Ok. I thought you would also want to compare performance at different
concurrency levels, where the number of dead tuples matters more, as shown
by the attached graph. It is for a dual PIII 800 on RH 7.2 with an IDE
disk, scale factor 5, 1-25 concurrent backends, and 10000 transactions per run.
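
Something along these lines (a sketch, not the exact script I used; the
client counts and the per-client -t arithmetic are just illustrative) drives
that kind of sweep, vacuuming before each run as above:

#!/bin/sh
# Initialize once at scale factor 5, then for each client count
# vacuum (comparable starting state) and run ~10000 transactions.
DB=pgbench
pgbench -i -s 5 $DB

for c in 1 2 5 10 15 20 25; do
    psql -c 'vacuum' $DB
    t=`expr 10000 / $c`     # pgbench -t is per client; keep the total near 10000
    pgbench -c $c -t $t $DB
done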

Attachment
