Re: PostgreSQL struggling during high load - Mailing list pgsql-performance

From Mindaugas Riauba
Subject Re: PostgreSQL struggling during high load
Date
Msg-id 025401c559fe$bb852e80$f20214ac@bite.lt
In response to PostgreSQL struggling during high load  ("Mindaugas Riauba" <mind@bi.lt>)
List pgsql-performance
> >   Hm. Yes. The number of locks varies quite a lot (10-600). Now what to
> > investigate further? We do not use explicit locks in our functions. We
> > use quite simple update/delete where key=something;
> >   Some sample output (select * from pg_locks order by pid) is below.
>
> The sample doesn't show any lock issues (there are no processes waiting
> for ungranted locks).  The thing that typically burns people is foreign
> key conflicts.  In current releases, if you have a foreign key reference
> then an insert in the referencing table takes an exclusive row lock on
> the referenced (master) row --- which means that two inserts using the
> same foreign key value block each other.
>
> You can alleviate the issue by making all your foreign key checks
> deferred, but that just shortens the period of time the lock is held.
> There will be a real solution in PG 8.1, which has sharable row locks.
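  The deferral workaround described above looks like this in SQL (a sketch
only; the `messages`/`queues` tables and the constraint name are made up for
illustration):

```sql
-- Make the FK check deferrable, so the exclusive row lock on the
-- referenced (master) row is taken at COMMIT rather than at INSERT time.
ALTER TABLE messages
    ADD CONSTRAINT messages_queue_fk
    FOREIGN KEY (queue_id) REFERENCES queues (id)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS messages_queue_fk DEFERRED;
INSERT INTO messages (queue_id, body) VALUES (1, 'hello');
-- ...other work here; the lock on the queues row is only acquired
-- when the deferred check fires at commit.
COMMIT;
```

As the reply notes, this only shortens the window in which two inserts using
the same foreign key value block each other; it does not remove the lock.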

  In that case our foreign key constraint should not be an issue, since it
is on msg_id, which is pretty much unique among concurrent transactions.

  And I noticed that the "storms" happen along with higher write activity. If
bo in vmstat shows 25+MB in 2s, then most likely I will get a "storm" of slow
queries in the server log. How can we even out the write activity? fsync=off,
and the bgwriter settings are at their defaults.
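  If the bursts come from checkpoints flushing a large pool of dirty buffers
at once, making the background writer more aggressive can spread those writes
out over time. A sketch of the 8.0-era knobs in postgresql.conf (the values
are illustrative starting points, not tested recommendations):

```
# postgresql.conf -- PostgreSQL 8.0 background writer settings (illustrative)
bgwriter_delay = 200       # ms to sleep between bgwriter rounds
bgwriter_percent = 5       # write a larger share of dirty buffers per round
bgwriter_maxpages = 200    # allow more pages to be written per round
```

Raising bgwriter_percent/bgwriter_maxpages trades steady background I/O for
smaller checkpoint spikes; whether that helps depends on the workload.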

  And is it possible to log which query inside a function takes the longest
time to complete?
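  There is no built-in per-statement timing inside plpgsql functions in this
era of PostgreSQL, but a function can time its own statements with
timeofday() (which, unlike now(), advances within a transaction) and report
via RAISE NOTICE. A sketch; process_msg, msgs and msg_queue are made-up names:

```sql
CREATE OR REPLACE FUNCTION process_msg(p_id integer) RETURNS void AS $$
DECLARE
    t0 timestamptz;
BEGIN
    t0 := timeofday()::timestamptz;
    UPDATE msgs SET status = 1 WHERE msg_id = p_id;
    RAISE NOTICE 'update of % took %', p_id, timeofday()::timestamptz - t0;

    t0 := timeofday()::timestamptz;
    DELETE FROM msg_queue WHERE msg_id = p_id;
    RAISE NOTICE 'delete of % took %', p_id, timeofday()::timestamptz - t0;
END;
$$ LANGUAGE plpgsql;
```

The NOTICE lines go to the client and, depending on log_min_messages, to the
server log, so the slow step inside the function shows up by name.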

  Also, I do not know if it matters, but the PG database is on an ext3
partition with the data=journal option.
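  It may well matter: with data=journal every data block is written twice,
once to the journal and once to its final location, which roughly doubles the
write volume and could feed the bursts seen in vmstat. data=ordered (the
usual ext3 default) or data=writeback is the more common choice for a
PostgreSQL data partition. An illustrative /etc/fstab line (device and mount
point are made up):

```
# /etc/fstab -- illustrative entry for the PG data partition
/dev/sda3   /var/lib/pgsql   ext3   noatime,data=ordered   1 2
```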

  Thanks,

  Mindaugas

