FW: can't handle large number of INSERT/UPDATEs - Mailing list pgsql-performance

From Rod Dutton
Subject FW: can't handle large number of INSERT/UPDATEs
Date 26 October 2004
Msg-id 20041026081155.6C5494070A9@mra02.ex.eclipse.net.uk
List pgsql-performance
>>Eliminate that contention point, and you will have solved your problem.

I agree. If your updates are slow, then you will get a queue building up.
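A quick way to confirm that it really is lock contention is to look in
pg_locks for requests that have not been granted, joined to pg_stat_activity
to see the queries involved - roughly something like this (column names as
they are in the 7.4/8.0 catalogs, i.e. procpid and current_query):

    -- show backends that are waiting on a lock, and what they are running
    SELECT l.pid, l.relation::regclass AS rel, l.mode, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;

Note that a backend waiting on another transaction's row lock shows up as an
ungranted lock on a transaction id rather than on the table itself.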

Make sure that:
1) All your indexing is optimised.
2) You are doing regular vacuuming (bloated tables will cause a slow down
because all the dead space still has to be read and cached).
3) Your max_fsm_pages setting is large enough - it needs to be big enough to
track all the pages freed up between vacuums (+ some spare for good measure).
4) You do a full vacuum - do one to start and then do another after you have
had 2 & 3 (above) in place for a while - if the full vacuum still reclaims
lots of dead tuples then your max_fsm_pages setting is too low.
5) Also try reindexing or dropping/recreating the indexes in question (a
rough sketch of points 2-5 follows this list), as...
"PostgreSQL is unable to reuse B-tree index pages in certain cases. The
problem is that if indexed rows are deleted, those index pages can only be
reused by rows with similar values. For example, if indexed rows are deleted
and newly inserted/updated rows have much higher values, the new rows can't
use the index space made available by the deleted rows. Instead, such new
rows must be placed on new index pages. In such cases, disk space used by
the index will grow indefinitely, even if VACUUM is run frequently. "
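As a rough sketch of points 2-5 - the table big_table, index big_table_idx
and column some_col are just placeholders for your own names, and the
max_fsm_pages value is only an example:

    # postgresql.conf - must be large enough to track every page with
    # reclaimable space between vacuums (changing it needs a server restart)
    max_fsm_pages = 200000

    -- regular maintenance; a database-wide VACUUM VERBOSE also reports
    -- how much of the free space map is actually in use
    VACUUM VERBOSE ANALYZE big_table;

    -- one-off full vacuum to compact a table that is already bloated
    VACUUM FULL ANALYZE big_table;

    -- rebuild a bloated index in place, or drop and recreate it
    REINDEX INDEX big_table_idx;
    -- or:
    DROP INDEX big_table_idx;
    CREATE INDEX big_table_idx ON big_table (some_col);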

Are your updates executed directly or do you use stored procs?  We had a
recent problem with stored procs, as they store a "one size fits all" query
plan when first compiled - this can be less than optimal in some cases.
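One common way round that is to build the statement as a string and run it
through EXECUTE inside the PL/pgSQL function, which makes the planner look at
it on every call instead of reusing the plan cached at first execution. A
rough sketch (the function, table and column names are made up for
illustration; the doubled-up quotes are needed because the whole function
body is itself a string literal):

    -- without EXECUTE, the UPDATE below would be planned once per session
    -- and that plan reused for every subsequent call
    CREATE OR REPLACE FUNCTION bump_hits(integer) RETURNS integer AS '
    DECLARE
        cust ALIAS FOR $1;
    BEGIN
        EXECUTE ''UPDATE hit_counts SET hits = hits + 1 WHERE customer_id = ''
                || quote_literal(cust);
        RETURN cust;
    END;
    ' LANGUAGE plpgsql;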

We have a similar-sounding app to yours, and if tackled correctly, all of
the above will make a massive difference in performance.

Rod

-----Original Message-----
From: pgsql-performance-owner@postgresql.org
[mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Rod Taylor
Sent: 25 October 2004 22:19
To: Anjan Dave
Cc: Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs

On Mon, 2004-10-25 at 16:53, Anjan Dave wrote:
> Hi,
>
> I am dealing with an app here that uses pg to handle a few thousand
> concurrent web users. It seems that under heavy load, the INSERT and
> UPDATE statements to one or two specific tables keep queuing up, to
> the count of 150+ (one table has about 432K rows, other has about
> 2.6 million rows), resulting in "wait"s for other queries, and then

This isn't an index issue; it's a locking issue. It sounds like you have a
bunch of inserts and updates hitting the same rows over and over again.

Eliminate that contention point, and you will have solved your problem.

Feel free to describe the processes involved, and we can help you do
that.




