Re: Slow concurrent update of same row in a given table - Mailing list pgsql-performance

From: Gavin Sherry
Subject: Re: Slow concurrent update of same row in a given table
Date:
Msg-id: Pine.LNX.4.58.0509282245040.19538@linuxworld.com.au
In response to: Slow concurrent update of same row in a given table  (Rajesh Kumar Mallah <mallah.rajesh@gmail.com>)
Responses: Re: Slow concurrent update of same row in a given table
List: pgsql-performance
On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:

> Hi
>
> While doing some stress testing for updates in a small sized table
> we found the following results. We are not too happy about the speed
> of the updates particularly at high concurrency (10 clients).
>
> Initially we get 119 updates / sec but it drops to 10 updates/sec
> as concurrency is increased.
>
> PostgreSQL: 8.0.3
> -------------------------------
> TABLE STRUCTURE: general.stress
> -------------------------------
> +------------------+--------------------------+-----------+
> | Column           | Type                     | Modifiers |
> +------------------+--------------------------+-----------+
> | dispatch_id      | integer                  | not null  |
> | query_id         | integer                  |           |
> | generated        | timestamp with time zone |           |
> | unsubscribes     | integer                  |           |
> | read_count       | integer                  |           |
> | status           | character varying(10)    |           |
> | bounce_tracking  | boolean                  |           |
> | dispatch_hour    | integer                  |           |
> | dispatch_date_id | integer                  |           |
> +------------------+--------------------------+-----------+
> Indexes:
>     "stress_pkey" PRIMARY KEY, btree (dispatch_id)
>
> UPDATE STATEMENT:
> update general.stress set read_count=read_count+1 where dispatch_id=114

This means you are updating only one row, correct?

> Number of Copies | Updates per Sec
>
>  1 --> 119
>  2 -->  59
>  3 -->  38
>  4 -->  28
>  5 -->  22
>  6 -->  19
>  7 -->  16
>  8 -->  14
>  9 -->  11
> 10 -->  11
> 11 -->  10

So, do 11 instances result in 10 updates per second database-wide, or 10
per instance? If it is per instance, then 11 * 10 = 110 is close to the
119 updates/sec you see with a single connection.

That being said, when you've got 10 connections fighting over a single
row, I wouldn't be surprised by bad performance: row-level locking
serializes the updates, so each transaction must wait for the previous
one to commit before its own UPDATE can proceed.
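A rough sketch (mine, not from the thread) of why throughput stays flat: using Python threads as stand-ins for the clients and a single lock as a stand-in for the row lock, the total amount of work completed is the same no matter how many clients share it, so per-client throughput simply divides by the client count.

```python
# Hypothetical model, not PostgreSQL itself: all clients contend for one
# lock, just as all UPDATEs of the same row contend for its row lock.
import threading

ROW_LOCK = threading.Lock()   # stands in for the row-level lock
TOTAL_UPDATES = 1100          # fixed update budget shared by all clients

def run_clients(n_clients):
    """Run n_clients threads that serialize on ROW_LOCK; return total work done."""
    counts = [0] * n_clients
    remaining = [TOTAL_UPDATES]

    def client(i):
        while True:
            with ROW_LOCK:            # only one "UPDATE" proceeds at a time
                if remaining[0] == 0:
                    return
                remaining[0] -= 1
                counts[i] += 1

    threads = [threading.Thread(target=client, args=(i,)) for i in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts)

# Total completed work is identical for 1 or 11 clients (both 1100),
# mirroring how ~119 updates/sec splits into ~10/sec per client at 11 clients.
print(run_clients(1), run_clients(11))
```

The point of the model: adding clients adds waiters, not capacity, because the contended row admits only one writer at a time.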

Also, at 119 updates a second, with each UPDATE leaving a dead tuple
behind, you're more than doubling the table's initial size each second.
How often are you vacuuming, and are you using VACUUM or VACUUM FULL?
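A back-of-the-envelope sketch of the dead-tuple point (my numbers for the table size and vacuum interval are assumptions; only the 119 updates/sec rate comes from the post). Under MVCC every UPDATE creates a new row version and leaves the old one dead, so without vacuuming the hot row bloats the table; a periodic plain VACUUM caps the bloat to whatever accumulated since the last pass.

```python
# Hypothetical arithmetic model of MVCC bloat on a hot single-row counter.
UPDATES_PER_SEC = 119    # observed single-client rate from the post

def dead_tuples(seconds, vacuum_interval=None):
    """Dead tuples present after `seconds`, with optional periodic VACUUM.

    With no vacuum, every UPDATE's old row version is still dead.
    A plain VACUUM marks dead tuples reusable, so only the tail of
    updates since the last vacuum pass remains dead.
    """
    if vacuum_interval is None:
        return UPDATES_PER_SEC * seconds
    return UPDATES_PER_SEC * (seconds % vacuum_interval)

# After 10 s with no vacuum: 1190 dead tuples; vacuuming every 3 s
# leaves only the 1 s of updates since the last pass (119 dead tuples).
print(dead_tuples(10), dead_tuples(10, vacuum_interval=3))
```

Note the model covers plain VACUUM (space made reusable in place); VACUUM FULL additionally rewrites the table to return space to the OS, at the cost of an exclusive lock.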

Gavin
