On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
> Hi
>
> While doing some stress testing for updates on a small table
> we found the following results. We are not too happy about the speed
> of the updates, particularly at high concurrency (10 clients).
>
> Initially we get 119 updates / sec but it drops to 10 updates/sec
> as concurrency is increased.
>
> PostgreSQL: 8.0.3
> -------------------------------
> TABLE STRUCTURE: general.stress
> -------------------------------
> | Column           | Type                     | Modifiers |
> +------------------+--------------------------+-----------+
> | dispatch_id      | integer                  | not null  |
> | query_id         | integer                  |           |
> | generated        | timestamp with time zone |           |
> | unsubscribes     | integer                  |           |
> | read_count       | integer                  |           |
> | status           | character varying(10)    |           |
> | bounce_tracking  | boolean                  |           |
> | dispatch_hour    | integer                  |           |
> | dispatch_date_id | integer                  |           |
> +------------------+--------------------------+-----------+
> Indexes:
> "stress_pkey" PRIMARY KEY, btree (dispatch_id)
>
> UPDATE STATEMENT:
> update general.stress set read_count=read_count+1 where dispatch_id=114
This means you are updating only one row, correct?
> Number of Copies | Updates per Sec
>
>  1 --> 119
>  2 -->  59
>  3 -->  38
>  4 -->  28
>  5 -->  22
>  6 -->  19
>  7 -->  16
>  8 -->  14
>  9 -->  11
> 10 -->  11
> 11 -->  10
So, do 11 instances result in 10 updates per second database-wide, or
per instance? If it is per instance, then 11 * 10 = 110, which is close
to the throughput of a single connection (119).
That being said, when you've got 10 connections fighting over the same row,
I wouldn't be surprised by bad performance: each UPDATE has to take a
row-level lock on that tuple, so the updates end up running one at a time.
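A quick way to see that serialization from psql (just a sketch, using the
table and id from your mail; the "session" labels are only comments):

    -- session 1
    BEGIN;
    UPDATE general.stress SET read_count = read_count + 1
        WHERE dispatch_id = 114;
    -- leave the transaction open for a moment

    -- session 2 (a second psql connection)
    UPDATE general.stress SET read_count = read_count + 1
        WHERE dispatch_id = 114;
    -- this statement blocks until session 1 commits or rolls back

    -- session 1
    COMMIT;  -- session 2's update now completes

With 10 clients doing the same thing, each update waits for the previous
transaction on that row to finish, so per-client throughput drops roughly
in proportion to the number of clients, which is consistent with your
numbers if the figures above are per instance.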
Also, every UPDATE in PostgreSQL leaves a dead row version behind, so at
119 updates a second you're more than doubling the table's initial size
(dead tuples) each second. How often are you vacuuming, and are you using
VACUUM or VACUUM FULL?
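For a small, heavily-updated table like this, something along these lines
may help (only a sketch; the frequency is an assumption, tune it to your
load):

    -- plain VACUUM: non-blocking, marks dead row versions reusable;
    -- run it frequently (e.g. every few minutes) on this table
    VACUUM VERBOSE general.stress;

    -- VACUUM FULL compacts the table but takes an exclusive lock,
    -- so it will stall your concurrent updaters while it runs
    VACUUM FULL general.stress;

On 8.0 you can also look at contrib/pg_autovacuum to have the plain
vacuums run automatically.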
Gavin