Re: Slow concurrent update of same row in a given table - Mailing list pgsql-performance

From Gavin Sherry
Subject Re: Slow concurrent update of same row in a given table
Date
Msg-id Pine.LNX.4.58.0509290757220.22312@linuxworld.com.au
In response to Re: Slow concurrent update of same row in a given table  (Rajesh Kumar Mallah <mallah.rajesh@gmail.com>)
Responses Re: Slow concurrent update of same row in a given table
List pgsql-performance
On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:

> > > Number of Copies | Updates per Sec
> > >
> > >  1 --> 119
> > >  2 -->  59
> > >  3 -->  38
> > >  4 -->  28
> > >  5 -->  22
> > >  6 -->  19
> > >  7 -->  16
> > >  8 -->  14
> > >  9 -->  11
> > > 10 -->  11
> > > 11 -->  10
> >
> > So, 11 instances result in 10 updated rows per second, database wide or
> > per instance? If it is per instance, then 11 * 10 is close to the
> > performance for one connection.
>
>
> Sorry, I do not understand the difference between "database wide"
> and "per instance".

Per instance.

>
> >
> > That being said, when you've got 10 connections fighting over one row, I
> > wouldn't be surprised if you had bad performance.
> >
> > Also, at 119 updates a second, you're more than doubling the table's
> > initial size (dead tuples) each second. How often are you vacuuming and
> > are you using vacuum or vacuum full?
>
>
> Yes, I realize the obvious phenomenon now (and the uselessness of the
> script); we should not consider it a performance degradation.
>
> I am having a performance issue in my live database; that's why I tried
> to simulate the situation (maybe the script was an over-stresser).
>
> My original problem is that I send 100,000s of emails carrying a
> beacon for tracking readership every Tuesday, and on Wednesday I see a
> lot of the said query in pg_stat_activity. Each of these queries updates
> the SAME row that corresponds to the previous day's dispatch, and that
> is when I face the performance problem.
>
> I think I can only post further details next Wednesday. Please let me
> know how I should deal with the situation if each of the updates takes
> 100 times longer than the normal update duration.

I see. These problems regularly come up in database design. The best thing
you can do is modify your database design/application so that, instead of
incrementing a count in a single row, you insert a row into a table
recording the 'dispatch_id'. Counting the number of rows for a given
dispatch_id will give you your count.
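A minimal sketch of that pattern (table and column names here are hypothetical, not from the original thread): each beacon request does an INSERT rather than an UPDATE, so concurrent requests never contend for the same row lock and no dead tuples pile up on a hot counter row.

```sql
-- Hypothetical schema for the insert-then-count approach.
CREATE TABLE dispatch_hits (
    dispatch_id integer     NOT NULL,
    hit_time    timestamptz NOT NULL DEFAULT now()
);
CREATE INDEX dispatch_hits_dispatch_id_idx ON dispatch_hits (dispatch_id);

-- Each beacon request records one row; inserts do not block each other:
INSERT INTO dispatch_hits (dispatch_id) VALUES (1234);

-- The readership count is derived when needed:
SELECT count(*) FROM dispatch_hits WHERE dispatch_id = 1234;
```

If the count is read far more often than it is needed exactly, the rows can later be rolled up into a summary table in a single batch pass rather than contended for on every hit.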

Thanks,

Gavin
