Thread: the jokes for pg concurrency write performance

the jokes for pg concurrency write performance

From
wyx6fox@sina.com
Date:

Hi. First, thank you for making such a good open-source database.

About half a year ago I began using PostgreSQL in a big insurance
project. As the project went on, I found some performance problems in
concurrent-write situations, so I did some research on PostgreSQL's
concurrent-write strategy.

I found a joke. Maybe this joke of a concurrency strategy is the
designer's proud idea, but I think it is a joke. Let me describe the
problems:

* joke 1: an insert operation takes an exclusive lock on the row
referenced by a foreign key. A big, big, big performance killer; I
think this is a stupid design.

* joke 2: concurrent updates on the same row force later transactions
to wait until the earlier transaction completes. This kills concurrency
performance when transactions run for a long time. A stupid design too.

The reason for this joke design is to avoid conflicts under
read-committed isolation, for example in this situation:

UPDATE webpages SET hits = hits + 1 WHERE url = 'some url';

With concurrent write transactions at read-committed isolation, the
hits count may come out wrong.

This joke design serializes the writes, but I think even a stupid
developer would not write code like this stupid sample. Good code takes
an explicit lock to serialize writes on the same row. The joker thinks
he should do this for us; I say you should not kill concurrency
performance just to save me from this fucking stupid sample code. I
would use a SELECT ... FOR UPDATE to do it myself:

SELECT 1 FROM lock_table WHERE lockId = 'lock1' FOR UPDATE;

UPDATE webpages SET hits = hits + 1 WHERE url = 'some url';
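
To be concrete, the full pattern I mean, wrapped in an explicit
transaction (a sketch, assuming a lock_table that already holds a row
with lockId = 'lock1'), is:

    BEGIN;
    -- serialize on an application-level lock row; any other session
    -- that tries to lock the same row blocks until this transaction ends
    SELECT 1 FROM lock_table WHERE lockId = 'lock1' FOR UPDATE;
    UPDATE webpages SET hits = hits + 1 WHERE url = 'some url';
    COMMIT;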

* joke 3: updating 100000 rows in a table with no index takes 5-8
seconds. This is not acceptable in some bulk-update situations.

Re: the jokes for pg concurrency write performance

From
david@lang.hm
Date:
On Tue, 2 Feb 2010, wyx6fox@sina.com wrote:

> Hi. First, thank you for making such a good open-source database.

First you thank the developers, then you insult them. Are you asking
for help, or just trying to cause problems?

> About half a year ago I began using PostgreSQL in a big insurance
> project. As the project went on, I found some performance problems in
> concurrent-write situations, so I did some research on PostgreSQL's
> concurrent-write strategy.
>
> I found a joke. Maybe this joke of a concurrency strategy is the
> designer's proud idea, but I think it is a joke. Let me describe the
> problems:
>
> * joke 1: an insert operation takes an exclusive lock on the row
> referenced by a foreign key. A big, big, big performance killer; I
> think this is a stupid design.

This one I don't know enough about to answer.

> * joke 2: concurrent updates on the same row force later transactions
> to wait until the earlier transaction completes. This kills
> concurrency performance when transactions run for a long time. A
> stupid design too.
>
> The reason for this joke design is to avoid conflicts under
> read-committed isolation, for example in this situation:
> UPDATE webpages SET hits = hits + 1 WHERE url = 'some url';
> With concurrent write transactions at read-committed isolation, the
> hits count may come out wrong.
>
> This joke design serializes the writes, but I think even a stupid
> developer would not write code like this stupid sample. Good code
> takes an explicit lock to serialize writes on the same row. The joker
> thinks he should do this for us; I say you should not kill concurrency
> performance just to save me from this fucking stupid sample code. I
> would use a SELECT ... FOR UPDATE to do it myself:
>
> SELECT 1 FROM lock_table WHERE lockId = 'lock1' FOR UPDATE;
> UPDATE webpages SET hits = hits + 1 WHERE url = 'some url';
>

If one transaction starts modifying a row and another transaction then
starts on the same row, how do you avoid having one wait for the other?
Remember that any transaction can end up running for a long time and
may roll back at any time.

Why would you want to lock an entire table for an update as simple as
the one you describe?
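
To make the waiting concrete, here is a minimal illustration (using the
webpages table from your own example) of two concurrent sessions:

    -- session 1
    BEGIN;
    UPDATE webpages SET hits = hits + 1 WHERE url = 'some url';
    -- the row is now locked until session 1 commits or rolls back

    -- session 2, meanwhile
    UPDATE webpages SET hits = hits + 1 WHERE url = 'some url';
    -- blocks here; if session 1 rolls back, the increment applies to
    -- the original row, otherwise to session 1's updated version

    -- session 1
    COMMIT;  -- session 2's UPDATE now proceeds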

> * joke 3: updating 100000 rows in a table with no index takes 5-8
> seconds. This is not acceptable in some bulk-update situations.

This one is easy: you presumably ran the 100000 changes as separate
transactions. If you run them all inside one transaction (or, when
loading data, use COPY instead), they will complete much faster.
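
A rough sketch of the difference, with a hypothetical accounts table
standing in for yours:

    -- slow: each statement is its own implicit transaction, so every
    -- one of the 100000 commits is paid for individually
    UPDATE accounts SET balance = balance + 1 WHERE id = 1;
    UPDATE accounts SET balance = balance + 1 WHERE id = 2;
    -- ...

    -- much faster: one transaction, one commit
    BEGIN;
    UPDATE accounts SET balance = balance + 1 WHERE id = 1;
    UPDATE accounts SET balance = balance + 1 WHERE id = 2;
    -- ... remaining statements ...
    COMMIT;

    -- for loading new rows, COPY is faster still (path is a placeholder)
    COPY accounts (id, balance) FROM '/path/to/data.csv' WITH CSV;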

You seem to be assuming incompetence on the part of the developers
whenever you run into a problem. If you want them to help you, I would
suggest that you assume they know what they are doing (after all, if
they didn't, you wouldn't want to use their code for anything important
anyway), and instead ask what the right way is to do what you are
trying to do.

David Lang

Re: the jokes for pg concurrency write performance

From
Richard Broersma
Date:
On Feb 1, 2010, at 8:57 PM, wyx6fox@sina.com wrote:

> I found a joke. Maybe this joke of a concurrency strategy is the
> designer's proud idea, but I think it is a joke. Let me describe the
> problems:
>
I would suggest that the behavior you dislike so much is not really the
idea of the PostgreSQL developers so much as that of Prof. Codd and the
ANSI SQL committee.  I wonder whether a credible relational DBMS exists
that doesn't behave in exactly the way you've described.


> UPDATE webpages SET hits = hits + 1 WHERE url = 'some url';
>
> I say you should not kill concurrency performance
>

One alternative design would be to log the timestamp of each web-page
hit rather than update a hit-count field.

With that design, if the table becomes voluminous with historical logs,
you have the choice of horizontal table partitioning, or you can roll
all of the historical logs up into an aggregating materialized view
(table).
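
A minimal sketch of that design (table and column names are mine, just
for illustration):

    -- one row per hit: concurrent inserts never contend on a shared
    -- counter row
    CREATE TABLE webpage_hits (
        url    text        NOT NULL,
        hit_at timestamptz NOT NULL DEFAULT now()
    );

    -- on every page view
    INSERT INTO webpage_hits (url) VALUES ('some url');

    -- periodic rollup into a plain table acting as the materialized view
    CREATE TABLE webpage_hit_counts AS
        SELECT url, count(*) AS hits
        FROM webpage_hits
        GROUP BY url;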

Regarding all of the jokes you mentioned, I found them all to be very
funny indeed.  :)

Regards,
Richard

Re: the jokes for pg concurrency write performance

From
Scott Marlowe
Date:
2010/2/1  <wyx6fox@sina.com>:
> Hi. First, thank you for making such a good open-source database.
>
> About half a year ago I began using PostgreSQL in a big insurance
> project. As the project went on, I found some performance problems in
> concurrent-write situations, so I did some research on PostgreSQL's
> concurrent-write strategy.
>
> I found a joke. Maybe this joke of a concurrency strategy is the
> designer's proud idea, but I think it is a joke. Let me describe the
> problems:

Please try not to insult the people you're asking for help.  Maybe a
little less inflammatory language.  Something like "It seems that there
are some issues with concurrency" would work wonders.  It's amazing how
much better a response you can get without insulting everybody on the
list, eh?

Let's rewrite this assertion:
> * joke 1: an insert operation takes an exclusive lock on the row
> referenced by a foreign key. A big, big, big performance killer; I
> think this is a stupid design.

"problem #1: insert operation would use a excluse lock on reference row by the
foreign key . a big big big performance killer.  "

Then post an example of how it affects your performance.  Did you go to
the page that was pointed out to you in a previous post on how to post
effectively about pg problems and get a useful answer?  If not, please
do so, and re-post your questions etc. without all the insults and
hand-waving.

Re: the jokes for pg concurrency write performance

From
Alvaro Herrera
Date:
Scott Marlowe wrote:
> 2010/2/1  <wyx6fox@sina.com>:

> Let's rewrite this assertion:
> > * joke 1: an insert operation takes an exclusive lock on the row
> > referenced by a foreign key. A big, big, big performance killer; I
> > think this is a stupid design.
>
> "problem #1: an insert operation takes an exclusive lock on the row
> referenced by a foreign key. A big, big, big performance killer."

Yeah, if it had been written this way I could have told him that this
has not been the case since 8.1; but since it wasn't, I simply skipped
his emails.
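
For anyone curious about the details: since 8.1 the foreign-key check
behind an insert takes a shared row lock on the referenced row, roughly

    SELECT 1 FROM parent_table WHERE pk = $1 FOR SHARE;

(parent_table and pk are placeholders, not the actual trigger source),
rather than an exclusive FOR UPDATE lock, so concurrent inserts
referencing the same parent row no longer block one another.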

--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

Re: the jokes for pg concurrency write performance

From
Scott Marlowe
Date:
On Tue, Feb 2, 2010 at 9:20 AM, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
> Scott Marlowe wrote:
>> 2010/2/1  <wyx6fox@sina.com>:
>
>> Let's rewrite this assertion:
>> > * joke 1: an insert operation takes an exclusive lock on the row
>> > referenced by a foreign key. A big, big, big performance killer; I
>> > think this is a stupid design.
>>
>> "problem #1: an insert operation takes an exclusive lock on the row
>> referenced by a foreign key. A big, big, big performance killer."
>
> Yeah, if it had been written this way I could have told him that this
> has not been the case since 8.1; but since it wasn't, I simply skipped
> his emails.

I wonder if having paid technical support to abuse leads people to
think they can treat other people like crap and still get the answers
they want...  Well, we'll see whether the OP can write a
non-flame-filled inquiry about their performance issues or not.

Re: the jokes for pg concurrency write performance

From
Greg Smith
Date:
Scott Marlowe wrote:
> I wonder if having paid technical support to abuse leads people to
> think they can treat other people like crap and still get the answers
> they want...

You have technical support somewhere you get to abuse?  For me it always
seems to be the other way around...

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


Re: the jokes for pg concurrency write performance

From
J Sisson
Date:
2010/2/1  <wyx6fox@sina.com>:
> * joke 1: an insert operation takes an exclusive lock on the row
> referenced by a foreign key. A big, big, big performance killer; I
> think this is a stupid design.
>
> * joke 2: concurrent updates on the same row force later transactions
> to wait until the earlier transaction completes. This kills
> concurrency performance when transactions run for a long time. A
> stupid design too.

I hear that MySQL can work wonders for performance by bypassing the
checks you're concerned about... don't count on the data being
consistent, but by golly it'll get to the client FAAAAAAAST...