Thread: Some problems when I use PostgreSQL 8.4.2 in my projects

From
wyx6fox@sina.com
Date:
Sorry again for my previous rough messages; some kind people advised me to post these problems again with a sincere and friendly attitude.

I think I should do this.

In recent projects I decided to use PostgreSQL for some medium and large projects, and now that those projects are finished,
it has proven to be the right decision. The maturity and stability of PostgreSQL left a deep impression on us. Of course, we did hit some problems, and in the end we took some interim measures to avoid them.
Environment: PostgreSQL 8.4.2, CentOS 5.4, JDK 6.0

Problem 1: My previous view was that an INSERT takes an exclusive lock on the row referenced by a foreign key, but I now realize I was wrong: after testing, PostgreSQL does not take an exclusive lock on the FK row. My previous test procedure made a stupid mistake: when I investigated the performance problem, I removed the FKs and the unique constraints at the same time, and the good result made me think the FK was the problem, when actually the unique constraint was the problem.

I am ashamed of myself.
That said, I think I should point out a few things:
Unique constraints actually kill concurrent write transactions when concurrent inserts violate the unique constraint; they block each other. I tested this in Oracle 10g and it has the same behaviour. I think this may be reasonable, because the uniqueness check must be a serializable check.
To work around this, I do the unique check in the application where possible, but in a highly concurrent environment this is not a good approach.
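A minimal sketch of the database-side alternative: instead of pre-checking uniqueness in the application (which races between the check and the insert), let the constraint arbitrate and catch the violation. This uses Python's sqlite3 as a self-contained stand-in for PostgreSQL; the table and column names are illustrative, and with a real PostgreSQL driver the duplicate insert raises that driver's IntegrityError in the same way.

```python
import sqlite3

# In-memory database as a stand-in for PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT UNIQUE)")

def insert_user(conn, email):
    """Insert and let the database's unique constraint arbitrate,
    instead of a racy check-then-insert in the application."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate: another transaction got there first

print(insert_user(conn, "a@example.com"))  # True
print(insert_user(conn, "a@example.com"))  # False (unique violation)
```

The check-then-insert pattern fails precisely in the concurrent case described above: two transactions can both pass the check before either inserts, whereas the constraint serializes them correctly.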


Problem 2: my mistake too; I misunderstood the READ COMMITTED isolation level. Shame on me again.


Problem 3:

After tuning the configuration following this link: http://wiki.postgresql.org/wiki/SlowQueryQuestions

the query now takes only 2-4 seconds, which is acceptable.


Thank you very much, and please forgive me.

 

Re: Some problems when I use PostgreSQL 8.4.2 in my projects

From
Robert Haas
Date:
2010/2/2  <wyx6fox@sina.com>:
> Unique constraints actually kill concurrent write transactions when
> concurrent inserts violate the unique constraint; they block each other.
> I tested this in Oracle 10g and it has the same behaviour. I think this
> may be reasonable, because the uniqueness check must be a serializable check.
> To work around this, I do the unique check in the application where
> possible, but in a highly concurrent environment this is not a good approach.

You may find that your way isn't actually very reliable, and that
making it reliable will be very, very much harder (and likely no
faster) than letting PostgreSQL do it.

...Robert

Re: Some problems when I use PostgreSQL 8.4.2 in my projects

From
Leo Mannhart
Date:
wyx6fox@sina.com wrote:
> That said, I think I should point out a few things:
> Unique constraints actually kill concurrent write transactions when
> concurrent inserts violate the unique constraint; they block each
> other. I tested this in Oracle 10g and it has the same behaviour. I think
> this may be reasonable, because the uniqueness check must be a serializable check.
> To work around this, I do the unique check in the application where
> possible, but in a highly concurrent environment this is not a good approach.
>

How can you enforce uniqueness in the application? If you implement it
correctly, it takes considerably longer than letting PostgreSQL do it.
Even if you use some kind of magic, I cannot imagine how you could
implement a unique constraint in the application and guarantee
uniqueness while at the same time being faster than the RDBMS.

Leo

Re: Some problems when I use PostgreSQL 8.4.2 in my projects

From
Pierre Frédéric Caillaud
Date:
> when concurrent inserts violate the unique constraint, they block each
> other. I tested this in Oracle 10g and it has the same behaviour. I think
> this may be reasonable, because the uniqueness check must be a serializable
> check.
> To work around this, I do the unique check in the application where
> possible, but in a highly concurrent environment this is not a good approach.

    You probably can't do that in the application.

    About unique constraints :

Transaction A : begin
Transaction A : insert value X
Transaction A : do some work, or just wait for client
...

Meanwhile :

Transaction B : begin
Transaction B : insert same value X
Transaction B : locked, because A hasn't committed yet, so the unique
constraint can't be resolved

Transaction A : commit or rollback
Transaction B : lock is released; the constraint is either OK or violated,
depending on whether txn A committed or rolled back.

    As you can see, the longer the transactions are, the more problems you
get.

Solution 1 : change design.

- Why do you need this unique constraint ?
- How are the unique ids generated ?
- What kind of data do those ids represent ?
- Can you sidestep it by using a sequence or something ?
- Without knowing anything about your application, impossible to answer.

Solution 2 : reduce the transaction time.

- Optimize your queries (post here)
- Commit as soon as possible
- Long transactions (waiting for user input) are generally not such a good
idea
- Anything that makes the txn holding the locks wait more is bad
(saturated network, slow app server, etc)
- Optimize your xlog to make writes & commits faster

Solution 3 : reduce the lock time

Instead of doing :
BEGIN
INSERT X
... do some stuff ...
COMMIT;

do :

BEGIN
... do some stuff that doesn't depend on X...
INSERT X
... do less stuff while holding lock ...
COMMIT;

Solution 4 :

If you really have no control over the value "X" and you need a quick
reply to "is X already there ?", you can use 2 transactions.
One transaction will "reserve" the value of X :

- SELECT WHERE col = X
    (ensures the row and index are in cache while taking no locks)

- Set autocommit to 1
- INSERT X;
    inserts X and commits immediately, or causes an error. The lock will not
be held for long, since autocommit commits ASAP.

- Perform the rest of your (long) operations in another transaction.

This is a bit less safe since, if the second transaction fails, the insert
of X is not rolled back.
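The "reserve first, work later" pattern above can be sketched as follows, again using Python's sqlite3 as a self-contained stand-in for PostgreSQL (an autocommit connection plays the role of "set autocommit to 1"); the table and function names are illustrative.

```python
import sqlite3

# Autocommit connection: each statement commits in its own tiny transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE reservations (x TEXT PRIMARY KEY)")

def reserve(conn, x):
    """Transaction 1: insert X in its own autocommit transaction, so the
    unique-constraint lock is held only for an instant."""
    try:
        conn.execute("INSERT INTO reservations (x) VALUES (?)", (x,))
        return True   # X is now committed and visible to everyone
    except sqlite3.IntegrityError:
        return False  # X was already there: quick reply, no waiting

if reserve(conn, "X"):
    # Transaction 2: the long-running work happens separately. If it
    # fails, the reservation of X is NOT rolled back (the caveat above).
    conn.execute("BEGIN")
    # ... long operations referencing X ...
    conn.execute("COMMIT")
```

Note how this trades atomicity for lock time: the reservation of X survives even if the second transaction rolls back, so the application must be prepared to clean up orphaned reservations.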