Andrei wrote:
> The function above updates the rows between the ids start_id and
> end_id. I have a quad core processor so I run two separate connections
> to the database: select populate_test_data(5000,1) and another select
> populate_test_data(5000,2). In this case each function runs on one
> core, doing the inserts in parallel, but when I try to run select
> select_unprocessed(1,5001) and from another connection select
> select_unprocessed(5001, 10001), one of the processes locks the table
> so the other one has to wait until the table is unlocked. Each
> process updates different parts of the table. Is there a way to do
> the updates in parallel on multiple cores?
>
Does the second wait until the other is completely done, or only until
the other has done what it needs to?
If it appears not to update the records I would look at the id ranges
you are passing. The two populate calls insert 10000 rows, and you then
tell the update functions to work on row ids 1 to 5001 and 5001 to
10001 - have you reset the sequence for the id column, or do you drop
and recreate the table before each test? My guess is that no updates
appear to happen because the ids assigned by the serial type are larger
than 10000. A sketch of resetting between runs follows.
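Something like this would give you a clean slate before each run. The
table and column names (test_data, id) are only guesses - substitute
whatever populate_test_data actually fills:

  -- remove all rows from the test table
  TRUNCATE test_data;
  -- restart the serial sequence so new ids begin at 1 again
  ALTER SEQUENCE test_data_id_seq RESTART WITH 1;

(test_data_id_seq is the name a serial column called id on a table
called test_data would get by default.)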
Also you update with processed='n' - is that what you want? Is that the
only column you look at to check whether a row has been processed?
Transactions would be the only cause of the problem you describe. I am
guessing that you use bigger numbers than 5000 in your real tests, and
the example calls above use overlapping ids: both ranges include row
5001. If the first connection updates row 5001 early, the second has to
wait until the first commits before it can update that row. It can also
work the other way: the second updates 5001 first, holding the row lock
until it finishes, and the first waits until the second commits before
it can update it. The sketch below shows the effect.
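For example, assuming the underlying table is called test_data (a
made-up name - use yours), two psql sessions doing roughly what the
functions do will show the wait on the shared row:

  -- session 1
  BEGIN;
  UPDATE test_data SET processed = 'n'  -- or whatever value you intend
    WHERE id BETWEEN 1 AND 5001;

  -- session 2: takes row locks as it goes and blocks when it reaches
  -- row 5001, which session 1 still holds
  BEGIN;
  UPDATE test_data SET processed = 'n'
    WHERE id BETWEEN 5001 AND 10001;

  -- session 1
  COMMIT;   -- session 2 can now finish

Passing non-overlapping ranges, e.g. (1,5000) and (5001,10000) if
end_id is inclusive, should let the two connections run without
waiting on each other.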
With 5000 rows I wouldn't expect to see a time difference.
Without an ORDER BY in the select, the rows can be returned and updated
in any order.
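If you need a predictable order, add one explicitly (table and column
names assumed, as above):

  SELECT * FROM test_data WHERE processed = 'n' ORDER BY id;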
--
Shane Ambler
pgSQL (at) Sheeky (dot) Biz
Get Sheeky @ http://Sheeky.Biz