Updating 40,000 records should take no longer than a couple of minutes.
I think you should optimise your query before going any further.
You have an inner SELECT statement that executes first. It
joins EVERY row in your table (1,000,000+) with at most 3 other rows in
the same table, so you will end up with about 3,000,000+ rows... but you
are interested in only 40,000 of them!
To keep it simple, add a WHERE condition to fetch only the 40,000 rows
you are interested in and discard the others. Also make sure you have
indexed the attributes you are filtering on, and the date attribute too.
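Something along these lines, as a rough sketch; I don't know your real
schema, so "measures", "id" and "ts" below are placeholder names for your
table, its key and the date attribute (data_raw and data_sys are the
columns you mentioned):

    -- index the date attribute you filter on (names are guesses)
    CREATE INDEX measures_ts_idx ON measures (ts);

    -- the inner SELECT, now restricted to the rows you will update
    -- (the date range is only an example)
    SELECT id, data_raw, data_sys
    FROM   measures
    WHERE  ts BETWEEN '2004-01-01' AND '2004-02-01';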
You should use EXPLAIN ANALYZE on the inner query to check how it improves.
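For example, again with my placeholder names:

    EXPLAIN ANALYZE
    SELECT id, data_raw, data_sys
    FROM   measures
    WHERE  ts BETWEEN '2004-01-01' AND '2004-02-01';

In the plan you want to see an Index Scan on the date index instead of a
Seq Scan over the whole table.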
Once your SELECT query runs fast enough, the UPDATE should go much
faster too.
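In PostgreSQL you can feed the restricted subquery straight into an
UPDATE ... FROM. A rough sketch, with the same invented names and with
data_raw * 0.5 standing in for whatever you actually compute:

    UPDATE measures
    SET    data_sys = s.corrected
    FROM  (SELECT id, data_raw * 0.5 AS corrected  -- 0.5 is a stand-in
           FROM   measures
           WHERE  ts BETWEEN '2004-01-01' AND '2004-02-01') AS s
    WHERE  measures.id = s.id;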
The number of columns matters, but as I said, I don't think the UPDATE
itself is your problem.
If you can't find a way to speed your query up, try posting to the
performance list.
mrblonde@locked.myftp.org wrote:
>Thanks a lot.. That is what I was searching for.. In fact your query works
>well for small changes, but I will have to use another method when updating
>all my rows because the performance is not very good, alas.
>
>My data set contains something like 40,000 rows to update in 1+ million
>records, and data_raw and data_sys are of type "real"... The complete update
>took 40 minutes on a 256 MB, Athlon 2400, kernel 2.6 machine with no other
>load during the execution of the query.
>
>Is this normal? Does the number of columns in the table matter a lot (the
>table contains 12 reals and 4 integers)?
>
>I found that using an intermediate table which stores, for every row, the
>value before and the value after helps to gain speed... But it is not a very
>nice way, I think..
>
>Thanks again :)
>Etienne