Hi,
I am updating a big table (90M records) with data from another rather large table (4M entries). Here is my update query:
update links
   set target_size = (select size
                        from articles
                       where articles.article_id = links.article_to);
I have built all the indexes one might need, increased shared_buffers to 400MB, and checked the query plan, which looks reasonable.
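For reference, the indexes and the setting I mean are roughly these (the index names are just placeholders, and articles.article_id is likely the primary key, in which case it already has an index):

  create index links_article_to_idx on links (article_to);
  create index articles_article_id_idx on articles (article_id);

  -- in postgresql.conf:
  -- shared_buffers = 400MB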
But it's taking an eternity to run: the query has been going for 3 hours now on my new Mac laptop, and Activity Monitor shows that postgres is spending essentially all of that time in disk I/O (the postgres process's average CPU load is about 4-5%).
However, just looking at the query, postgres should be able to cache the entire articles table in memory and make a single pass over the links table...
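For example, I'd expect a join-style rewrite like the one below to let the planner hash the articles table and scan links once. This is just a sketch, untested; note that unlike the subquery form, it only touches rows that have a matching article, rather than setting the rest to NULL:

  update links
     set target_size = articles.size
    from articles
   where articles.article_id = links.article_to;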
Please let me know if there is a good solution to this.
Thanks!
Pavel Velikhov
Institute of Systems Programming
Russian Academy of Sciences