Re: 121+ million record table perf problems - Mailing list pgsql-performance

From Alan Hodgson
Subject Re: 121+ million record table perf problems
Date
Msg-id 200705181208.41163@hal.medialogik.com
Whole thread Raw
In response to Re: 121+ million record table perf problems  ("Joshua D. Drake" <jd@commandprompt.com>)
List pgsql-performance
On Friday 18 May 2007 11:51, "Joshua D. Drake" <jd@commandprompt.com> wrote:
> > The update query that started this all I had to kill after 17 hours.  It
> > should have updated all 121+ million records.  That brought my select
> > count down to 19 minutes, but still a far cry from acceptable.

You're going to want to drop all your indexes before trying to update 121
million records.  Updates in PostgreSQL are really quite slow, mostly due
to all the index updates. Drop indexes, do the updates, create a primary
key, cluster the table on that key to free up the dead space, then recreate
the rest of the indexes. That's about as fast as you can get that process.
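The sequence above would look roughly like this; a minimal sketch, where the table name (bigtable), column names, and index names are all hypothetical, and the CLUSTER ... USING form is current PostgreSQL syntax:

```sql
-- 1. Drop all indexes (including the primary key) before the bulk update.
DROP INDEX bigtable_col1_idx;
ALTER TABLE bigtable DROP CONSTRAINT bigtable_pkey;

-- 2. Run the big update with no index maintenance overhead.
UPDATE bigtable SET some_col = 0;

-- 3. Recreate the primary key, then cluster on it to rewrite the
--    table compactly and reclaim the dead space the update left behind.
ALTER TABLE bigtable ADD PRIMARY KEY (id);
CLUSTER bigtable USING bigtable_pkey;
ANALYZE bigtable;

-- 4. Recreate the remaining indexes last.
CREATE INDEX bigtable_col1_idx ON bigtable (col1);
```

Note that CLUSTER rewrites the whole table and takes an exclusive lock, so it needs enough free disk for a second copy of the table while it runs.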

Of course, doing anything big on one disk is also going to be slow, no
matter what you do. I don't think a table scan should take 19 minutes,
though, not for 121 million records. You should be able to get at least
60-70MB/sec out of anything modern. I can only assume your disk is
thrashing doing something else at the same time as the select.
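A quick back-of-envelope check of that claim; the average on-disk row width of ~100 bytes is an assumption, not something from the original post:

```python
# Estimate sequential-scan time for the table in question.
rows = 121_000_000
bytes_per_row = 100               # assumed average on-disk row width
table_bytes = rows * bytes_per_row  # ~12.1 GB

throughput = 60 * 1024**2         # 60 MB/sec sequential read
scan_seconds = table_bytes / throughput
print(round(scan_seconds / 60, 1))  # → 3.2 (minutes)
```

At that row width, a 19-minute scan works out to only about 10 MB/sec of effective throughput, which is why contention from other I/O on the same disk is the likely suspect.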

--
"We can no more blame our loss of freedom on Congressmen than we can
prostitution on pimps.  Both simply provide broker services for their
customers." -- Dr. Walter Williams

