mlw <markw@mohawksoft.com> writes:
> Take this update:
> update table set field = 'X' ;
> This is a very expensive function when the table has millions of rows,
> it takes over an hour. If I dump the database, and process the data with
> perl, then reload the data, it takes minutes. Most of the time is used
> creating indexes.
Hm. CREATE INDEX is well known to be faster than incremental building/
updating of indexes, but I didn't think it was *that* much faster.
Exactly what indexes do you have on this table? Exactly how many
minutes is "minutes", anyway?
You might consider some hack like
drop inessential indexes; UPDATE; recreate dropped indexes;
"inessential" being any index that's not UNIQUE (or even the UNIQUE
ones, if you don't mind finding out about uniqueness violations at
the end).
Might be a good idea to do a VACUUM before rebuilding the indexes, too.
The UPDATE leaves behind a dead copy of every row, and a VACUUM run
while the indexes are dropped is much cheaper than one that has to
clean up index entries as well. It won't save time in this process,
but it'll be cheaper to do it then rather than later.
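Concretely, the sequence might look like this (a sketch only; the table
and index names here are made up for illustration, and which indexes
count as "inessential" depends on your schema):

```sql
-- Drop any index that isn't needed to enforce constraints during the load.
DROP INDEX big_table_field2_idx;
-- Optionally drop the UNIQUE ones too, accepting that violations
-- will only surface when you recreate them at the end.

UPDATE big_table SET field = 'X';

-- Reclaim the dead row versions now, while no indexes need cleaning.
VACUUM big_table;

-- Rebuild in bulk, which is faster than incremental index maintenance.
CREATE INDEX big_table_field2_idx ON big_table (field2);
```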
regards, tom lane
PS: I doubt transactions have anything to do with it.