On 09/23/2014 12:35 PM, Emi Lu wrote:
> Hello list,
>
> For a big table with more than 1,000,000 records, may I know which
> update is quicker please?
>
> (1) update t1
> set c1 = a.c1
> from a
> where t1.pk = a.pk and
> t1.c1 <> a.c1;
> ......
> update t1
> set c_N = a.c_N
> from a
> where t1.pk = a.pk and
> t1.c_N <> a.c_N;
>
>
> (2) update t1
> set c1 = a.c1 ,
> c2 = a.c2,
> ...
> c_N = a.c_N
> from a
> where t1.pk = a.pk AND
> (t1.c1 <> a.c1 OR t1.c2 <> a.c2 OR ... OR t1.c_N <> a.c_N)
>
>
> ....
>
We don't have any info about the table structures, index availability
and usage for query optimization, whether the updated columns are part
of an index, the amount of memory available, disk speed, the portion of
t1 that will be updated, PostgreSQL settings, etc., so it's really
anyone's guess. A million rows is pretty modest, so I was able to try a
couple of variants of "update...from..." on million-row tables on my
aging desktop without coming close to the 60-second mark.
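
In case it helps, here is a minimal sketch of the kind of test setup I
mean; the table and column names are invented for illustration, and I'm
assuming a shared integer primary key:

    -- two million-row tables with matching primary keys
    create table a (pk int primary key, c1 int, c2 int);
    create table t1 (pk int primary key, c1 int, c2 int);

    insert into a
    select i, (random() * 100)::int, (random() * 100)::int
    from generate_series(1, 1000000) as i;

    insert into t1
    select i, (random() * 100)::int, (random() * 100)::int
    from generate_series(1, 1000000) as i;

    analyze a;
    analyze t1;

    -- case 2, timed with \timing in psql
    update t1
    set c1 = a.c1, c2 = a.c2
    from a
    where t1.pk = a.pk
    and (t1.c1 <> a.c1 or t1.c2 <> a.c2);
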
*Usually* putting the statements into a single transaction is better
(as happens automatically in case 2, since it is a single statement).
Also, to the extent that a given tuple would have multiple columns
updated, you will have less bloat and I/O using the query that updates
each tuple once rather than multiple times: under MVCC every update
writes a new row version, so N single-column updates can leave up to N
dead row versions behind where the combined update leaves just one. But
a lot will depend on the efficiency of looking up the appropriate data
in "a".
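
If you want to see the effect for yourself, wrap the per-column
statements of case 1 in one explicit transaction and then compare
dead-tuple counts from the statistics views (again using the invented
column names from the sketch above; rebuild t1 between runs so the
numbers are comparable, and note the stats are reported with a short
delay):

    begin;
    update t1 set c1 = a.c1 from a
        where t1.pk = a.pk and t1.c1 <> a.c1;
    update t1 set c2 = a.c2 from a
        where t1.pk = a.pk and t1.c2 <> a.c2;
    commit;

    -- roughly one dead row version per row-update until vacuum runs
    select n_tup_upd, n_dead_tup
    from pg_stat_user_tables
    where relname = 't1';
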
Cheers,
Steve