Re: UPDATE on 20 Million Records Transaction or not? - Mailing list pgsql-general

From Michael Lewis
Subject Re: UPDATE on 20 Million Records Transaction or not?
Date
Msg-id CAHOFxGr2j_BYN35By3GJOhevHXPOLV4sKgTgPfqec8wxg_tN4A@mail.gmail.com
Whole thread Raw
In response to Re: UPDATE on 20 Million Records Transaction or not?  (Ganesh Korde <ganeshakorde@gmail.com>)
Responses RE: UPDATE on 20 Million Records Transaction or not?  (Jason Ralph <jralph@affinitysolutions.com>)
List pgsql-general

>> Are you updating every row in the table?
>
> No, I am using an update like so: UPDATE members SET regdate='2038-01-18' WHERE regdate='2020-07-07'
>
> DB=# select count(*) from members where regdate = '2020-07-07';
>   count
> ----------
>  17333090
> (1 row)


Just update them and be done with it. Do the work in batches if it doesn't matter that concurrent access to the table might see some rows with the old value and some with the new. Given that you are on PG11 and can commit within an anonymous block, you could adapt something like the below to run until done. Also, if you have not tuned the autovacuum process much, then depending on the total number of rows in your table it could be good to run a manual vacuum once or twice during this process so that the dead space is reused.

* The block below was shared by Andreas Krogh on one of these mailing lists about 6 months back.


do $_$
declare
    num_rows bigint;
begin
    loop
        delete from YourTable where id in
                                    (select id from YourTable where id < 500 limit 100);
        -- capture the row count immediately after the DELETE; if COMMIT runs
        -- first, ROW_COUNT no longer reflects the DELETE and the loop can
        -- exit prematurely
        get diagnostics num_rows = row_count;
        commit;
        raise notice 'deleted % rows', num_rows;
        exit when num_rows = 0;
    end loop;
end;$_$;
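
Adapted to the UPDATE in this thread, a sketch could look like the following. This is untested and only one way to batch it: it uses ctid to limit each pass, since the WHERE condition itself stops matching rows as they are updated; if members has a primary key, selecting that instead of ctid would also work. The batch size of 10000 is an arbitrary starting point to tune.

do $_$
declare
    num_rows bigint;
begin
    loop
        -- update at most 10000 matching rows per pass; updated rows no
        -- longer match regdate='2020-07-07', so the loop converges
        update members set regdate = '2038-01-18'
        where ctid in
            (select ctid from members
              where regdate = '2020-07-07' limit 10000);
        -- capture the row count before COMMIT
        get diagnostics num_rows = row_count;
        commit;
        raise notice 'updated % rows', num_rows;
        exit when num_rows = 0;
    end loop;
end;$_$;

Committing each pass keeps any single transaction small, and a manual VACUUM members; part way through lets later passes reuse the space freed by earlier ones.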
