Updating large tables without dead tuples - Mailing list pgsql-performance

From: ldh@laurent-hasson.com
Subject: Updating large tables without dead tuples
Msg-id: BY2PR15MB0872700CE29FE00DC9E9647885CC0@BY2PR15MB0872.namprd15.prod.outlook.com
Responses: Re: Updating large tables without dead tuples
           Re: Updating large tables without dead tuples
List: pgsql-performance

Hello

I work with a large, wide table (about 300 million rows, about 50 columns), and from time to time we get business requirements to make some modifications. But sometimes it’s just a plain mistake. That happened to us a few weeks ago: the source data screwed up a zip code, and we had to patch a single column of that large, wide table on our end.

Anyways… the query we ran was:

    update T set source_id = substr(source_id, 2, 10);

It took about 10 hours and created hundreds of millions of dead tuples, causing another couple of hours of vacuuming.

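For reference, this kind of bloat can be watched in pg_stat_user_tables, with something like the following (the table name is just a placeholder):

    select n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    from pg_stat_user_tables
    where relname = 't';
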
This was done during a maintenance window, and that table is read-only except when we ETL data into it on a weekly basis, so I was wondering why I should pay the “bloat” penalty for this type of transaction. Is there a trick that could be used here?

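The kind of trick I have in mind would be a full rewrite instead of an in-place update, along these lines (table and column names are placeholders, and indexes, constraints and grants would have to be recreated on the new table):

    begin;
    lock table T in access exclusive mode;
    -- rewrite every row once into a fresh heap, instead of leaving a dead tuple per row
    create table T_new as
        select substr(source_id, 2, 10) as source_id,
               other_col1, other_col2 -- ... plus the ~48 remaining columns
        from T;
    drop table T;
    alter table T_new rename to T;
    commit;
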
More generally, I suspect that MVCC is so deeply baked into the architecture that something like LOCK TABLE, which would guarantee that there won’t be any contention, couldn’t be used as a hint to avoid creating dead tuples in the first place? That would be quite a performance improvement for this type of work though.
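
In other words, something like the following, where the exclusive lock would be the engine’s cue that nobody else can see the old row versions, so the rows could be overwritten in place (as it stands, this still leaves one dead tuple per updated row):

    begin;
    lock table T in access exclusive mode;
    -- wishful thinking: with the lock held, update in place and skip the dead tuples
    update T set source_id = substr(source_id, 2, 10);
    commit;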

Thank you,

Laurent.
