Re: BUG #18166: 100 Gb 18000000 records table update - Mailing list pgsql-bugs

From Tom Lane
Subject Re: BUG #18166: 100 Gb 18000000 records table update
Date
Msg-id 4020107.1697851360@sss.pgh.pa.us
In response to BUG #18166: 100 Gb 18000000 records table update  (PG Bug reporting form <noreply@postgresql.org>)
Responses Re[2]: BUG #18166: 100 Gb 18000000 records table update  (Ruslan Ganeev <ruslan.ganeev@list.ru>)
List pgsql-bugs
PG Bug reporting form <noreply@postgresql.org> writes:
> We tried to make a script which sets enddate = '2022-12-31' for all
> records whose value in «DataVip» is not maximal. For the other
> records the script sets enddate = null.
> The problem is that the script runs for 6 hours, and most of
> that time is taken up by rebuilding the indexes.

This is not a bug.  However ... a common workaround for bulk updates
like that is to drop all the table's indexes and then recreate them
afterwards.  It's often quicker than doing row-by-row index updates.
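The workaround could be sketched like this. All table, column, and index names here are illustrative (taken or guessed from the report), and it assumes the "maximal" value of «DataVip» is taken over the whole table; adjust to the real schema:

```sql
-- Drop the indexes first so the bulk UPDATE doesn't maintain them row by row.
-- Note: DROP INDEX takes an ACCESS EXCLUSIVE lock on the table.
DROP INDEX IF EXISTS idx_mytable_datavip;
DROP INDEX IF EXISTS idx_mytable_enddate;

-- One set-based UPDATE instead of a per-row loop.
UPDATE mytable
SET enddate = CASE
    WHEN "DataVip" < (SELECT max("DataVip") FROM mytable)
    THEN DATE '2022-12-31'
    ELSE NULL
END;

-- Recreate each index in a single bulk build, which is typically much
-- faster than incremental per-row index updates.
CREATE INDEX idx_mytable_datavip ON mytable ("DataVip");
CREATE INDEX idx_mytable_enddate ON mytable (enddate);
```

Raising maintenance_work_mem for the session can further speed up the index rebuilds.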

            regards, tom lane


