On 23.03.26 at 14:02, Mauricio Martini wrote:
Hi team,
I am evaluating an approach to change the scale of a numeric column from numeric(18,2) to numeric(18,4) in a large table, aiming to avoid the cost of a full table rewrite.
The proposed approach involves directly updating the PostgreSQL system catalog:
UPDATE pg_attribute
SET atttypmod = (18 << 16 | 4) + 4
WHERE attrelid = 'table'::regclass AND attname = 'column';
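For what it's worth, the arithmetic behind that atttypmod value can be checked outside the database. The sketch below (plain Python, nothing PostgreSQL-specific runs here) assumes the encoding used in the PostgreSQL source for numeric: precision in the high 16 bits, scale in the low bits, plus a VARHDRSZ offset of 4:

```python
VARHDRSZ = 4  # header offset added to all numeric typmods

def numeric_typmod(precision: int, scale: int) -> int:
    # Pack precision into the high 16 bits and scale into the
    # low bits, then add the VARHDRSZ offset.
    return ((precision << 16) | scale) + VARHDRSZ

def decode_typmod(typmod: int) -> tuple[int, int]:
    # Reverse the packing: strip the offset, then split the halves.
    t = typmod - VARHDRSZ
    return t >> 16, t & 0xFFFF

print(numeric_typmod(18, 2))  # 1179654 (what the column has today)
print(numeric_typmod(18, 4))  # 1179656 (what the UPDATE would write)
```

Decoding an existing column's atttypmod with `decode_typmod` is an easy way to sanity-check the value before and after such a change.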
Before considering this in practice, I would like to validate a few points:
- Is this approach considered safe from a data integrity perspective?
- Is there a risk of inconsistencies in the internal representation of the numeric type (especially regarding scale)?
- Could this impact indexes, functions, or aggregation operations?
- Is there any official recommendation or prior experience using this approach in production environments?
- Are there additional risks related to rollback, maintenance, or future operations (e.g., dump/restore, upgrades)?
The goal is to determine whether this alternative is viable, or if we should stick with more standard approaches (e.g., shadow column, incremental migration, etc.).
If anyone has experience with a similar scenario, your insights would be appreciated.
Thanks.
Hi Mauricio,
this change can lead to problems: numeric(18,2) allows up to 16 digits before the decimal point, but numeric(18,4) allows only 14, so existing values with more than 14 digits before the decimal point wouldn't fit into the new data type.
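To make the capacity point concrete, here is a quick arithmetic sketch (plain Python, nothing database-specific, just the precision-minus-scale rule for numeric(p,s)):

```python
def max_integer_digits(precision: int, scale: int) -> int:
    # numeric(p, s) stores at most p significant digits,
    # s of which sit to the right of the decimal point.
    return precision - scale

# numeric(18,2) leaves 16 digits before the decimal point ...
assert max_integer_digits(18, 2) == 16
# ... but numeric(18,4) leaves only 14, so values in the range
# that fits today (up to 16 integer digits) could overflow.
assert max_integer_digits(18, 4) == 14
# Widening to numeric(20,4) restores the original headroom.
assert max_integer_digits(20, 4) == 16
```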
It's always safer to use regular SQL. A change from numeric(18,2) to numeric(20,4) does cause a table re-write: PostgreSQL can skip the re-write only when the scale stays the same and the precision does not shrink.
In general it's easier to be generous when choosing a data type so that later changes can be avoided. The same caution doesn't apply to varchar lengths, because increasing a varchar length never triggers a re-write.
Kind Regards,
Holger
--
Holger Jakobs, Bergisch Gladbach