The table definition and the size of the data set would help with interpreting the below.
The example below shows the issue on a table with just a single field. I can demonstrate the problem whether there are 100 records or a million records.
In every case, altering the column's type from a domain over varchar(9) to a raw varchar(9) results in a full table rewrite (identified by a change in relfilenode), while altering it from a raw varchar(9) back to the domain is effectively free, with no change to relfilenode.
The timing of each ALTER operation appears to back this up.
postgres@[local]=# create domain old_type as varchar(9);
CREATE DOMAIN

postgres@[local]=# create table test (values old_type);
CREATE TABLE

postgres@[local]=# with data as (select generate_series(1,1000000), md5(random()::text))
postgres@[local]-# insert into test select substring(md5, 1, 9) from data;
INSERT 0 1000000
Time: 4097.162 ms

postgres@[local]=# \d test
   Table "alpha_core.test"
 Column |   Type   | Modifiers
--------+----------+-----------
 values | old_type |

postgres@[local]=# \dD old_type
                         List of domains
   Schema   |   Name   |         Type         | Modifier | Check
------------+----------+----------------------+----------+-------
 alpha_core | old_type | character varying(9) |          |
(1 row)

postgres@[local]=# select count(*) from test;
  count
---------
 1000000
(1 row)
postgres@[local]=# select relfilenode from pg_class where relname='test';
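For reference, here is a sketch of the steps being described, continuing on from the (truncated) session above; the actual relfilenode values are omitted. It assumes the test table and old_type domain created above, and the column name is quoted only because values is a reserved word in current PostgreSQL releases.

\timing

-- relfilenode before any change
select relfilenode from pg_class where relname = 'test';

-- domain -> raw type: observed to change relfilenode (full table rewrite)
alter table test alter column "values" type varchar(9);
select relfilenode from pg_class where relname = 'test';

-- raw type -> domain: observed to leave relfilenode unchanged (catalog-only change)
alter table test alter column "values" type old_type;
select relfilenode from pg_class where relname = 'test';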