Thread: change column data type of a big table
I have a very big table; in fact, this table alone uses approx. 60% of the disk space.
The table is standalone (nothing inherits from it, and it does not inherit from another table).
I need to change an int column to bigint.
But apparently this ALTER recreates the table, and my problem is space: there is not enough free space on disk.
Is it possible to change the column's data type without recreating the table?
Thanks!
On 08/09/12 10:31 PM, Anibal David Acosta wrote:
> I have a very big table; in fact, this table alone uses approx. 60% of
> the disk space.
>
> The table is standalone (nothing inherits from it, and it does not
> inherit from another table).
>
> I need to change an int column to bigint.
>
> But apparently this ALTER recreates the table, and my problem is space:
> there is not enough free space on disk.
>
> Is it possible to change the column's data type without recreating the
> table?

Every tuple of this table will have to be rewritten with the new type; you can't avoid that. As none of the old tuples can be reclaimed before the ALTER TABLE completes, you'll need sufficient disk space for both the old and the new data.

I see no way of avoiding the need for more disk space. If you have sufficient space on another drive, you could dump the table data-only, then truncate it, then alter the type while it's empty, then restore the dump.

--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast
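A minimal sketch of the dump/truncate/alter/restore approach above, kept inside SQL by using COPY instead of pg_dump. All names are assumptions: table "big_table", int column "big_id", and "/mnt/other" standing in for a separate drive with enough free space. Note that server-side COPY TO/FROM a file runs as the server user and requires superuser privileges; psql's client-side \copy is the usual alternative.

```sql
-- Export the data to a file on another drive (assumed path).
COPY big_table TO '/mnt/other/big_table.copy';

-- TRUNCATE releases the table's disk space immediately.
TRUNCATE big_table;

-- On an empty table the type change is effectively instant,
-- since there are no tuples to rewrite.
ALTER TABLE big_table ALTER COLUMN big_id TYPE bigint;

-- Reload the data; values are read back in as bigint.
COPY big_table FROM '/mnt/other/big_table.copy';
```

Indexes and constraints on the table will make the reload step slow; dropping and recreating them around the COPY FROM is usually worthwhile on a table this size.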
On 08/10/2012 12:05 AM, John R Pierce wrote:
> On 08/09/12 10:31 PM, Anibal David Acosta wrote:
>> I have a very big table; in fact, this table alone uses approx. 60% of
>> the disk space.
>>
>> The table is standalone (nothing inherits from it, and it does not
>> inherit from another table).
>>
>> I need to change an int column to bigint.
>>
>> But apparently this ALTER recreates the table, and my problem is space:
>> there is not enough free space on disk.
>>
>> Is it possible to change the column's data type without recreating the
>> table?
>
> Every tuple of this table will have to be rewritten with the new type;
> you can't avoid that. As none of the old tuples can be reclaimed before
> the ALTER TABLE completes, you'll need sufficient disk space for both
> the old and the new data.
>
> I see no way of avoiding the need for more disk space. If you have
> sufficient space on another drive, you could dump the table data-only,
> then truncate it, then alter the type while it's empty, then restore
> the dump.

Seems like

    for i = 1 to 10
        select into table-with-bigint from table-with-int where <id mod 10 = i>
        delete from table-with-int where <id mod 10 = i>

or any such fractionation scheme might do the trick. And obviously more disk is inevitably required unless this table is frozen, which doesn't seem likely.
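A concrete version of the fractionation sketch above, with assumed names (source "big_table", int column "id", new table "big_table_new"). One caveat worth labeling loudly: plain VACUUM only returns space to the operating system when the free pages sit at the end of the table file, and rows matching `id % 10 = i` are scattered throughout, so deleting slices this way mostly makes the space reusable within the old table rather than freeing it for the new one. Range-based slices deleted from the top of the id range shrink the file more reliably.

```sql
-- New table with the same structure, but the column widened to bigint.
CREATE TABLE big_table_new (LIKE big_table INCLUDING ALL);
ALTER TABLE big_table_new ALTER COLUMN id TYPE bigint;

-- Repeat for i = 0 .. 9 (shown here for i = 0): move one tenth of
-- the rows, then vacuum so the source table's space can be reused.
INSERT INTO big_table_new SELECT * FROM big_table WHERE id % 10 = 0;
DELETE FROM big_table WHERE id % 10 = 0;
VACUUM big_table;
```

After the last slice, drop big_table and rename big_table_new into its place.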
On 08/10/2012 10:26 PM, Rob Sargent wrote:
> Seems like
>
>     for i = 1 to 10
>         select into table-with-bigint from table-with-int where <id mod 10 = i>
>         delete from table-with-int where <id mod 10 = i>
>
> or any such fractionation scheme might do the trick

For that matter, you can ALTER TABLE ... ADD COLUMN the bigint column without a default, then do batches of:

    BEGIN;
    UPDATE thetable
    SET bigintcol = intcol
    WHERE id > lastbatchmax AND id <= thisbatchmax;
    COMMIT;
    VACUUM thetable;

Because adding a column without a DEFAULT or USING clause doesn't write anything to the table heap, this will progressively rewrite the table in place. You can have other read-only clients continue to use the table so long as their transactions are short. Even read/write clients are OK, and you can add a trigger that sets the bigint column from the int column whenever a row is updated for any other reason.

Eventually you'll find you no longer have any NULL entries in the bigint column, so you can ALTER TABLE ... ALTER COLUMN ... SET NOT NULL it, drop the int column, and rename the bigint one within a single transaction. Active clients will only block briefly.

--
Craig Ringer
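The steps described above can be sketched end to end as follows. The names ("thetable", "intcol", "bigintcol", and the trigger/function names) are placeholders, and the batched UPDATE loop from the post runs between the two parts shown here.

```sql
-- 1. Add the new column. With no DEFAULT, this touches only the
--    catalog, not the table heap, so it is fast and needs no space.
ALTER TABLE thetable ADD COLUMN bigintcol bigint;

-- 2. Keep concurrently written rows in sync while the backfill runs.
--    (Hypothetical function/trigger names.)
CREATE FUNCTION sync_bigintcol() RETURNS trigger AS $$
BEGIN
    NEW.bigintcol := NEW.intcol;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER thetable_sync_bigintcol
    BEFORE INSERT OR UPDATE ON thetable
    FOR EACH ROW EXECUTE PROCEDURE sync_bigintcol();

-- 3. Run the batched UPDATE ... / VACUUM loop from the post until no
--    NULLs remain in bigintcol, then swap the columns atomically:
BEGIN;
ALTER TABLE thetable ALTER COLUMN bigintcol SET NOT NULL;
ALTER TABLE thetable DROP COLUMN intcol;
ALTER TABLE thetable RENAME COLUMN bigintcol TO intcol;
DROP TRIGGER thetable_sync_bigintcol ON thetable;
DROP FUNCTION sync_bigintcol();
COMMIT;
```

The final transaction takes an exclusive lock on the table but does no data rewrite, which is why active clients only block briefly. Any indexes or foreign keys referencing the old int column need to be recreated against the new one as part of the same transaction.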