Hi,
Reynard Hilman wrote:
>
>> OK, so how do we handle things like converting a varchar to a
>> timestamp with time zone? What if one of the rows has invalid date
>> syntax? Do we convert the column anyway, or throw the whole change
>> out with an error?
>>
> Being a developer rather than a DBA, I tend to think little about the
> danger of losing data when changing a column type. But I think you're
> right: very bad things could happen to your data if the database
> silently converts or truncates it when you change the column type.
> As far as I remember, Oracle allows you to change a column's type as
> long as you don't lose or corrupt data. So, for example, changing
> varchar(20) to varchar(40) should be OK, but the reverse might not be.
> It would be nice if it allowed changing varchar(40) to varchar(20)
> when no stored value is longer than 20 characters. I don't know how
> much complexity that would add, though. Well, that's just an idea
> from me.
This would of course be convenient, but otoh it would add a great deal
of complexity to the backend for a rarely used feature. At least it
should be rarely used :)
For the rare cases where someone has to change a column type, it may be
easier to do it by hand - and then there is no complaining to the
overloaded core developers about data loss or unexpected results.
The only slightly complicated part is the disable/enable trigger
sequence, which can be borrowed from pg_dump output.
The rest can all be put into a single transaction, or at least the
delete-rows/copy-back part:
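For reference, the sequence pg_dump emits for this looks roughly like the following (the table name "original" is a placeholder; zeroing reltriggers requires superuser rights):

```sql
-- Disable all triggers on "original" by zeroing its trigger count,
-- as pg_dump's disable-triggers output does:
UPDATE pg_catalog.pg_class SET reltriggers = 0
WHERE oid = 'original'::pg_catalog.regclass;

-- ... do the data manipulation here ...

-- Re-enable the triggers by restoring the real trigger count:
UPDATE pg_catalog.pg_class SET reltriggers = (
    SELECT count(*) FROM pg_catalog.pg_trigger
    WHERE tgrelid = 'original'::pg_catalog.regclass
) WHERE oid = 'original'::pg_catalog.regclass;
```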
create table temptable as select ... from original ...;
delete from original;
alter table original add column newcol (with the new type);
alter table original drop column oldcol;
insert into original select ... from temptable;
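A concrete sketch of those steps, wrapped in one transaction, for a hypothetical table "original" whose varchar column "created_txt" is converted to timestamp with time zone (all names here are made up):

```sql
BEGIN;

-- Save the current rows in a temporary copy.
CREATE TEMP TABLE tmp_original AS
    SELECT id, created_txt FROM original;

-- Empty the table, then swap the column for one of the new type.
DELETE FROM original;
ALTER TABLE original ADD COLUMN created_ts timestamptz;
ALTER TABLE original DROP COLUMN created_txt;

-- Copy the rows back, casting to the new type.  A row with invalid
-- date syntax makes the cast raise an error and aborts the whole
-- transaction, so no half-converted data is left behind.
INSERT INTO original (id, created_ts)
    SELECT id, created_txt::timestamptz FROM tmp_original;

COMMIT;
```

Note that the cast failing on bad data is exactly the "throw the whole change out with an error" behaviour discussed above.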
Regards
Tino