We have an application in which some of the big transaction tables will eventually have ~200 columns once all the business use cases are served. Currently they have ~100 columns, covering the initial use cases. Because the team follows an agile approach, attributes will be added gradually with each future release. The concern is that these transaction tables will hold billions of rows (historical data is migrated from the start) and will be range partitioned; each partition can be as big as ~100 GB, and the full table can run to multiple terabytes.
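For context, the tables in question look roughly like this (table, column, and partition names below are made up for illustration, not our actual schema):

    -- Hypothetical sketch of one of the big transaction tables,
    -- range partitioned by transaction date.
    CREATE TABLE transaction_tab (
        txn_id   bigint NOT NULL,
        txn_date date   NOT NULL,
        amount   numeric(20,2),
        -- ... ~100 more attribute columns today, growing toward ~200
        PRIMARY KEY (txn_id, txn_date)
    ) PARTITION BY RANGE (txn_date);

    -- One of many yearly/monthly partitions, each potentially ~100 GB.
    CREATE TABLE transaction_tab_2024
        PARTITION OF transaction_tab
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');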
Since adding a column with the traditional ALTER TABLE command in Postgres appears to require a full table rewrite, our concern is that each such release would take a lot of time and resources on these big transaction tables. What is the way to handle such scenarios and do these column additions quickly, with no or minimal downtime?
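For reference, the per-release DDL we are worried about is simply of this form (column name hypothetical):

    -- Run on the partitioned parent; Postgres propagates it to all partitions.
    ALTER TABLE transaction_tab ADD COLUMN risk_score numeric;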
Normalizing the database design eliminates the need to rewrite the table at all, since new attributes are added as new rows in a separate table rather than as new columns on the transaction table.
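A minimal sketch of what that could look like, assuming a generic attribute table keyed by the transaction (all names are illustrative, and partitioning is omitted for brevity):

    -- Core transaction table keeps only the stable, always-present columns.
    CREATE TABLE transaction_core (
        txn_id   bigint PRIMARY KEY,
        txn_date date   NOT NULL
    );

    -- Each additional attribute is a row here, so shipping a new attribute
    -- in a release requires no ALTER TABLE on the big table at all.
    CREATE TABLE transaction_attribute (
        txn_id     bigint NOT NULL REFERENCES transaction_core (txn_id),
        attr_name  text   NOT NULL,
        attr_value text,
        PRIMARY KEY (txn_id, attr_name)
    );

    -- "Adding" a new attribute for a transaction is just inserting rows:
    INSERT INTO transaction_attribute (txn_id, attr_name, attr_value)
    VALUES (12345, 'risk_score', '0.87');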