ow <oneway_111@yahoo.com> writes:
> My concern though ... wouldn't pgSql server collapse when faced with
> transaction spawning across 100M+ records?
The number of records involved really doesn't faze Postgres at all. However, the
amount of time spent in the transaction could be an issue if there is other
activity in other schemas of the same database.
As long as the transaction is running, none of the deleted or old updated data
in any schema of the database can be cleaned up by vacuum, because Postgres thinks
the big transaction "might" still need to see it.
So if the rest of the database is still active, the tables and indexes being
updated may grow larger than normal. If it goes on for a _really_ long time
they might need a VACUUM FULL at some point to shrink them back down.
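(If you want to see how much dead space is piling up, the stats views give a
rough picture -- again assuming a recent Postgres with pg_stat_user_tables:

    -- tables with the most dead (not yet vacuumable) row versions
    SELECT relname, n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 10;

Once the big transaction has committed, a plain VACUUM on those tables will make
the dead space reusable, and VACUUM FULL will actually give it back to the OS if
the bloat got really bad.)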
--
greg