Time is not really a problem for me, as long as we are talking about hours rather than days. On a roughly comparable machine I’ve made backups of databases under 10 GB, and it was a matter of minutes. But I know that there are scale problems: sometimes programs just hang once the data grow beyond a certain size. Is that likely with Postgres if you go from ~10 GB to ~100 GB? There isn’t any interdependence among my tables beyond queries I construct on the fly, because I use the database in a single-user environment.
The convention on these lists is to inline and/or bottom-post; please avoid top-posting.
That you are using a relational database system to house tables without any interdependence (relationships) between them is an interesting proposition. That you are in a "single user environment" would, in most cases, have no impact on any of this...
PostgreSQL itself, bugs notwithstanding, won't "hang" no matter how much data is being processed. It does, however, take out locks so that the entire dump represents the exact same snapshot for all dumped objects. Those locks can impact other queries: in particular, TRUNCATE becomes pretty much impossible while a dump backup is in progress (I get bitten by this myself, since I tend to truncate unlogged tables quite a bit in my usage of PostgreSQL). Normal updates and selects usually work without problems, though any transaction started after the backup begins will not be part of the output, no matter how long before the backup finishes that transaction commits.
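To make the lock interaction concrete, here is a rough two-terminal sketch (the database and table names are placeholders I made up): pg_dump takes ACCESS SHARE locks on every table it dumps, while TRUNCATE needs ACCESS EXCLUSIVE, which conflicts with every other lock mode, so the TRUNCATE simply waits.

    # terminal 1: start the dump; pg_dump holds ACCESS SHARE locks on all
    # dumped tables until it finishes ("mydb" is a placeholder name)
    pg_dump -Fc -f mydb.dump mydb

    # terminal 2, while the dump is running: TRUNCATE wants ACCESS
    # EXCLUSIVE, which conflicts with ACCESS SHARE, so this just blocks
    # until the dump completes
    psql mydb -c 'TRUNCATE scratch_table'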
I suspect you will typically end up annoyed at how long the backup takes well before any program/system issues become apparent. Data is streamed to the output file handle, so active memory usage is not really correlated with database size.
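As a rough illustration (the flags, paths, and database name below are just one plausible invocation, not a recommendation): pg_dump writes rows out as it reads them, so only the elapsed time and the output file grow with the database, not the memory footprint.

    # custom-format dump, compressed as it streams to the output file;
    # memory use stays roughly flat regardless of database size
    # (paths and "mydb" are placeholders)
    pg_dump -Fc -Z 6 -f /backups/mydb.dump mydb

    # plain-format output can equally be piped, e.g. compressed on the fly,
    # since nothing needs to be held in memory
    pg_dump mydb | gzip > /backups/mydb.sql.gz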