On Fri, May 29, 2020 at 16:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Pavel Stehule <pavel.stehule@gmail.com> writes:
> One of my customers has to specify dumped tables name by name. After
> years of growing database size and table counts, he now has a problem
> with a too-short command line. He needs to read the list of tables
> from a file (or from stdin).
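A minimal sketch of the workaround being described (the file name and table names are illustrative): expanding a table list into repeated -t options. With thousands of tables this command line eventually exceeds the kernel's argument-length limit (ARG_MAX), which is the problem above.

```shell
# Hypothetical table list, one name per line (illustrative names).
printf 'public.sales_v1\npublic.orders_v1\n' > /tmp/tables.txt

# Current workaround: turn each line into a "-t <name>" option pair.
# With ~10K tables the resulting command line can exceed ARG_MAX.
TABLE_ARGS=$(sed 's/^/-t /' /tmp/tables.txt | tr '\n' ' ')

# Shown with echo instead of actually invoking pg_dump.
echo pg_dump $TABLE_ARGS mydb
```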
I guess the question is why. That seems like an enormously error-prone approach. Can't they switch to selecting schemas? Or excluding the hopefully-short list of tables they don't want?
It is not a typical application. It is an analytic application where the database schema is driven by a dynamic specification from the end user (end users can customize it at any time). So the schema is very dynamic.
For example, a typical server has about four thousand databases, and every database has somewhere between 1K and 10K tables.
Another particularity is that different tables hold different versions of the data. A user can work with one set of data (one set of tables) while the application prepares a new set of data (a new set of tables). Loading can be slow, because sometimes fairly large tables are filled (about forty GB). pg_dump backs up one set of tables (a little bit like a snapshot of the data). So it is a strange, but successful, OLAP application.