I ended up writing a 1000-line SQL file with the CREATE TABLESPACE commands for all 1000 tablespaces. I keep the file with my bootstrap scripts and load it with the psql client.
It works just fine and it is quite fast. I was a bit afraid it would be time consuming, but it's not, really. Cool!
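For reference, here is a minimal sketch of how such a file can be generated and loaded. The paths, database name, and user are placeholders, and each tablespace location must already exist inside the server's filesystem and be owned by the postgres user:

```bash
# Generate CREATE TABLESPACE statements for tablespace_000 .. tablespace_999.
# The directory layout is an assumption; adjust it to your setup.
for i in $(seq -w 0 999); do
  mkdir -p "/var/lib/postgresql/tablespaces/ts_${i}"
  echo "CREATE TABLESPACE tablespace_${i} LOCATION '/var/lib/postgresql/tablespaces/ts_${i}';"
done > create_tablespaces.sql

# Load the file with psql (connection details are placeholders).
psql -U postgres -d testdb -f create_tablespaces.sql
```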
The Pg backend is actually running in Docker, which is very handy for my scenario.
The solution depends on how you are creating the tables.
For example, pg_restore has the option --no-tablespaces. With this option, all objects will be created in whichever tablespace is the default during restore. pg_dump has a similar option.
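For instance (the file name and target database below are placeholders):

```bash
# Restore a custom-format dump, ignoring any tablespace assignments it contains.
pg_restore --no-tablespaces -d testdb backup.dump

# pg_dump accepts the same flag, so tablespaces can be omitted at dump time instead.
pg_dump --no-tablespaces -Fc -f backup.dump sourcedb
```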
If you are running CREATE TABLE statements with hard-coded tablespaces, then you could pass your scripts through sed or awk to replace the tablespace name with pg_default just prior to execution, as in the sketch below.
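A minimal sketch, assuming the migrations live in a single migration.sql file and use the tablespace_NNN naming from the question:

```bash
# Rewrite every hard-coded tablespace reference to pg_default, then pipe the
# result straight into psql (connection details are placeholders).
sed -E 's/TABLESPACE[[:space:]]+tablespace_[0-9]{3}/TABLESPACE pg_default/g' \
    migration.sql | psql -U postgres -d testdb
```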
Or, go ahead and create all possible tablespaces before running the CREATE TABLE statements, since a PostgreSQL tablespace is little more than a pointer to a directory, not pre-allocated data files as in Oracle or SQL Server.
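If you go that route, you can generate the statements directly from psql with \gexec instead of maintaining a pre-written file. This is only a sketch; the base directory is an assumption and must already exist, with one postgres-owned subdirectory per tablespace:

```sql
-- Build one CREATE TABLESPACE statement per row and execute each with \gexec.
SELECT format('CREATE TABLESPACE tablespace_%s LOCATION %L',
              lpad(i::text, 3, '0'),
              '/var/lib/postgresql/tablespaces/ts_' || lpad(i::text, 3, '0'))
FROM generate_series(0, 999) AS s(i)
\gexec
```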
I have a big set of migration queries (that I do not control) that I must run on my automatic test database in order to set it up and run tests. These queries create all sorts of things like indexes, tables, and so on. But they also specify the tablespace they expect to use (tablespace_000, tablespace_001, up to tablespace_999). This would require me to set up hundreds of tablespaces before I can start the migration process and run the tests.
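For illustration, a typical statement in those scripts looks roughly like this (table and tablespace names are made up), and it fails because the referenced tablespace does not exist on the test server:

```sql
CREATE TABLE accounts (
    id   bigint PRIMARY KEY,
    name text
) TABLESPACE tablespace_042;
-- ERROR:  tablespace "tablespace_042" does not exist
```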
Is there a way to tell the Postgres server to ignore that part and just use some default tablespace? My present situation is that I cannot bring the test database to a usable state, because many migration queries fail since the tablespace they need has not been created. My problem is that I would like to avoid creating them.