On 1/30/20 3:08 PM, Andrus wrote:
> Hi!
>
>> You need to connect to a database that exists with --dbname, for
>> instance --dbname=postgres. Postgres will then use that connection to
>> create the new database, in your case mydb.
>
> Thank you, this seems to work.
>
> There are 24 databases in total; the .backup files total 37 GB in size,
> approximately 60% of which comes from bytea columns (PDF documents, images).
> I am using a VPS server with 4 cores and 11 GB RAM, used only for Postgres.
> What is the fastest way to restore the data from all of them into empty
> databases? Should I run all the commands in sequence, like
These days 37 GB is relatively small, so you may be getting into the
realm of premature optimization. Do the 24 databases represent an entire
cluster you are trying to transfer? If so, have you looked at pg_dumpall?:
https://www.postgresql.org/docs/12/app-pg-dumpall.html
It is a text-based backup, but it will include all of the databases and
the globals.
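A minimal sketch of that route (assuming passwordless access as a
superuser; adjust the connection options to your setup):

pg_dumpall --clean --file=cluster.sql
psql --dbname=postgres --file=cluster.sql

The output is plain SQL, so the restore is single-threaded, but it
carries the roles and tablespaces that per-database dumps leave out.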
Otherwise, I think you are going to find that the commands below will not
make much difference given the overall size of the backup files.
>
> pg_restore --clean --create --if-exists --verbose --dbname=postgres --jobs=4 "database1.backup"
> pg_restore --clean --create --if-exists --verbose --dbname=postgres --jobs=4 "database2.backup"
> ...
> pg_restore --clean --create --if-exists --verbose --dbname=postgres --jobs=4 "database24.backup"
>
> or run them all in parallel, without --jobs=4, like
>
> pg_restore --clean --create --if-exists --verbose --dbname=postgres "database1.backup" &
> pg_restore --clean --create --if-exists --verbose --dbname=postgres "database2.backup" &
> ...
> pg_restore --clean --create --if-exists --verbose --dbname=postgres "database24.backup" &
>
>
> or some balance between the two?
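If you do want a middle ground, something like the following would keep
the machine busy without oversubscribing the four cores (a sketch only,
assuming GNU xargs and that the *.backup files are in the current
directory):

ls *.backup | xargs -P 2 -I {} pg_restore --clean --create --if-exists --dbname=postgres --jobs=2 "{}"

That runs two restores at a time, each with two parallel jobs, so at
most four jobs in total.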
> Is there some Postgres or Debian setting which can be used during the
> restore to speed it up?
> I use shared_buffers=1GB; the other settings are the Debian installation defaults.
>
> Andrus.
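On the settings question: a few postgresql.conf values are commonly
loosened for the duration of a bulk load (a sketch only; fsync = off in
particular can corrupt the whole cluster if the server crashes
mid-restore, so only use it when you can simply start the restore over,
and reset everything afterwards):

maintenance_work_mem = 2GB   # faster index and constraint builds
max_wal_size = 8GB           # fewer checkpoints during the load
checkpoint_timeout = 30min
autovacuum = off             # no point vacuuming mid-restore
fsync = off                  # unsafe, see above
full_page_writes = off

With 11 GB of RAM your shared_buffers=1GB is reasonable; raising
maintenance_work_mem tends to matter more for a restore than
shared_buffers does.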
--
Adrian Klaver
adrian.klaver@aklaver.com