Re: pg_restore >10million large objects - Mailing list pgsql-admin

From bricklen
Subject Re: pg_restore >10million large objects
Date
Msg-id CAGrpgQ_oWuPdGgM1GjW42eW7AM10RUzpAWf6kqZj0xc4QZoU4w@mail.gmail.com
In response to pg_restore >10million large objects  (Mike Williams <mike.williams@comodo.com>)
List pgsql-admin

On Mon, Dec 23, 2013 at 7:19 AM, Mike Williams <mike.williams@comodo.com> wrote:

How can restoring a database with a lot of large objects run faster?

It seems that each "SELECT pg_catalog.lo_create('xxxxx');" is run
independently and sequentially, despite having --jobs=8 specified.


I don't have an answer for why the restore seems to be serialized, but have you considered creating your pg_dump (-Fc) while excluding all the large objects, then dumping or COPYing the large objects out separately so you can import them with a manually-specified number of processes? By "manually specified", I mean executing a number of COPY FROM commands using separate threads.
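The fan-out step could be sketched like this (a minimal illustration, not the actual import: the worker here is a stub, and in a real run each worker would open its own connection and issue lo_import or COPY FROM for its share of the OIDs):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(oids, n):
    """Split the OID list round-robin into n roughly equal chunks, one per worker."""
    return [oids[i::n] for i in range(n)]

def import_chunk(oids):
    # Stub worker: a real implementation would connect to the database
    # and import each large object (e.g. via lo_import or COPY FROM).
    return len(oids)

def parallel_import(oids, jobs=8):
    """Import large objects using `jobs` concurrent workers."""
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return sum(pool.map(import_chunk, chunk(oids, jobs)))
```

The list of OIDs itself could come from a query against pg_largeobject_metadata before the dump.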
