Re: Problem with multi-job pg_restore - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: Problem with multi-job pg_restore
Date:
Msg-id: 16092.1335891540@sss.pgh.pa.us
In response to: Problem with multi-job pg_restore (Brian Weaver <cmdrclueless@gmail.com>)
List: pgsql-hackers

Brian Weaver <cmdrclueless@gmail.com> writes:
> I think I've discovered an issue with multi-job pg_restore on a 700 GB
> data file created with pg_dump.

Just to clarify, you mean parallel restore, right?  Are you using any
options beyond -j, that is any sort of selective restore?

> The problem occurs during the restore when one of the bulk loads
> (COPY) seems to get disconnected from the restore process. I captured
> stdout and stderr from the pg_restore execution and there isn't a
> single hint of a problem. When I look at the log file in the
> $PGDATA/pg_log directory I found the following errors:

> LOG:  could not send data to client: Connection reset by peer
> STATEMENT:  COPY public.outlet_readings_rollup (id, outlet_id,
> rollup_interval, reading_time, min_current, max_current,
> average_current, min_active_power, max_active_power,
> average_active_power, min_apparent_power, max_apparent_power,
> average_apparent_power, watt_hour, pdu_id, min_voltage, max_voltage,
> average_voltage) TO stdout;

I'm confused.  A copy-to-stdout ought to be something that pg_dump
would do, not pg_restore.  Are you sure this is related at all?
        regards, tom lane
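
For context on that last point, the direction of the logged COPY statement indicates which tool issued it. A minimal sketch of the SQL each side runs (table name taken from the log above, column list abbreviated here for brevity):

```sql
-- pg_dump reads table data OUT of the server, so the server log
-- records a COPY ... TO STDOUT statement:
COPY public.outlet_readings_rollup (id, outlet_id, rollup_interval) TO STDOUT;

-- pg_restore (including parallel restore with -j) loads data INTO
-- the server, which the log records as COPY ... FROM STDIN:
COPY public.outlet_readings_rollup (id, outlet_id, rollup_interval) FROM STDIN;
```

Since the log shows a `TO stdout` form paired with "could not send data to client", the server was sending data to a client when the connection dropped, which points at a dump-side (read) session rather than the restore.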

