Re: pg_dump multi VALUES INSERT - Mailing list pgsql-hackers

From Stephen Frost
Subject Re: pg_dump multi VALUES INSERT
Date
Msg-id 20181017190528.GD4184@tamriel.snowman.net
In response to Re: pg_dump multi VALUES INSERT  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: pg_dump multi VALUES INSERT  (Michael Paquier <michael@paquier.xyz>)
List pgsql-hackers
Greetings,

* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> Surafel Temesgen <surafel3000@gmail.com> writes:
> > According to the documentation, the --inserts option is mainly useful for
> > making dumps that can be loaded into non-PostgreSQL databases and for
> > reducing the number of rows that might be lost due to an error during
> > reloading, but multi-row VALUES INSERT commands are equally portable and
> > compact, and also faster to reload than single-row statements. I think it
> > deserves an option of its own
>
> I don't actually see the point of this.  If you want dump/reload speed
> you should be using COPY.  If that isn't your first priority, it's
> unlikely that using an option like this would be a good idea.  It makes
> the effects of a bad row much harder to predict, and it increases your
> odds of OOM problems with wide rows substantially.
>
> I grant that COPY might not be an option if you're trying to transfer
> data to a different DBMS, but the other problems seem likely to apply
> anywhere.  The bad-data hazard, in particular, is probably a much larger
> concern than it is for Postgres-to-Postgres transfers.

I can't say that I really buy into this argument.

The point of it is that it makes loading into other RDBMSes faster.  Yes,
it has many of the same issues as our COPY does, but we support COPY
because it's much faster.  The same is true here, just for other
databases, so I'm +1 on the general idea.
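For illustration, the difference in dump output being discussed might look
like the following sketch (the table name and values are hypothetical, and
the exact shape of the proposed option's output is up to the patch):

```sql
-- What pg_dump --inserts emits today: one INSERT statement per row.
INSERT INTO t (a, b) VALUES (1, 'x');
INSERT INTO t (a, b) VALUES (2, 'y');
INSERT INTO t (a, b) VALUES (3, 'z');

-- What a multi-row VALUES mode would emit: several rows batched into
-- a single INSERT statement, which most databases parse and execute
-- faster than the equivalent series of single-row statements.
INSERT INTO t (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z');
```

The trade-off raised upthread applies to the batched form: one bad row
makes the whole statement fail, and very wide batches cost more memory.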

I've not looked at the patch itself at all, yet.

Thanks!

Stephen

