On Mon, Feb 03, 2020 at 10:01:04AM -0600, Steven Lembark wrote:
> On Fri, 31 Jan 2020 19:24:41 +0100
> Matthias Apitz <guru@unixarea.de> wrote:
>
> > Hello,
> >
> > For ages we have transferred data between different DBMS (Informix,
> > Sybase, Oracle, and now PostgreSQL) with a tool of our own, based on
> > Perl::DBI, which produces a CSV-like export in a common format, i.e. an
> > export from Oracle can be loaded into Sybase and vice versa. Export and
> > import are done row by row, for some tables millions of rows.
> >
> > We produced a special version of the tool to export the rows into a
> > format which PostgreSQL's COPY command understands and found that the
> > import of the same data into PostgreSQL with COPY is 50 times faster
> > than with Perl::DBI: 2.5 minutes vs. 140 minutes for around 6 million
> > rows into an empty table without indexes.
> >
> > How can COPY do this so fast?
>
> DBI is a wonderful tool, but not intended for bulk transfer. It
> is useful for post-processing queries that extract specific
> data in ways that SQL cannot readily handle.
>
> One big slowdown is that the pull-a-row, push-a-row cycle involves
> significant latency from per-row round trips on the database
> connection. That limits the throughput.
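
Roughly, such a per-row DBI import loop looks like the sketch below
(connection parameters, table and column names are placeholders, not our
actual schema):

use strict;
use warnings;
use DBI;

# hypothetical connection parameters
my $dbh = DBI->connect('dbi:Pg:dbname=testdb', 'user', 'secret',
                       { AutoCommit => 0, RaiseError => 1 });

# one prepared INSERT, executed once per exported row;
# every execute() is a client/server round trip
my $sth = $dbh->prepare('INSERT INTO items (id, name) VALUES (?, ?)');

open my $fh, '<', 'items.csv' or die "items.csv: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($id, $name) = split /;/, $line;   # naive split, just for the sketch
    $sth->execute($id, $name);
}
close $fh;

$dbh->commit;
$dbh->disconnect;
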
I should have mentioned this: the export is written to a file on Linux,
and the import with that tool reads from such files.
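
For comparison, the COPY-based load of such a file can be streamed in a
single COPY operation with DBD::Pg; a minimal sketch, again with
placeholder names for the file, table and columns:

use strict;
use warnings;
use DBI;

# hypothetical connection parameters
my $dbh = DBI->connect('dbi:Pg:dbname=testdb', 'user', 'secret',
                       { AutoCommit => 0, RaiseError => 1 });

# one COPY statement for the whole file; rows are streamed to the
# server without a per-row round trip
$dbh->do('COPY items (id, name) FROM STDIN');

open my $fh, '<', 'items.copy' or die "items.copy: $!";
while (my $line = <$fh>) {
    $dbh->pg_putcopydata($line);   # line is already in COPY text format
}
close $fh;

$dbh->pg_putcopyend();
$dbh->commit;
$dbh->disconnect;

The whole file goes over a single COPY stream and the server parses the
rows in bulk, instead of paying one round trip per INSERT.
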
matthias
--
Matthias Apitz, ✉ guru@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub