Re: Any Good Way To Do Sync DB's? - Mailing list pgsql-general

From Gordan Bobic
Subject Re: Any Good Way To Do Sync DB's?
Date
Msg-id Pine.LNX.4.33.0110130527210.28869-100000@sentinel.bobich.net
In response to Re: Any Good Way To Do Sync DB's?  (Doug McNaught <doug@wireboard.com>)
Responses Re: Any Good Way To Do Sync DB's?  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
On 12 Oct 2001, Doug McNaught wrote:

> Joseph Koenig <joe@jwebmedia.com> writes:
>
> > I have a project where a client has products stored in a large Progress
> > DB on an NT server. The web server is a FreeBSD box though, and the
> > client wants to try to avoid the $5,500 license for the Unlimited
> > Connections via OpenLink software and would like to take advantage of
> > the 'free' non-expiring 2 connection (concurrent) license. This wouldn't
> > be a huge problem, but the DB can easily reach 1 million records. Is
> > there any good way to pull this data out of Progress and get it into
> > Postgres? This is way too large of a db to do a "SELECT * FROM table"
> > and do an insert for each row. Any brilliant ideas? Thanks,
>
> Probably the best thing to do is to export the data from Progress in a
> format that the PostgreSQL COPY command can read.  See the docs for
> details.
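
[A minimal sketch of the export step Doug describes: rendering rows into the
tab-delimited text format that COPY reads by default. The row data and the
`products` table name below are made up for illustration; the escaping rules
(tab delimiter, \N for NULL, backslash escapes) follow COPY's documented
text-format defaults.]

```python
# Sketch: turn exported rows into PostgreSQL COPY's default text format.
# Assumes COPY defaults: tab delimiter, \N for NULL, backslash escapes.
import io

def to_copy_text(rows):
    """Render an iterable of row tuples as COPY-compatible text."""
    out = io.StringIO()
    for row in rows:
        fields = []
        for value in row:
            if value is None:
                fields.append(r"\N")          # COPY's NULL marker
            else:
                s = str(value)
                # Escape backslashes first, then delimiters/newlines.
                s = (s.replace("\\", "\\\\")
                       .replace("\t", "\\t")
                       .replace("\n", "\\n")
                       .replace("\r", "\\r"))
                fields.append(s)
        out.write("\t".join(fields) + "\n")
    return out.getvalue()

rows = [(1, "widget", None), (2, "gad\tget", 9.99)]
print(to_copy_text(rows), end="")
```

A file written this way can then be loaded with something like
`COPY products FROM '/path/to/export.txt';` (table name hypothetical).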

I'm going to have to rant now. The "dump" and "restore" cycle that uses the
COPY method is effectively useless for large databases. The reason is simple:
restoring a 4 GB table with 40M rows requires over 40 GB of temporary scratch
space, due to the WAL temp files. That sounds totally silly. Why doesn't
pg_dump insert commits every 1000 rows or so???
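
[The commit-every-1000-rows idea above could be sketched as batching the row
stream so each chunk is loaded and committed separately, bounding the size of
any one transaction. The batching helper below is self-contained; the
commented-out loading loop uses a hypothetical DB-API connection and table,
not an actual pg_dump feature.]

```python
# Sketch: split a row stream into fixed-size batches so each batch can be
# committed on its own, instead of one huge transaction for the whole table.
from itertools import islice

def batched(rows, size=1000):
    """Yield lists of at most `size` items from any iterable."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical usage with a DB-API connection `conn` and cursor `cur`:
# for batch in batched(row_source, 1000):
#     cur.executemany("INSERT INTO products VALUES (%s, %s)", batch)
#     conn.commit()   # commit per batch, keeping transaction size bounded
```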

Cheers.

Gordan

