Ted Rolle writes:
> We have 73 databases, two dozen with hundreds of thousands to millions of
> records, with lengths in the 500-byte range. I'm planning to convert them
> from Btrieve to PostgreSQL.
>
> Of course, I want the highest reasonable speed so that the conversion can be
> completed in, say, a weekend.
The fastest possible way to get data loaded into PostgreSQL is to create a
tab-delimited file and feed it directly to the backend with the COPY
command. To speed things up even more, turn off fsync (start the
postmaster with -F), create the indexes only after loading, and do the
same with any triggers you have. I'd
like to think that all of this should take significantly less than a
weekend. ;-)
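A minimal sketch of that load sequence; the table name, file path, and
index name here are placeholders, not anything from the original databases:

```sql
-- Hypothetical example: "customers" and the file path are made up.
-- (Start the postmaster with -o -F beforehand so fsync is off
-- during the bulk load.)
COPY customers FROM '/data/customers.tab';  -- tab-delimited by default

-- Build indexes only once all the rows are in:
CREATE INDEX customers_name_idx ON customers (name);
```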
Massaging the data into the format COPY expects can be done with your
favourite text mashing tools.
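For instance, here is one hedged sketch (in Python, purely as an
illustration of the "text mashing" step) of turning records into the
tab-delimited lines COPY reads: NULLs become \N, and embedded tabs,
newlines, and backslashes get backslash-escaped. The function and record
names are invented for the example.

```python
def copy_escape(value):
    # COPY's text format represents NULL as \N and backslash-escapes
    # tabs, newlines, and backslashes embedded in field values.
    if value is None:
        return r"\N"
    return (value.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n"))

def to_copy_line(record):
    # One output line per record, fields separated by tabs.
    return "\t".join(copy_escape(f) for f in record) + "\n"

# Example record with an embedded tab and a NULL field:
line = to_copy_line(["Rolle, Ted", "a\tb", None])
# -> 'Rolle, Ted\ta\\tb\t\\N\n'
```

Whatever tool you use, the point is the same: one line per record, one
tab between fields, and the three escape cases handled before the data
ever reaches the backend.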
--
Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter