Hi Didier
On 8 March 2011 14:24, Didier Gasser-Morlay
<didiergm@gmail.com> wrote:
Craig,
1. ODBC is not fast by nature adding a layer on top of the direct access libraries. Can you dump from cobol to text delimited file (or fixed length) and then up into Postgres ?
The native reporting tool "Impromptu" uses the same ODBC link and transfers data at a significantly faster rate. Unfortunately it seems to have a file-size cap of 4 GB, although I've not yet tried CSV output; I will give that a go.
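If the CSV route works out, loading the file into Postgres with COPY should be far faster than row-by-row ODBC inserts. A minimal sketch of preparing such a file with Python's csv module (the table, columns, and sample rows here are hypothetical, just to show the quoting COPY expects):

```python
import csv
import io

def rows_to_csv(rows, out):
    """Write rows as CSV suitable for Postgres COPY ... WITH (FORMAT csv)."""
    writer = csv.writer(out, quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
    for row in rows:
        writer.writerow(row)

# Hypothetical sample rows as they might come off the COBOL side.
rows = [
    (1, "ACME Ltd", "2011-03-08"),
    (2, 'Widgets "R" Us', "2011-03-07"),  # embedded quotes are doubled for us
]

buf = io.StringIO()
rows_to_csv(rows, buf)
print(buf.getvalue())
```

The resulting file can then be loaded server-side with `COPY mytable FROM '/path/dump.csv' WITH (FORMAT csv)` or client-side with psql's `\copy`, which avoids the per-row ODBC round trips entirely.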
2. When you say 'raises an error': what error do you get?
The error unfortunately does not help much: it complains about not being able to access a file under /var/log/, but that path itself is a bug on a Windows machine, so I don't get to see the underlying error (I am working on a bug report).
3. Four million rows is a fairly sizeable amount of data; have you checked that you have enough space on the disk where the Postgres DB resides?
Yes, plenty of space; it's a new, data-only disk. I will monitor C:\ drive space on the next attempt in case it's a temp-space issue.
4. Can't you do your upload piece by piece? I would have thought that, of the n million rows, a large chunk is historical data which does not need to be exported on a daily basis.
Indeed, I will only need incremental data following the initial dump for most tables. I am working on an incremental data-dump Python script as time permits; this is how I intend to maintain the data.
It would be nice to get a good head start while I work on this, and it will also let me gather some metrics to justify the time invested in the scripts.
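One simple shape for the incremental script is a high-water-mark filter: remember the newest modification timestamp exported so far and pull only rows beyond it on each run. A rough sketch, with a hypothetical (key, modified, payload) row layout and the ODBC plumbing left out:

```python
from datetime import datetime

def incremental_rows(rows, last_seen):
    """Return rows modified after last_seen, plus the new high-water mark.

    `rows` are (key, modified, payload) tuples; in the real script they
    would come from the COBOL side via ODBC (layout here is assumed).
    """
    fresh = [r for r in rows if r[1] > last_seen]
    new_mark = max((r[1] for r in fresh), default=last_seen)
    return fresh, new_mark

# Hypothetical data: three rows, two changed since the last run.
rows = [
    (1, datetime(2011, 3, 7, 9, 0), "old"),
    (2, datetime(2011, 3, 8, 9, 0), "new"),
    (3, datetime(2011, 3, 8, 12, 0), "newer"),
]
fresh, mark = incremental_rows(rows, datetime(2011, 3, 7, 12, 0))
print(len(fresh), mark)
```

Persisting `mark` between runs (in a small state table or file) is what keeps each daily export down to the changed rows only; this assumes the source tables carry a usable last-modified column.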
Either way I have a way forward; it would just be handy if anyone knew of a good tool for the initial data dump.
Didier
Thank you for the ideas and suggestions; you have given me some new things to try. I will report back on how I get on.
Craig