I’ll try a Windows compile of pgloader sometime during the holidays. It’s true that I already have a workaround (export in chunks of <= 65,000 rows, import each chunk into Excel, then re-export from Excel, which puts quotes around the text columns), but something faster and more efficient would really help in this case.
On Friday 17 December 2010 7:46:12 am Mark Watson wrote:
> Hello all,
> Firstly, I apologise if this is not the correct list for this subject.
> Lately, I've been working on a data conversion, importing into Postgres
> using Copy From. The text file I'm copying from is produced from an ancient
> program and produces either a tab or semi-colon delimited file. One file
> contains about 1.8M rows and has a 'comments' column. The exporting
> program, which I am forced to use, does not surround this column with
> quotes and this column contains cr/lf characters, which I must deal with
> (and have dealt with) before I can import the file via Copy. Hence to my
> suggestion: I was envisioning a parameter DELIMITER_COUNT which, if one was
> 100% confident that all columns are accounted for in the input file, could
> be used to alleviate the need to deal with cr/lf's in varchar and text
> columns. i.e., if copy loaded a line with fewer delimiters than
> delimiter_count, the next line from the text file would be read and the
> assignment of columns would continue for the current row/column.
> Just curious as to the thoughts out there.
> Thanks to all for this excellent product, and a merry Christmas/holiday
> period to all.
>
> Mark Watson
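
For what it's worth, the joining step described above can be approximated outside the server today. Below is a rough sketch in Python (not the original poster's code, and the delimiter, column count, and file names are assumptions for illustration): it reads the raw export, merges any physical line that contains too few delimiters into the current record, and writes the embedded line break as the \n escape that COPY's text format understands.

    # Sketch only: pre-join lines split by embedded cr/lf before COPY,
    # mimicking the proposed DELIMITER_COUNT behaviour.
    import sys

    DELIMITER = "\t"      # assumed: tab-delimited export
    DELIMITER_COUNT = 9   # assumed: 10 columns -> 9 delimiters per record

    def join_records(lines, delimiter=DELIMITER, expected=DELIMITER_COUNT):
        """Yield logical records, merging lines broken by embedded cr/lf."""
        buffer = ""
        for raw in lines:
            line = raw.rstrip("\r\n")
            # Append to the current record until it has enough delimiters.
            # The literal \n escape is turned back into a newline by
            # COPY's text format.
            buffer = line if not buffer else buffer + "\\n" + line
            if buffer.count(delimiter) >= expected:
                yield buffer
                buffer = ""
        if buffer:
            yield buffer  # trailing partial record, if any

    if __name__ == "__main__":
        # Usage (file names are placeholders):
        #   python join_lines.py < raw_export.txt > clean.txt
        for record in join_records(sys.stdin):
            sys.stdout.write(record + "\n")

This obviously still falls over if the comments column can itself contain the delimiter character, which is exactly why a server-side DELIMITER_COUNT (or quoting from the exporting program) would be the nicer solution.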