In article <10FE17AD5F7ED31188CE002048406DE8514CEE@lsv-msg06.stortek.com>,
"Creager, Robert S" <CreagRS@louisville.stortek.com> wrote:
> I think this is a question regarding the backend, but...
[snip]
> (COPY u FROM stdin). The backend process that handles the db connection
> starts allocating a whole lot of memory, albeit in a nice, controlled
> manner. The backend starts out using 6.5MB, and at 25000 records
> copied it has taken 10MB and has slowed down substantially. Needless
> to say, this COPY will not finish before running out of memory
> (estimated 300MB). When executing the COPY into the loc table, this
> problem does not occur. Am I going to have to resort to INSERTs for
> the referring tables?
I can't answer the backend question, but how about running
'split' on the big file and then COPYing the smaller pieces one at a
time? Each COPY then runs as its own command, so whatever memory the
backend accumulates per statement should have a chance to be released
between chunks (assuming that is where the growth is coming from).
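
Something along these lines, as a rough sketch (the database name,
data file name, and chunk size here are placeholders):

    # Split the data file into 10000-line chunks: u_aa, u_ab, ...
    split -l 10000 u.dat u_

    # Load each chunk separately; every psql invocation opens a
    # fresh connection, so each COPY starts with a clean backend.
    for f in u_*; do
        psql -d mydb -c "COPY u FROM stdin" < "$f"
    done

You pay a little connection-startup overhead per chunk, but the
backend's memory use stays bounded by the size of a single chunk.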
Gordon.
--
It doesn't get any easier, you just go faster.
-- Greg LeMond