>I am running v6.3.2 under Linux and have found that the "copy" command
>works only for small amounts of data.
I wouldn't say it works only for small amounts of data -- I've loaded over
5 million records (700+ MB) into a table with copy. I don't know how long
it took because I just let it run overnight (it built a couple of indexes,
too), but it didn't crash (running on a PPro 180 with 96 MB RAM) and it was
done by morning.
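
For reference, the load itself was nothing special -- roughly the session
below, with the table name, columns, and file path made up for
illustration (copy expects one tab-delimited row per line by default, and
copy from a file runs inside the backend, so the path is on the server
machine):

    $ psql -d mydb
    mydb=> CREATE TABLE bigtab (id int4, name text);
    mydb=> COPY bigtab FROM '/tmp/bigtab.dat';
    mydb=> CREATE INDEX bigtab_idx ON bigtab (id);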
>When trying to "copy" several
>thousand records I notice that system RAM and swap space continue to get
>eaten until there is no further memory available. "psql" then fails.
>What remains is a .../pgdata/base/XYZ file system with the table being
>copied into. That table may be several (tens, hundreds) of Meg in size,
>but a "psql -d XYZ -c 'select count(*) from table'" will only return a zero
>count.
You probably ran out of memory for the server process. Check out "limit"
(in csh) or "ulimit" (in sh) -- you should be able to bump the datasize up
to 64 MB or so (that's what mine is normally set to; I don't think I had to
adjust it for the 5-million-record table). Note that the limit has to be
raised in the shell that starts the postmaster, since it's the backend,
not psql, that eats the memory.
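
A rough sketch of what that looks like (the data directory path here is
just the common default, not necessarily yours):

    # sh/bash: raise the per-process data segment limit (value in kbytes)
    $ ulimit -d 65536
    # csh equivalent:
    % limit datasize 64m
    # then (re)start the server from that shell so the backend inherits it
    $ postmaster -D /usr/local/pgsql/data &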
>I don't know if there are any changes that can be made to speed this type
>of process up, but this is definitely a black mark.
It is kind of ugly, but it gets the job done.