Kasia Tuszynska <ktuszynska@esri.com> writes:
> PG: 8.3.8
> 2010-08-18 20:29:42 PDT ERROR: out of memory
> 2010-08-18 20:29:42 PDT DETAIL: Failed on request of size 134217728.
What command is this failure occurring in? There aren't that many that
would try to inhale your full work_mem in one gulp like that.
The short answer here is probably that 32-bit Windows is not the
platform to be trying to do memory-intensive work on, and you ought to
get yourself a real operating system and/or a 64-bit machine. You could
make it run within Windows' restrictions by cutting work_mem,
maintenance_work_mem, and/or shared_buffers, but given that it's already
taking a ridiculously long time, that doesn't seem like a path to a
solution.
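To make that concrete, here is a sketch of what "cutting" those settings might look like in postgresql.conf. The numbers are illustrative only, not recommendations; the right values depend on the workload and on how much contiguous address space the 32-bit server process can actually get:

```
# postgresql.conf -- illustrative values only, not recommendations.
# On 32-bit Windows the server's usable address space is limited, so
# large per-operation allocations can fail even with free physical RAM.
shared_buffers = 256MB         # keep modest on a 32-bit build
work_mem = 16MB                # per-sort / per-hash allocation
maintenance_work_mem = 64MB    # used by CREATE INDEX, ADD FOREIGN KEY, VACUUM
```

Note that the failed request above (134217728 bytes = 128MB) is exactly the kind of single large allocation a big work_mem or maintenance_work_mem setting would provoke.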
Alternatively, you might be able to optimize the data load process a
bit. Have you read
http://www.postgresql.org/docs/8.3/static/populate.html
? I'm suspicious in particular that you might be trying to enforce
foreign keys incrementally, which is usually a bad approach for large
data loads.
Also, I'd advise rethinking the choice of PG 8.3.x if possible.
There are relevant performance improvements in 8.4.x.
> Client: In house application, I checked to see if it is not doing anything silly, it seems to be doing everything as
> expected, indexes are created after a table has loaded, we commit every 1000 inserts...
Committing after every few inserts is *not* an optimization for
Postgres. Try kicking that number up a few orders of magnitude.
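In other words, the load loop should look less like one transaction per thousand rows and more like this sketch (table name and batch size are made up for illustration):

```sql
-- Sketch only: wrap a much larger batch in a single transaction.
BEGIN;
INSERT INTO items VALUES (...);   -- repeat for ~100000+ rows
-- ...
COMMIT;
```

Each COMMIT forces a WAL flush, so committing every 1000 rows pays that cost a thousand times more often than committing every million.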
regards, tom lane