I have PostgreSQL version 9.1, and these are my configuration settings:
max_connections = 600
shared_buffers = 1024MB
work_mem = 4MB
All other parameters are left at the defaults shipped with PostgreSQL.
I installed it on SUSE Linux Enterprise Server and set kernel.shmmax to 2 GB in sysctl.conf, because with only 1 GB PostgreSQL would not start.
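(For reference, a 2 GB kernel.shmmax in /etc/sysctl.conf looks like the lines below; the kernel.shmall value is an illustrative companion setting, counted in 4 kB pages, so 524288 pages = 2 GB. It takes effect after running sysctl -p.)

    kernel.shmmax = 2147483648    # largest single shared memory segment, in bytes (2 GB)
    kernel.shmall = 524288        # total shared memory allowed, in 4 kB pages (2 GB)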
The server has 8 GB of RAM in total.
But when active connections reach about 300, we hit out-of-memory errors: checking the system with top shows 7.8 GB of the 8 GB in use, and the server starts swapping. At that point no new connections are possible.
I have the connection pooler pgpool-II in front of the database, and until now everything with it has worked fine.
Keep in mind that work_mem is allocated per sort or hash operation in each backend, in addition to shared_buffers.
With work_mem = 4MB and a possible 600 connections max, that could take up an additional 2.3 GB (600 × 4 MB = 2400 MB) if all connections were in use and running simple queries; a complex query can allocate work_mem several times over, once per sort or hash step.
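As a quick sanity check (standard catalog queries, nothing specific to your setup), you can watch how many backends are connected and how many are actually working:

    SELECT count(*) FROM pg_stat_activity;                                  -- total connected backends
    SELECT count(*) FROM pg_stat_activity WHERE current_query <> '<IDLE>';  -- busy backends (9.1 still uses current_query; idle shows '<IDLE>')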
Are you running pgpool on the same server? If so, 8 GB doesn't sound like enough memory for what you're trying to do if your concurrent connection count is going that high.
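If you want a sketch of how the numbers could be brought back in line (num_init_children and max_pool are real pgpool-II parameters, but these particular values are only illustrative), cap sessions at the pooler and size max_connections to match:

    # pgpool.conf
    num_init_children = 100   # max concurrent client sessions pgpool will accept
    max_pool = 2              # cached backend connections per pgpool child

    # postgresql.conf
    max_connections = 210     # >= num_init_children * max_pool, plus a few reserved slots

With pgpool-II in front, PostgreSQL needs max_connections of at least num_init_children * max_pool (plus superuser_reserved_connections), so lowering the pooler limits lets you shrink max_connections, and the worst-case work_mem footprint along with it.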