hi guys,
We have a single Ubuntu 10.04 box on which we will be running a PostgreSQL 8.4 server, a Ruby on Rails application under Passenger, and a Solr search server.
I am looking at ways to optimize the Postgres database while limiting the amount of memory it can consume.
I am going to set shared_buffers to 256MB, work_mem to 12MB, and temp_buffers to 20MB (on a 4GB machine).
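Roughly, the relevant lines in postgresql.conf would look like this (just a sketch of what I am planning; the values are my starting guesses, not tested settings):

    # postgresql.conf (sketch, 4GB box)
    shared_buffers = 256MB    # shared cache for data pages
    work_mem       = 12MB     # per sort/hash operation, per backend
    temp_buffers   = 20MB     # per session, for temporary tables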
Now, the "effective cache size" variable seems more of a hint to the query planner, than any hard limit on the database server.
Q1. If I add "ulimit -m" and "ulimit -v" lines to my Postgres upstart file, will that be enough to hard-limit Postgres's memory usage?
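For example, something like the following (a sketch only -- I am assuming the packaged binary and data paths, and I am not sure whether shell ulimits in the script stanza or upstart's own limit stanza is the right way to do this):

    # /etc/init/postgresql.conf  (upstart job, sketch)
    script
        ulimit -v 2097152    # cap virtual memory at ~2GB (value is in KB)
        ulimit -m 2097152    # cap resident set size (value is in KB)
        exec /usr/lib/postgresql/8.4/bin/postgres \
             -D /var/lib/postgresql/8.4/main \
             -c config_file=/etc/postgresql/8.4/main/postgresql.conf
    end script

    # or, alternatively, upstart's built-in limit stanza:
    # limit as 2147483648 2147483648    # address-space soft/hard limit, in bytes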
Q2. Once I have decided on my maximum memory allocation (call it MY_ULIMIT), should effective_cache_size be set to MY_ULIMIT - 256MB - 12MB - 20MB? Or rounded off to MY_ULIMIT - 512MB, maybe?
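To make that concrete (purely an example, assuming MY_ULIMIT is 2GB):

    MY_ULIMIT            = 2048MB
    shared_buffers       =  256MB
    work_mem             =   12MB
    temp_buffers         =   20MB

    effective_cache_size = 2048 - 256 - 12 - 20 = 1760MB
                           (or ~1536MB if rounded to MY_ULIMIT - 512MB)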
Q3. Or will doing something like this play havoc with the query planner, or lead to unexplained OOM kills and crashes? I ask because there are other variables that I am not sure will play nicely with a ulimit:
1. Will this affect the memory usage of vacuum (I am going to use the default vacuum settings for 8.4)? Ideally I would want some control over that as well.
2. Would I have to tune max_connections, max_files_per_process, and any related variables?
3. When I turn on WAL, would I have to tune wal_buffers accordingly, and set effective_cache_size to account for wal_buffers as well? (See the sketch after this list.)
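For reference, these are the related knobs I have in mind, again only as a sketch with placeholder values (not settings I have tested):

    # postgresql.conf (sketch, placeholder values)
    maintenance_work_mem  = 64MB    # used by VACUUM, CREATE INDEX, etc.
    max_connections       = 100     # each backend can use its own work_mem/temp_buffers
    max_files_per_process = 1000    # 8.4 default
    wal_buffers           = 1MB     # WAL buffer in shared memory, on top of shared_buffers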
thanks
-Sandeep