Thanks for including your configuration and version; it makes things much easier.
Reply follows inline.
On 11/06/2012 09:04 PM, Bryan Montgomery wrote:
> I'm wondering what general ways there are to monitor and improve performance? We look at pgadmin's server status but that only sees the function being run. Additionally, is there a good way to 'explain' a function? Or do you have to execute the function steps individually and explain those?
See the auto_explain contrib module. It can explain the statements run inside functions, as well as the function calls themselves:
http://www.postgresql.org/docs/current/static/auto-explain.html
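For example, to capture the plans of the statements a function runs, a session-level sketch (my_function stands in for one of your own functions):

    -- LOAD requires superuser; set shared_preload_libraries in
    -- postgresql.conf instead to enable auto_explain server-wide.
    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;        -- log every statement's plan
    SET auto_explain.log_nested_statements = on;  -- include statements inside functions
    SELECT my_function();  -- hypothetical; its inner plans appear in the server log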
> The server typically has up to 500 connections with a max of 750 connections.
Get a connection pooler. Urgently. See http://wiki.postgresql.org/wiki/PgBouncer . It is extremely unlikely that your server is running efficiently with that many concurrent connections actively working. Reducing it to (say) 100 and using transaction-level connection pooling may boost performance significantly.
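A minimal pgbouncer.ini sketch along those lines; the database name, paths, and pool sizes are illustrative, not tuned for your workload:

    [databases]
    ; route clients for "mydb" to the local PostgreSQL server
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction      ; transaction-level pooling
    max_client_conn = 750        ; client connections PgBouncer will accept
    default_pool_size = 100      ; actual server connections per database/user pair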
> work_mem = 512MB # min 64kB
That's really dangerous with your connection count. work_mem is a per-sort/per-hash limit, not a per-server one, so a single complex query can use several multiples of it; if many of your hundreds of connections actually use that much at once, you'll run out of RAM in a hurry and land in a nasty paging storm. If possible, reduce it globally, then raise it selectively in the transactions where you know a high work_mem is needed.
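For example (the 512MB figure simply mirrors your current global setting; big_table and its sort are hypothetical):

    BEGIN;
    SET LOCAL work_mem = '512MB';  -- applies only to this transaction,
                                   -- reverts at COMMIT or ROLLBACK
    SELECT * FROM big_table ORDER BY some_column;  -- the query that needs the memory
    COMMIT;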
> fsync = off # turns forced synchronization on or off
So you don't value your data and don't mind if you lose all of it, permanently and unrecoverably, if your server loses power or the host OS hard crashes?
It's much safer to use `synchronous_commit = off` and a commit_delay. If that isn't enough, get fast-flushing storage: a good RAID controller with a battery-backed cache you can put in write-back mode, or high-quality SSDs with power-protected write caches.
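In postgresql.conf that might look like the following; the commit_delay value is an illustrative starting point to experiment with, not a recommendation:

    fsync = on                 # keep crash safety on
    full_page_writes = on
    synchronous_commit = off   # commits return before the WAL is flushed; a crash
                               # may lose the last few commits, but the database
                               # stays consistent
    commit_delay = 1000        # microseconds; lets concurrent commits share a flush
    commit_siblings = 5        # only delay when >= 5 other transactions are active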
> full_page_writes = off # recover from partial page writes
As above: I hope your data isn't important to you. With full_page_writes off, a crash partway through a page write can leave the database corrupted in ways crash recovery cannot repair, unless your storage guarantees atomic page writes.

--
Craig Ringer