Shams Khan wrote:
> *Need to increase the response time of running queries on
> server...*
> 8 CPU's and 16 cores
> [64GB RAM]
> HDD 200GB
> Database size = 40GB
Without more info, there's a bit of guesswork, but...
> maintenance_work_mem = Not initialised
I would say probably 1GB
> effective_cache_size = Not initialised
48GB
> work_mem = Not initialised
You could probably go 100MB on this.
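Keep in mind that work_mem applies per sort or hash operation, not
per connection, so the total can add up with 100 connections. If
only some queries need the big sorts, you can also set it per
session instead of globally; for example (using the value suggested
above):

  SET work_mem = '100MB';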
> wal_buffers = 8MB
16MB
> checkpoint_segments = 16
Higher. Probably not more than 128.
> shared_buffers = 32MB (have read should 20% of Physical memory)
16GB to start. If you have episodes of high latency, where even
queries which normally run very quickly all pause and then all
complete close together after a delay, you may need to reduce this
and/or increase the aggressiveness of the background writer. I've
had to go as low as 1GB to overcome such latency spikes.
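If you do need a more aggressive background writer, these are the
knobs to turn (the values are illustrative starting points, not
something I've tested against your workload):

  bgwriter_delay = 100ms           # default 200ms; wake up more often
  bgwriter_lru_maxpages = 400      # default 100; write more pages per round
  bgwriter_lru_multiplier = 4.0    # default 2.0; stay further ahead of demand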
> max_connections = 100
Maybe leave alone, possibly reduce. You should be aiming to use a
pool to keep about 20 database connections busy. If you can't do
that in the app, look at pgbouncer.
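A minimal pgbouncer.ini sketch, just to show the shape of it (the
database name, paths, and port here are placeholders for your own
setup):

  [databases]
  mydb = host=127.0.0.1 port=5432 dbname=mydb

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction
  default_pool_size = 20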
> checkpoint_completion_target = Not initialised
It is often wise to increase this to 0.8 or 0.9
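Putting the above together, a postgresql.conf starting point would
look something like this (starting values to tune from, not magic
numbers):

  maintenance_work_mem = 1GB
  effective_cache_size = 48GB
  work_mem = 100MB
  wal_buffers = 16MB
  checkpoint_segments = 128
  checkpoint_completion_target = 0.9
  shared_buffers = 16GB
  max_connections = 100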
If I read this right, you have one 200GB drive for writes? That's
going to be your bottleneck if you write much data. You need a RAID
for both performance and reliability, with a good controller with
battery-backed cache configured for write-back. Until you have one
you can reduce the performance hit by setting
synchronous_commit = off. The trade-off is that there will be a
slight delay between when PostgreSQL acknowledges a commit and when
the data is actually persisted, so a crash can lose the last few
commits.
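If some of your commits can't tolerate that risk, you don't have to
turn it off globally; it can be disabled per transaction, something
like:

  BEGIN;
  SET LOCAL synchronous_commit = off;
  -- ... writes that can tolerate losing the last few commits ...
  COMMIT;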
-Kevin