Thank you for your answer. I no longer have access to that server, so I have
to interpret the test results I already have and pick parameters from them,
but I will definitely try your tar trick in the next test.
So, since I have no chance to run more tests, should I change the
checkpoint_segments parameter? My colleagues prefer the old setting shown
below for maintenance reasons, but I would still like to convince them to use
a much higher value; 30 segments seems far too modest for a machine like that.
checkpoint_segments = 30
checkpoint_timeout = 8min
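
Just to make the question concrete, a "much higher" setting could look
something like the sketch below. The value 256 is only an example I picked
for illustration (nobody in this thread suggested it), and the disk estimate
assumes 16MB segments and the usual upper bound of roughly
(2 + checkpoint_completion_target) * checkpoint_segments WAL files:

checkpoint_segments = 256            # example only: ~4GB of WAL between checkpoints
checkpoint_timeout = 15min           # let the timeout, not the segment count, trigger most checkpoints
checkpoint_completion_target = 0.9   # spread checkpoint I/O over most of the interval
# worst-case pg_xlog size ~ (2 + 0.9) * 256 * 16MB, i.e. roughly 11-12GB

Assuming disk space for pg_xlog isn't tight, that looks like a small price
for fewer forced checkpoints during a heavy load.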
The rest of the config looks like this:
shared_buffers=2GB
temp_buffers=128MB
max_files_per_process=1000
work_mem=384MB
maintenance_work_mem=10240MB
effective_io_concurrency=1
synchronous_commit=on
wal_buffers=16MB
wal_writer_delay=200ms
commit_delay=0
commit_siblings=5
random_page_cost=1.0
cpu_tuple_cost=0.01
effective_cache_size=450GB
geqo_threshold=12
geqo_effort=5
geqo_selection_bias=2.0
join_collapse_limit=8
Any ideas about the rest of the config? Maybe the settings connected with write operations?
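
For reference, the write-side parameters I have in mind are the background
writer and checkpoint logging ones; the values below are just the stock 9.x
defaults for comparison, not taken from our config:

bgwriter_delay = 200ms          # default; how often the background writer wakes up
bgwriter_lru_maxpages = 100     # default; max buffers written per round
bgwriter_lru_multiplier = 2.0   # default; how far it tries to stay ahead of demand
log_checkpoints = off           # default; when on, each checkpoint is logged with its trigger reason

If nothing else, log_checkpoints = on in the next test should show whether
the 30 segments are actually forcing checkpoints ahead of the timeout.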