From: Yeb Havinga
Subject: Re: Sync Rep and shutdown Re: Sync Rep v19
Msg-id: 4D84BE95.6040600@gmail.com
In response to: Re: Sync Rep and shutdown Re: Sync Rep v19 (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On 2011-03-18 18:25, Robert Haas wrote:
> On Fri, Mar 18, 2011 at 1:15 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> On Thu, 2011-03-17 at 09:33 -0400, Robert Haas wrote:
>>> Thanks for the review!
>> Let's have a look here...
>>
>> You've added a test inside the lock to see if there is a standby, which
>> I took out for performance reasons. Maybe there's another way, I know
>> that code is fiddly.
>>
>> You've also added back in the lock acquisition at wakeup with very
>> little justification, which was a major performance hit.
>>
>> Together that's about a >20% hit in performance in Yeb's tests. I think
>> you should spend a little time thinking how to retune that.
> Ouch.  Do you have a link that describes his testing methodology?  I
> will look at it.
Testing 'methodology' sounds a bit heavy. I tested a number of patch
versions over time, with 30-second, hourly, and nightly pgbench runs. The
nightly runs were more for durability/memory-leak testing than for tps
numbers, since I gradually got the impression that pgbench tests on
syncrep setups somehow suffer less from big run-to-run differences. The
short runs were plain pgbench, along the lines sketched below.
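
The scale factor and client counts shown here are placeholders, not the
exact values I used:

createdb bench
pgbench -i -s 50 bench               # initialize the test database
pgbench -c 10 -j 2 -T 30 bench       # 30-second run
pgbench -c 10 -j 2 -T 3600 bench     # hourly run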

The postgresql.conf and recovery.conf I used to test v17 are listed here:
http://archives.postgresql.org/pgsql-hackers/2011-02/msg02364.php
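
The recovery.conf on each standby boils down to something like this (host
and user are placeholders; the exact file is in the link above). The
application_name in primary_conninfo is what gets matched against
synchronous_standby_names on the master:

standby_mode = 'on'
primary_conninfo = 'host=master port=5432 user=repuser application_name=standby1'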

After the tests on v17 I played a bit with small memory changes in
postgresql.conf to see if the tps would go up. It went up a little, but
not enough to mention on the lists. All tests after v17 were done with the
postgresql.conf that I've copy-pasted below.

I mentioned a performance regression in
http://archives.postgresql.org/pgsql-hackers/2011-03/msg00298.php

and a performance improvement in
http://archives.postgresql.org/pgsql-hackers/2011-03/msg00464.php

All three servers (el cheapo consumer grade) are identical: triple-core
AMDs, 16GB ECC RAM, RAID 0 over 2 SATA disks, XFS with nobarrier, and
separate data and xlog partitions. NB: there is no BBU controller in these
servers; they don't run production stuff, it's just for testing. 1Gbit
ethernet on a non-blocking HP switch, no other load. Built with:

./configure --enable-depend --with-ossp-uuid --with-libxml --prefix=/mgrid/postgres

regards,
Yeb Havinga


Here are the non-default postgresql.conf settings I used after each new
initdb. (synchronous_replication is commented out because with it on I
couldn't add a replication user, so I only turned it on after the initial
base backup; that sequence is sketched after the settings.)
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------

#custom_variable_classes = ''           # list of custom variable class names

#shared_preload_libraries = 'pg_stat_statements'
#custom_variable_classes = 'pg_stat_statements'
#pg_stat_statements.max = 100
#pg_stat_statements.track = all
########
syslog_ident = relay
autovacuum = off
#debug_print_parse = on
#debug_print_rewritten = on
#debug_print_plan = on
#debug_pretty_print = on
log_error_verbosity = verbose
log_min_messages = warning
log_min_error_statement = warning
listen_addresses = '*'                # what IP address(es) to listen on;
search_path = '"$user", public, hl7'
archive_mode = on
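# 'cd .' is a no-op that always exits 0, so archive_mode can stay on
# without actually storing any WAL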
archive_command = 'cd .'
checkpoint_completion_target = 0.9
checkpoint_segments = 16
default_statistics_target = 500
constraint_exclusion = on
max_connections = 100
maintenance_work_mem = 528MB
effective_cache_size = 5GB
work_mem = 144MB
wal_buffers = 8MB
shared_buffers = 528MB
wal_level = 'archive'
max_wal_senders = 10
wal_keep_segments = 100 # 1600MB (for production increase this)
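# standbys identify themselves via application_name in primary_conninfo;
# the names listed here are the candidates for sync standby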
synchronous_standby_names = 'standby1,standby2,standby3'
#synchronous_replication = on
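
The initial setup sequence I mentioned above was roughly the following
(host, role name and password are placeholders; the data directory
follows from the --prefix above):

# create the replication user while synchronous_replication is still off;
# with it on and no standby connected, the commit would wait forever
psql -c "CREATE ROLE repuser REPLICATION LOGIN PASSWORD 'secret'"

# low-level base backup to each standby (9.1-era style)
psql -c "SELECT pg_start_backup('clone')"
rsync -a --exclude pg_xlog /mgrid/postgres/data/ standby1:/mgrid/postgres/data/
psql -c "SELECT pg_stop_backup()"

# add recovery.conf on each standby and start it; then uncomment
# synchronous_replication = on on the master and reload
pg_ctl -D /mgrid/postgres/data reload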
