Thread: 9.2.3 upgrade reduced pgbench performance by 60%

9.2.3 upgrade reduced pgbench performance by 60%

From: Colin Currie
Hi,

I recently upgraded PostgreSQL from 9.0.12 to 9.2.3 on a test server to compare performance. I'm using pgbench to
measure performance, and the results show around a 60% reduction.

The non-default configuration remains identical between versions, except for archive_command (different location) and
custom_variable_classes (no longer supported), and is detailed below. Is there some updated default configuration
that I'm missing? Perhaps it's because of the new cascading replication feature? I've tried tweaking the memory settings
to no avail.

The Linux server is in a cloud environment with 4GB RAM and 2 CPUs, and the same server is running both master and slave (these
are separate in production). If you'd like any more details, please ask. Here are the pgbench results:


PostgreSQL 9.0.12
-----------------
/usr/pgsql-9.0/bin/pgbench -c 4 -t 20000 pgbench

starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
number of transactions per client: 20000
number of transactions actually processed: 80000/80000
tps = 140.784635 (including connections establishing)
tps = 140.789389 (excluding connections establishing)

starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
number of transactions per client: 20000
number of transactions actually processed: 80000/80000
tps = 142.027320 (including connections establishing)
tps = 142.032815 (excluding connections establishing)

starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
number of transactions per client: 20000
number of transactions actually processed: 80000/80000
tps = 150.745875 (including connections establishing)
tps = 150.750959 (excluding connections establishing)


PostgreSQL 9.2.3
-----------------
/usr/pgsql-9.2/bin/pgbench -c 4 -t 20000 pgbench

starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
number of transactions per client: 20000
number of transactions actually processed: 80000/80000
tps = 60.273767 (including connections establishing)
tps = 60.274429 (excluding connections establishing)

starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
number of transactions per client: 20000
number of transactions actually processed: 80000/80000
tps = 57.634077 (including connections establishing)
tps = 57.634847 (excluding connections establishing)

starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
number of transactions per client: 20000
number of transactions actually processed: 80000/80000
tps = 60.516492 (including connections establishing)
tps = 60.517250 (excluding connections establishing)


The non-default configuration items
-----------------
listen_addresses = '*'                # what IP address(es) to listen on;
max_connections = 100            # (change requires restart)
shared_buffers = 256MB            # min 128kB
temp_buffers = 128MB            # min 800kB
work_mem = 32MB                # min 64kB
maintenance_work_mem = 128MB        # min 1MB
max_stack_depth = 8MB            # min 100kB
wal_level = hot_standby            # minimal, archive, or hot_standby
checkpoint_segments = 3        # in logfile segments, min 1, 16MB each
checkpoint_completion_target = 0.9    # checkpoint target duration, 0.0 - 1.0
archive_mode = on        # allows archiving to be done
archive_command = 'cp -f %p /var/lib/pgsql/9.0/archive/%f </dev/null'        # command to use to archive a logfile segment
max_wal_senders = 10        # max number of walsender processes
effective_cache_size = 1GB
default_statistics_target = 2000    # range 1-10000
log_destination = 'stderr'        # Valid values are combinations of
logging_collector = on            # Enable capturing of stderr and csvlog
log_directory = 'pg_log'        # directory where log files are written,
log_filename = 'postgresql-%a.log'    # log file name pattern,
log_truncate_on_rotation = on        # If on, an existing log file of the
log_rotation_age = 1d            # Automatic rotation of logfiles will
log_rotation_size = 0            # Automatic rotation of logfiles will
log_min_duration_statement = 5000    # -1 is disabled, 0 logs all statements
log_checkpoints = on
log_line_prefix = '%t:%r:%u@%d:[%p]: '    # special values:
log_statement = 'none'            # none, ddl, mod, all
autovacuum = on # changed by pgb_test for pgbench testing
log_autovacuum_min_duration = 100    # -1 disables, 0 logs all actions and
datestyle = 'iso, mdy'
lc_messages = 'en_US.UTF-8'            # locale for system error message
lc_monetary = 'en_US.UTF-8'            # locale for monetary formatting
lc_numeric = 'en_US.UTF-8'            # locale for number formatting
lc_time = 'en_US.UTF-8'                # locale for time formatting
default_text_search_config = 'pg_catalog.english'
shared_preload_libraries = 'auto_explain'
custom_variable_classes = 'auto_explain'
auto_explain.log_min_duration = '15s'
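
A quick way to confirm the non-default settings above on a live instance (sketch only; the database name and connection options are assumptions to adjust):

/usr/pgsql-9.2/bin/psql -U postgres -d pgbench -c "SELECT name, setting FROM pg_settings WHERE source NOT IN ('default', 'override') ORDER BY name;"

Running the same query against the 9.0 instance makes it easy to diff the two lists.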

Re: 9.2.3 upgrade reduced pgbench performance by 60%

From: Josh Berkus
> The Linux server is on a cloud and has 4GB RAM and 2 CPUs and the
> same server is running both master and slave (these are separate in
> production). If you'd like any more details please ask. Here are the
> pgbench results:

Presumably you created a new cloud server for 9.2, yes?  I'd guess that
the difference is between the two cloud servers.  Try testing, for
example, bonnie++ on the two servers.
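
A minimal sketch of that comparison (run the same command on both hosts; the directory, RAM size, and user are assumptions to adjust):

# -s should be roughly twice RAM (in MiB) so the page cache can't absorb the test;
# -n 0 skips the small-file creation phase
bonnie++ -d /var/lib/pgsql -s 8192 -r 4096 -n 0 -u postgres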

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


Re: 9.2.3 upgrade reduced pgbench performance by 60%

From: "Nicholson, Brad (Toronto, ON, CA)"
> -----Original Message-----
> From: pgsql-performance-owner@postgresql.org [mailto:pgsql-
> performance-owner@postgresql.org] On Behalf Of Josh Berkus
> Sent: Monday, March 25, 2013 4:34 PM
> To: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] 9.2.3 upgrade reduced pgbench performance by
> 60%
> 
> 
> > The Linux server is on a cloud and has 4GB RAM and 2 CPUs and the same
> > server is running both master and slave (these are separate in
> > production). If you'd like any more details please ask. Here are the
> > pgbench results:
> 
> Presumably you created a new cloud server for 9.2, yes?  I'd guess that the
> difference is between the two cloud servers.  Try testing, for example,
> bonnie++ on the two servers.
> 

I saw some similar results comparing 9.0 and 9.2 pgbench tests.  My tests were on a VM, but on a dedicated host/hardware
with no other VMs running on it to minimize variables. I didn't have a lot of time to dig into it, but I do recall
seeing more lock contention on updates on the 9.2 instance though.
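
For reference, one rough way to spot that kind of contention while pgbench is running (sketch only; on 9.2 the columns are pid/query, on 9.0 they were procpid/current_query):

# list sessions currently waiting on ungranted locks
psql -d pgbench -c "SELECT a.pid, a.waiting, l.locktype, l.mode, a.query FROM pg_stat_activity a JOIN pg_locks l ON l.pid = a.pid WHERE NOT l.granted;"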
 


Brad. 

Re: 9.2.3 upgrade reduced pgbench performance by 60%

From: Josh Berkus
>
> I saw some similar results comparing 9.0 and 9.2 pgbench tests.  My tests
> were on a VM, but on a dedicated host/hardware with no other VMs running on
> it to minimize variables. I didn't have a lot of time to dig into it, but I
> do recall seeing more lock contention on updates on the 9.2 instance though.

Ah, good point.  At that scale, Colin's test is mostly a contention
test.  There's something there worth investigating, but it's not a
realistic use case.

Colin, please try your pgbench tests with -s 100.
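
That is, re-initialize at scale factor 100 and repeat the run on each version; a sketch using the commands from the original post (9.2 shown, 9.0 is analogous). The scale factor only takes effect at initialization with -i; the benchmark run then picks it up from the tables:

/usr/pgsql-9.2/bin/pgbench -i -s 100 pgbench
/usr/pgsql-9.2/bin/pgbench -c 4 -t 20000 pgbench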

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


Re: 9.2.3 upgrade reduced pgbench performance by 60%

From: Jeff Janes
On Monday, March 25, 2013, Colin Currie wrote:
> Hi,
>
> I recently upgraded PostgreSQL from 9.0.12 to 9.2.3 on a test server to compare performance. I'm using pgbench to measure performance, and the results show around a 60% reduction.
>
> The non-default configuration remains identical between versions, except for archive_command (different location) and custom_variable_classes (no longer supported), and is detailed below. Is there some updated default configuration that I'm missing? Perhaps it's because of the new cascading replication feature? I've tried tweaking the memory settings to no avail.
>
> The Linux server is in a cloud environment with 4GB RAM and 2 CPUs, and the same server is running both master and slave (these are separate in production).

What does your recovery.conf look like?  What if you don't run the slave at all, then how do they compare?
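
For context, a typical streaming-standby recovery.conf on 9.x looks roughly like this (the connection values are placeholders, not Colin's actual settings):

standby_mode = 'on'
primary_conninfo = 'host=127.0.0.1 port=5432 user=replication'

To test with the slave out of the picture, the standby cluster can simply be stopped before re-running pgbench on the master, e.g.:

/usr/pgsql-9.2/bin/pg_ctl -D /path/to/standby/data stop -m fast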

 Cheers,

Jeff