Re: driving postgres to achieve benchmark results similar to bonnie++ - Mailing list pgsql-admin
From: Mike Broers
Subject: Re: driving postgres to achieve benchmark results similar to bonnie++
Msg-id: CAB9893i8zC87KZkhPEKD4vYi=53-YXgTZGBjxx-R5GQKAyY_jA@mail.gmail.com
In response to: Re: driving postgres to achieve benchmark results similar to bonnie++ (Scott Whitney <scott@journyx.com>)
Responses:
  Re: driving postgres to achieve benchmark results similar to bonnie++
  Re: driving postgres to achieve benchmark results similar to bonnie++
List: pgsql-admin
What is your connection to your SAN?

Sent via the Samsung GALAXY S®4, an AT&T 4G LTE smartphone
-------- Original message --------
From: Mike Broers <mbroers@gmail.com>
Date: 05/10/2016 1:29 PM (GMT-06:00)
To: John Scalia <jayknowsunix@gmail.com>
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] driving postgres to achieve benchmark results similar to bonnie++

I've made those changes and seen negligible improvements (an increase in reads of about 10MB/sec, from 99MB/sec to 112MB/sec), so they didn't hurt, but they didn't exactly unleash huge gains.

Any idea of a way I can really push the postgres server to see numbers closer to the bonnie++ results, or an explanation of the ceiling?

On Tue, May 10, 2016 at 12:08 PM, Mike Broers <mbroers@gmail.com> wrote:

Thanks for the feedback, I'll update those configs, run some more tests, and follow up.

On Tue, May 10, 2016 at 12:01 PM, John Scalia <jayknowsunix@gmail.com> wrote:
Sent from my iPad

Well, I'm no expert with Bonnie, but several of your PostgreSQL settings look incorrect given the system config you provided. With 12GB of RAM, shared_buffers should probably be closer to at least 3072MB, if you follow the general suggestion of setting it to 1/4 of your available RAM. Also, your max_connections setting looks awfully high. Do you really need 1024 connections? If so, I suggest you look into connection pooling software such as pgpool-II.

I'm having trouble getting postgres to drive enough disk activity to get even close to the disk benchmark numbers I'm getting with bonnie++. We have an SSD SAN, and the xlog is on its own SSD volume as well; this is postgres 9.5 running on CentOS 6.

bonnie++ -n 0 -f -b is the command I'm running, pointing at either the primary data or the xlog location. I'm consistently seeing numbers like this:
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite-  -Per Chr- --Block-- --Seeks--
             Size   K/sec %CP K/sec %CP K/sec %CP  K/sec %CP K/sec %CP  /sec %CP
           23808M             786274 92 157465  29           316097 17  5751 16
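Spelled out fully, those runs would look something like this; the mount points are placeholders for the actual data and xlog volumes, bonnie++ sizes its test file to twice RAM by default, and -u is needed when running as root:

# Benchmark the data volume, then the xlog volume (paths are placeholders)
bonnie++ -d /var/lib/pgsql/9.5/data -n 0 -f -b -u postgres
bonnie++ -d /pg_xlog -n 0 -f -b -u postgres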
So during bonnie++ tests I've confirmed in our monitoring that writes peak at 700-800 MB/sec and reads peak around 280-300 MB/sec.
We have 12GB RAM on the server. When I run pgbench at a scale that puts the pgbench database in the realm of 18GB-25GB, I barely break 110MB/sec writes and 80MB/sec reads. I've run with different options, such as unlogged versus logged tables, prepared transactions or not, and transaction counts between 1000 and 40000.
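For concreteness, a run in that realm would look something like the following sketch; the scale, client counts, and the "bench" database name are illustrative, and one pgbench scale unit is roughly 15MB on disk, so a scale near 1500 lands in the 18GB-25GB range described:

# Initialize a roughly 22GB pgbench database, then drive it with
# 16 clients across 4 worker threads using prepared statements
pgbench -i -s 1500 bench
pgbench -c 16 -j 4 -M prepared -t 40000 bench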
I thought a parallel pg_dump / restore might also drive disk, but that doesn't push disk throughput either, topping out around 75MB/sec read. Nightly vacuums also seem to peak below 110MB/sec reads.
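A parallel dump/restore of the sort described would be along these lines; the database names and paths are placeholders, and -j parallelism requires the directory output format for pg_dump:

# Directory-format dump and restore, each with 4 parallel workers
pg_dump -Fd -j 4 -f /backups/mydb.dir mydb
pg_restore -j 4 -d mydb_restored /backups/mydb.dir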
Here are the non-default pg settings:
max_connections = 1024
shared_buffers = 1024MB
wal_buffers = 16MB
checkpoint_completion_target = '.9'
archive_mode = on
random_page_cost = '1.5'
maintenance_work_mem = 512MB
work_mem = 64MB
max_wal_senders = 5
checkpoint_timeout = 10min
effective_io_concurrency = 4
effective_cache_size = 8GB
wal_keep_segments = 512
wal_level = hot_standby
synchronous_commit = off

Any idea if/why postgres might be bottlenecking disk throughput? Or is there a method of testing from within postgres, to get something closer to the bonnie++ levels, that I'm missing? I'm guessing I'm just not driving enough activity to push it to the limit, but I'm not sure of a straightforward method to verify this.
Thanks,
Mike
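One rough way to verify whether postgres itself can sustain higher sequential reads is to force a full scan of a table larger than RAM while watching disk throughput; a minimal sketch, assuming a pgbench database named "bench" from a large-scale run like the one above:

# Scan more data than fits in the 12GB of RAM, and watch throughput
# in another terminal with: iostat -xm 1
psql -d bench -c "SELECT count(*) FROM pgbench_accounts;"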
With an SSD drive, I also doubt your random page cost is really that high. With that setting, you may be forcing a lot of queries to use sequential access rather than random access. Finally, increase the two work mem settings; I can't recommend specific values without knowing what kind of queries you're running.

--Jay
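Taken together, these suggestions could be applied along the following lines; the values shown are illustrative rather than recommendations from the thread, and the sketch assumes superuser access on 9.5, which supports ALTER SYSTEM:

# Illustrative application of the tuning suggestions above;
# shared_buffers only takes effect after a postgres restart
psql -c "ALTER SYSTEM SET shared_buffers = '3GB';"
psql -c "ALTER SYSTEM SET random_page_cost = 1.1;"   # 1.1 is a common SSD value, not a figure from this thread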