Thread: ~400 TPS - good or bad?
Hello,

We are trying to optimize our box for PostgreSQL. We have an i7, 8 GB of RAM, and 2x SATA drives in software RAID1, running the XFS filesystem. We are running PostgreSQL and memcached on that box.

Without any optimizations (just an edited PG config) we got 50 TPS with the pgbench default run (1 client / 10 transactions). Then we added logbufs=8 and nobarrier to the /home partition (where PGDATA is). With that fs setup, TPS in the default test is unstable, 150-300 TPS. So we tested with -c 100 -t 10 and got a stable ~400 TPS.

The question is: is that a decent result, or can we get much more from Postgres on this box setup? If yes, what do we need to do? We are running Gentoo.

Here's our config: http://paste.pocoo.org/show/224393/

PS. pgbench scale is set to "1".

--
Greetings,
Szymon
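For reference, the XFS mount option is spelled logbufs (plural). A sketch of an /etc/fstab entry with those options might look like the following; the device and mount point here are examples, not taken from the original post. Note that nobarrier trades crash safety for speed unless the drives' write cache is non-volatile (e.g. battery-backed).

```
# /etc/fstab entry (sketch; /dev/md0 and /home are example values)
# logbufs=8 enlarges XFS's in-memory log buffers; nobarrier disables
# write barriers -- only safe with a protected/battery-backed write cache.
/dev/md0  /home  xfs  defaults,logbufs=8,nobarrier  0  2
```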
2010/6/12 Szymon Kosok <szymon@mwg.pl>:
> PS. pgbench scale is set to "1".

I've found in the mailing list archives that scale = 1 is not a good idea. So we ran pgbench -s 200 (our database is ~3 GB) -c 10 -t 3000 and got about ~600 TPS. Good or bad?

--
Greetings,
Szymon
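A note on the scale factor: pgbench applies -s when initializing the test tables with -i; during a plain run the scale is read back from the generated tables. A run at scale 200 would be set up roughly like this (the database name "pgbench" is an assumption):

```
# Initialize the test database at scale factor 200 (~3 GB of data).
pgbench -i -s 200 pgbench

# Run the benchmark: 10 clients, 3000 transactions each.
pgbench -c 10 -t 3000 pgbench
```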
On Sat, Jun 12, 2010 at 8:37 AM, Szymon Kosok <szymon@mwg.pl> wrote:
> 2010/6/12 Szymon Kosok <szymon@mwg.pl>:
>> PS. pgbench scale is set to "1".
>
> I've found in the mailing list archives that scale = 1 is not a good
> idea. So we ran pgbench -s 200 (our database is ~3 GB) -c 10 -t 3000
> and got about ~600 TPS. Good or bad?

You are being bound by the performance of your disk drives. Since you have 8 GB of RAM, your database fits in memory once the cache warms up. To confirm this, try running a select-only test with a longer transaction count:

pgbench -c 10 -t 10000 -S

and compare the results. If you get much higher results (you should), then we know for sure where the problem is. Your main lines of attack on fixing disk performance issues are going to be:

*) simply dealing with 400-600 TPS
*) getting more/faster disk drives
*) doing some speed/safety tradeoffs, for example synchronous_commit

merlin
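As a sketch of the speed/safety tradeoff mentioned above: turning synchronous_commit off lets commits return before the WAL is flushed to disk, so a crash can lose the last few hundred milliseconds of "committed" transactions, but it cannot corrupt the database.

```
# postgresql.conf (sketch): report commit success before the WAL flush
# reaches disk. A crash may drop recently committed transactions, but
# the database stays consistent.
synchronous_commit = off
```

It can also be set per-session (SET synchronous_commit = off;) so that only traffic that can tolerate the loss takes the risk.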
Szymon Kosok wrote:
> I've found in the mailing list archives that scale = 1 is not a good
> idea. So we ran pgbench -s 200 (our database is ~3 GB) -c 10 -t 3000
> and got about ~600 TPS. Good or bad?

pgbench in its default mode only really tests commit rate, and often that's not what is actually important to people. Your results are normal if you don't have a battery-backed RAID controller. In that case, your drives are only capable of committing once per disk rotation, so with 7200 RPM drives that's no more than 120 times per second. On each physical disk commit, PostgreSQL will also include any other pending transactions that are waiting around. So what I suspect you're seeing is about 100 commits/second, with on average 6 of the 10 clients having something ready to commit each time. That's what I normally see when running pgbench on regular hard drives without a RAID controller: somewhere around 500 commits/second.

If you change the number of clients to 1 you'll find out what the commit rate for a single client is; that should help validate whether my suspicion is correct. I'd expect a fairly linear increase from 100 to ~600 TPS as your client count goes from 1 to 10, topping out at under 1000 TPS even with much higher client counts.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us
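The rotation-limited arithmetic above can be checked directly; the 6-clients-per-group-commit average is the estimate from the message, not a measured value:

```python
# Back-of-the-envelope check of the rotation-limited commit rate.
rpm = 7200
max_commits_per_sec = rpm / 60          # one commit per rotation -> 120/s

observed_commit_rate = 100              # roughly what the drive achieves
avg_clients_per_commit = 6              # estimate: 6 of 10 clients piggyback
estimated_tps = observed_commit_rate * avg_clients_per_commit

print(max_commits_per_sec)  # 120.0
print(estimated_tps)        # 600
```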