Re: dbt2 NOTPM numbers - Mailing list pgsql-performance
From | Markus Schiltknecht |
---|---|
Subject | Re: dbt2 NOTPM numbers |
Date | |
Msg-id | 466D6A9E.4010703@bluegap.ch |
In response to | Re: dbt2 NOTPM numbers (Jim Nasby <decibel@decibel.org>) |
Responses | Re: dbt2 NOTPM numbers |
List | pgsql-performance |
Hi,

Jim Nasby wrote:
> I don't think that kind of testing is useful for good raid controllers
> on RAID5/6, because the controller will just be streaming the data out;
> it'll compute the parity blocks on the fly and just stream data to the
> drives as fast as possible.

That's why I called it 'simplistic throughput testing'...

> But that's not how writes in the database work (except for WAL); you're
> writing stuff all over the place, none of which is streamed. So in the
> best case (the entire stripe being updated is in the controller's
> cache), at a minimum it's going to have to write data + parity (* 2 for
> RAID 6, IIRC) for every write. But any real-sized database is going to
> be far larger than your raid cache, which means there's a good chance a
> block being written will no longer have its stripe in cache. In that
> case, the controller is going to have to read a bunch of data back off
> the drive, which is going to clobber performance.

I'm well aware. Our workload (hopefully) has a much lower write/read ratio than dbt2's, so RAID 6 might work anyway.

> Now, add that performance bottleneck on top of your WAL writes and
> you're in real trouble.

Well, I'm basically surprised by the low NOTPM numbers compared to my desktop system, which also does around 200 NOTPMs with only two platters in a RAID 1 config... How can a server with four cores and eight platters be equally slow?

Anyway, I've now reconfigured the system with RAID 1+0 and got more than twice the NOTPMs (see the back-of-the-envelope write-penalty sketch further down):

                                Response Time (s)
 Transaction      %    Average :    90th %        Total        Rollbacks      %
------------  -----  ---------------------  -----------  ---------------  -----
    Delivery   3.84      204.733 : 241.998          704                0   0.00
   New Order  45.77      203.651 : 242.847         8382               75   0.90
Order Status   4.32      199.184 : 238.081          792                0   0.00
     Payment  42.02      198.969 : 236.549         7695                0   0.00
 Stock Level   4.04      198.668 : 236.113          740                0   0.00
------------  -----  ---------------------  -----------  ---------------  -----

567.72 new-order transactions per minute (NOTPM)
14.5 minute duration
0 total unknown errors
529 second(s) ramping up

I still feel that 550 is pretty low, and the response times are beyond good and evil. As vmstat.out tells us, the CPUs are still pretty much idle or waiting most of the time:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b    swpd    free   buff    cache   si   so    bi    bo   in    cs us sy id wa
 0  3  494560 2181964      8  1787680   13   15   317   242  140     2  4  1 72 23
 0  9  494560 1558892      8  2298348    0    0  2973  2018  584  1114  2  1 76 21
 1 14  494496  424116      8  3316000    2    0  5613  9293  935  2943  5  1 29 65
 0 15  452840  150148      8  3487160  738    3  5662  8709  925  3444  5  2 21 73
 0 11  439172  151052      8  3386556  263    0  5690  8293  969  4145  5  2 23 70
 0 17  438996  149748      8  3308184   57    6  5036  7174  902  4104  5  2 25 69
 1 25  439940  150344      8  3228304    9   28  4757  7479  922  4269  5  2 26 67

For everybody interested, these settings differ from the Pg 8.2 default postgresql.conf:

listen_addresses = '*'
port = 54321
shared_buffers = 2048MB
work_mem = 10MB
maintenance_work_mem = 64MB
#max_stack_depth = 4MB
max_fsm_pages = 409600
checkpoint_segments = 6
checkpoint_timeout = 1h
effective_cache_size = 3800MB
log_min_duration_statement = 500

For dbt2, I've used 500 warehouses and 90 concurrent connections, default values for everything else.
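As a rough sanity check on the checkpoint settings listed above: assuming a checkpoint is forced every checkpoint_segments * 16 MB of WAL, and plugging in some purely hypothetical WAL rates (I haven't measured mine), a quick sketch:

# rough checkpoint-frequency estimate for the settings above
# assumption: a checkpoint is forced every checkpoint_segments * 16 MB of WAL
# assumption: the WAL rates below are made-up figures for illustration only
checkpoint_segments = 6
segment_mb = 16
wal_between_checkpoints_mb = checkpoint_segments * segment_mb  # 96 MB
for wal_rate_mb_per_s in (0.5, 1.0, 2.0):  # hypothetical write rates
    seconds = wal_between_checkpoints_mb / wal_rate_mb_per_s
    print("%.1f MB/s WAL -> checkpoint roughly every %.0f s" % (wal_rate_mb_per_s, seconds))

With only 96 MB of WAL between forced checkpoints, the 1h checkpoint_timeout probably never kicks in during a run.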
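And regarding the RAID 1+0 reconfiguration above, here is the back-of-the-envelope write-penalty arithmetic I referred to, using the usual penalty factors (2/4/6 disk I/Os per random write for RAID 10/5/6) and a ballpark figure of ~100 random IOPS per SATA spindle, which I haven't actually measured:

# theoretical random-write IOPS for 8 spindles under different RAID levels
# assumption: ~100 random IOPS per 7200 rpm SATA disk (ballpark, not measured)
# write penalty: RAID 10 = 2, RAID 5 = 4, RAID 6 = 6 disk I/Os per random write
disks = 8
iops_per_disk = 100
for level, penalty in (("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)):
    print("%s: ~%d random write IOPS" % (level, disks * iops_per_disk // penalty))

That would put RAID 10 at roughly two to three times the random-write capacity of a parity RAID on the same eight disks, which is at least in the same ballpark as the NOTPM jump I saw.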
Do I simply have to put more RAM (currently 4GB) into that machine? Or what else can be wrong? Is anybody else seeing low performance with Areca SATA controllers? (In my case: "Areca Technology Corp. ARC-1260 16-Port PCI-Express to SATA RAID Controller", according to lspci.)

Then again, maybe I'm just expecting too much...

Regards

Markus