Thread: Re: [ADMIN] v7.1b4 bad performance
On Sun, 18 Feb 2001, Dmitry Morozovsky wrote:

DM> I just did the experiment of increasing HZ to 1000 on my own machine
DM> (PII 374). Your test program reports 2 ms instead of 20. The other side
DM> of increasing HZ is surely more scheduler overhead. Anyway, it's
DM> a bit of data to dig into, I suppose ;-)
DM>
DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM
DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)

Oh, I forgot to paste the results from the original system (with HZ=100).
Here they are:

>> delay = 5

number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 47.422866 (including connections establishing)
tps = 47.493439 (excluding connections establishing)

number of clients: 10
number of transactions per client: 100
number of transactions actually processed: 1000/1000
tps = 37.930605 (including connections establishing)
tps = 38.308613 (excluding connections establishing)

number of clients: 20
number of transactions per client: 50
number of transactions actually processed: 1000/1000
tps = 35.757531 (including connections establishing)
tps = 36.420532 (excluding connections establishing)

>> delay = 0

number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 111.521859 (including connections establishing)
tps = 111.904026 (excluding connections establishing)

number of clients: 10
number of transactions per client: 100
number of transactions actually processed: 1000/1000
tps = 62.808216 (including connections establishing)
tps = 63.819590 (excluding connections establishing)

number of clients: 20
number of transactions per client: 50
number of transactions actually processed: 1000/1000
tps = 64.250431 (including connections establishing)
tps = 66.438067 (excluding connections establishing)

So I suppose (very preliminarily, of course ;):

1 - at least for dedicated PostgreSQL servers it _may_ be reasonable to
    increase HZ;
2 - there is still no advantage to using delay != 0.

Your ideas?
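For reference, the runs above correspond to pgbench invocations along these
lines (a sketch: the database name is an assumption, and the two result sets
are taken to differ only in the server's commit_delay setting):

    $ pgbench -i pgbench                # create and populate the test tables
    $ pgbench -c 1 -t 1000 pgbench      # 1 client, 1000 transactions
    $ pgbench -c 10 -t 100 pgbench      # 10 clients, 100 transactions each
    $ pgbench -c 20 -t 50 pgbench       # 20 clients, 50 transactions each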
DM>
DM> >> default delay (5 us)
DM>
DM> number of clients: 1
DM> number of transactions per client: 1000
DM> number of transactions actually processed: 1000/1000
DM> tps = 96.678008 (including connections establishing)
DM> tps = 96.982619 (excluding connections establishing)
DM>
DM> number of clients: 10
DM> number of transactions per client: 100
DM> number of transactions actually processed: 1000/1000
DM> tps = 77.538398 (including connections establishing)
DM> tps = 79.126914 (excluding connections establishing)
DM>
DM> number of clients: 20
DM> number of transactions per client: 50
DM> number of transactions actually processed: 1000/1000
DM> tps = 68.448429 (including connections establishing)
DM> tps = 70.957500 (excluding connections establishing)
DM>
DM> >> delay of 0
DM>
DM> number of clients: 1
DM> number of transactions per client: 1000
DM> number of transactions actually processed: 1000/1000
DM> tps = 111.939751 (including connections establishing)
DM> tps = 112.335089 (excluding connections establishing)
DM>
DM> number of clients: 10
DM> number of transactions per client: 100
DM> number of transactions actually processed: 1000/1000
DM> tps = 84.262936 (including connections establishing)
DM> tps = 86.152702 (excluding connections establishing)
DM>
DM> number of clients: 20
DM> number of transactions per client: 50
DM> number of transactions actually processed: 1000/1000
DM> tps = 79.678831 (including connections establishing)
DM> tps = 83.106418 (excluding connections establishing)
DM>
DM> Results are very close... Another thing to dig into.
DM>
DM> BTW, postgres parameters were: -B 256 -F -i -S
DM>
DM> DM> BTW, for modern versions of FreeBSD kernels, there is an HZ kernel option
DM> DM> which describes the maximum timeslice granularity (actually, the HZ value is
DM> DM> the number of timeslice periods per second, with a default of 100 = 10 ms).
DM> DM> On modern CPUs HZ may be increased to at least 1000, and sometimes even to
DM> DM> 5000 (unfortunately, I don't have a test platform at hand).
DM> DM>
DM> DM> So, maybe you can test select() granularity at the ./configure phase and
DM> DM> then define the default commit_delay accordingly.
DM> DM>
DM> DM> Your thoughts?

Sincerely,
D.Marck                                     [DM5020, DM268-RIPE, DM3-RIPN]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
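For anyone who wants to repeat the HZ experiment on FreeBSD 4.x, the clock
rate is a kernel configuration option, so the change is a single line in the
kernel config file followed by a rebuild:

    options HZ=1000    # timer ticks per second; the default 100 = 10 ms granularity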
Dmitry Morozovsky wrote:
> On Sun, 18 Feb 2001, Dmitry Morozovsky wrote:
>
> DM> I just did the experiment of increasing HZ to 1000 on my own machine
> DM> (PII 374). Your test program reports 2 ms instead of 20. The other side
> DM> of increasing HZ is surely more scheduler overhead. Anyway, it's
> DM> a bit of data to dig into, I suppose ;-)
> DM>
> DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM
> DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)

Is this unmodified pgbench, or does it have Hiroshi's tweaked behaviour of
connecting each client to its own database, so that locking and such does
not overshadow the possible benefits (was it about 15%?) of delay > 1?

Also, IIRC Tom suggested running with at least -B 1024 if you can.

-----------------
Hannu
On Fri, Feb 23, 2001 at 01:09:37PM +0200, Hannu Krosing wrote:
> Dmitry Morozovsky wrote:
>
> > DM> I just did the experiment of increasing HZ to 1000 on my own machine
> > DM> (PII 374). Your test program reports 2 ms instead of 20. The other side
> > DM> of increasing HZ is surely more scheduler overhead. Anyway, it's
> > DM> a bit of data to dig into, I suppose ;-)
> > DM>
> > DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM
> > DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)
>
> Is this unmodified pgbench, or does it have Hiroshi's tweaked behaviour of
> connecting each client to its own database, so that locking and such does
> not overshadow the possible benefits (was it about 15%?) of delay > 1?
>
> Also, IIRC Tom suggested running with at least -B 1024 if you can.

Just try this:

    explain select * from <tablename> where <fieldname> = <any_value>

(Use an indexed field for <fieldname>.) If postgres is using a sequential
scan instead of an index scan, you have to vacuum your database. This will
REALLY remove deleted data from your indexes.

Hope it will work,

Dave Mertens
System Administrator ISM, Netherlands
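A concrete form of that check, assuming the stock pgbench tables (accounts,
with its index on aid) in a database named pgbench:

    $ psql pgbench -c "EXPLAIN SELECT * FROM accounts WHERE aid = 42;"
    # if the plan shows a sequential scan, vacuum and check again:
    $ psql pgbench -c "VACUUM ANALYZE accounts;"
    $ psql pgbench -c "EXPLAIN SELECT * FROM accounts WHERE aid = 42;"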
Hannu Krosing <hannu@tm.ee> writes:
> Is this unmodified pgbench, or does it have Hiroshi's tweaked behaviour of
> connecting each client to its own database, so that locking and such does
> not overshadow the possible benefits (was it about 15%?) of delay > 1?

I didn't much like that approach to altering the test, since it also
means that all the clients are working with separate tables and hence
not able to share read I/O; that doesn't seem like it's the same
benchmark at all.  What would make more sense to me is to increase the
number of rows in the branches table.

Right now, at the default "scale factor" of 1, pgbench makes tables of
these sizes:

	accounts	100000
	branches	1
	history		0	(filled during test)
	tellers		10

It seems to me that the branches table should have at least 10 to 100
entries, and tellers about 10 times whatever branches is.  100000
accounts rows seems enough, though.

Making such a change would render results not comparable with the prior
pgbench, but that would be true with Hiroshi's change too.  Alternatively
we could just say that we won't believe any numbers taken at scale
factors less than, say, 10, but I doubt we really need million-row
accounts tables in order to learn anything...

			regards, tom lane
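For reference, the scale factor multiplies all of those row counts: at -s N,
pgbench creates N branches, 10*N tellers, and 100000*N accounts, so a bigger
branches table is one flag away (the database name here is an assumption):

    $ pgbench -i -s 10 pgbench    # branches=10, tellers=100, accounts=1000000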
> I didn't much like that approach to altering the test, since it also
> means that all the clients are working with separate tables and hence
> not able to share read I/O; that doesn't seem like it's the same
> benchmark at all.  What would make more sense to me is to increase the
> number of rows in the branches table.
>
> Right now, at the default "scale factor" of 1, pgbench makes tables of
> these sizes:
>
> 	accounts	100000
> 	branches	1
> 	history		0	(filled during test)
> 	tellers		10
>
> It seems to me that the branches table should have at least 10 to 100
> entries, and tellers about 10 times whatever branches is.  100000
> accounts rows seems enough, though.

Those numbers are defined in the TPC-B spec. But pgbench is not an
official test tool anyway, so you could modify it if you like. That is
the benefit of open source :-)
--
Tatsuo Ishii
Tatsuo Ishii <t-ishii@sra.co.jp> writes:
>> It seems to me that the branches table should have at least 10 to 100
>> entries, and tellers about 10 times whatever branches is.  100000
>> accounts rows seems enough, though.

> Those numbers are defined in the TPC-B spec.

Ah.  And of course, the TPC bunch never thought anyone would be
interested in the results with scale factors as tiny as one ;-), so
they didn't see any problem with it.

Okay, plan B then: let's ask people to redo their benchmarks with
-s bigger than one.  Now, how much bigger?

To the extent that you think this is a model of a real bank, it should
be obvious that the number of concurrent transactions cannot exceed the
number of tellers; there should never be any write contention on a
teller's table row, because only that teller (client) should be issuing
transactions against it.  Contention on a branch's row is realistic,
but not from more clients than there are tellers in the branch.

As a rule of thumb, then, we could say that the benchmark's results are
not to be believed for numbers of clients exceeding perhaps 5 times the
scale factor, ie, half the number of teller rows (so that it's not too
likely we will have contention on a teller row).

			regards, tom lane
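Worked through for the 20-client runs earlier in the thread: the rule wants
clients <= 5 * scale factor, so 20 clients call for -s 4 or more (40 teller
rows). A hypothetical invocation:

    $ pgbench -i -s 4 pgbench        # 4 branches, 40 tellers, 400000 accounts
    $ pgbench -c 20 -t 50 pgbench    # 20 clients = 5 * scale, right at the limit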
> -----Original Message-----
> From: Tom Lane
>
> Hannu Krosing <hannu@tm.ee> writes:
> > Is this unmodified pgbench, or does it have Hiroshi's tweaked behaviour of
> > connecting each client to its own database, so that locking and such does
> > not overshadow the possible benefits (was it about 15%?) of delay > 1?
>
> I didn't much like that approach to altering the test, since it also
> means that all the clients are working with separate tables and hence
> not able to share read I/O; that doesn't seem like it's the same
> benchmark at all.

I agree with you on this point. Generally speaking, the benchmark has
little meaning if there are no conflicts in the test case. I only borrowed
pgbench's source code to implement my test cases; note that there's only
one database in my last test case. My modified "pgbench" isn't pgbench any
more, and I didn't intend to change pgbench's spec like that. Probably it
was my mistake to have posted my test cases in the form of a patch; my
intention was to clarify how my test cases differ.

However, heavy conflicts at scaling factor 1 don't seem preferable, at
least as the default for pgbench.

Regards,
Hiroshi Inoue
> Okay, plan B then: let's ask people to redo their benchmarks with
> -s bigger than one.  Now, how much bigger?
>
> To the extent that you think this is a model of a real bank, it should
> be obvious that the number of concurrent transactions cannot exceed the
> number of tellers; there should never be any write contention on a
> teller's table row, because only that teller (client) should be issuing
> transactions against it.  Contention on a branch's row is realistic,
> but not from more clients than there are tellers in the branch.
>
> As a rule of thumb, then, we could say that the benchmark's results are
> not to be believed for numbers of clients exceeding perhaps 5 times the
> scale factor, ie, half the number of teller rows (so that it's not too
> likely we will have contention on a teller row).

At least -s 5 seems reasonable to me too. Maybe we should make it the
default setting for pgbench?
--
Tatsuo Ishii
On Fri, 23 Feb 2001, Hannu Krosing wrote:

HK> > DM> I just did the experiment of increasing HZ to 1000 on my own machine
HK> > DM> (PII 374). Your test program reports 2 ms instead of 20. The other side
HK> > DM> of increasing HZ is surely more scheduler overhead. Anyway, it's
HK> > DM> a bit of data to dig into, I suppose ;-)
HK> > DM>
HK> > DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM
HK> > DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)
HK>
HK> Is this unmodified pgbench, or does it have Hiroshi's tweaked behaviour of
HK> connecting each client to its own database, so that locking and such does
HK> not overshadow the possible benefits (was it about 15%?) of delay > 1?
HK>
HK> Also, IIRC Tom suggested running with at least -B 1024 if you can.

It was the original pgbench. Maybe during this weekend I'll build a new
kernel with a big SHM table and try to test with a larger -B (for now,
-B 256 is the most I can set).

Sincerely,
D.Marck                                     [DM5020, DM268-RIPE, DM3-RIPN]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
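On FreeBSD 4.x the SysV shared memory ceiling is likewise a kernel
configuration matter. A sketch that should leave room for -B 1024 (8 MB of
8 kB buffers plus overhead); the exact page counts are assumptions:

    options SYSVSHM
    options SHMMAXPGS=4096    # max segment size in 4 kB pages, i.e. 16 MB
    options SHMALL=4096       # total shared memory pages allowed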