Thread: High context switches occurring

High context switches occurring

From
"Anjan Dave"
Date:

Hi,

 

One of our PG servers is experiencing extreme slowness, and there are hundreds of SELECTs building up. I am not sure if heavy context switching is the cause of this or if something else is causing it.

 

Is this pretty much the final word on this issue?

http://archives.postgresql.org/pgsql-performance/2004-04/msg00249.php

 

procs                      memory      swap          io     system         cpu
 r  b   swpd   free   buff  cache              si   so   bi    bo   in    cs       us sy id wa
 2  0     20 2860544 124816 8042544    0    0     0     0    0     0  0  0  0  0
 2  0     20 2860376 124816 8042552    0    0     0    24  157 115322 13 10 76  0
 3  0     20 2860364 124840 8042540    0    0     0   228  172 120003 12 10 77  0
 2  0     20 2860364 124840 8042540    0    0     0    20  158 118816 15 10 75  0
 2  0     20 2860080 124840 8042540    0    0     0    10  152 117858 12 11 77  0
 1  0     20 2860080 124848 8042572    0    0     0   210  202 114724 14 10 76  0
 2  0     20 2860080 124848 8042572    0    0     0    20  169 114843 13 10 77  0
 3  0     20 2859908 124860 8042576    0    0     0   188  180 115134 14 11 75  0
 3  0     20 2859848 124860 8042576    0    0     0    20  173 113470 13 10 77  0
 2  0     20 2859836 124860 8042576    0    0     0    10  157 112839 14 11 75  0
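[Editor's note: the cs column above can be pulled out and averaged with a quick awk one-liner. A sketch using three of the rows above saved to a sample file; field position 12 is assumed from this vmstat layout:]

```shell
# Average the cs (context switches/sec) column from saved vmstat output.
# Field 12 matches the column layout shown above (2.4-era vmstat).
cat > /tmp/vmstat.sample <<'EOF'
 2  0     20 2860376 124816 8042552    0    0     0    24  157 115322 13 10 76  0
 3  0     20 2860364 124840 8042540    0    0     0   228  172 120003 12 10 77  0
 2  0     20 2860364 124840 8042540    0    0     0    20  158 118816 15 10 75  0
EOF
awk '{ sum += $12; n++ } END { printf "avg cs/s: %d\n", sum / n }' /tmp/vmstat.sample
```

Watching the number live is just `vmstat 1`; a sustained six-figure rate on this class of hardware is the storm under discussion.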

 

The system seems fine on the iowait/memory side; the CPU time is mostly going to servicing the context switches. Here's the top output:

 

11:54:57  up 59 days, 14:11,  2 users,  load average: 1.13, 1.66, 1.52
282 processes: 281 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total   13.8%    0.0%    9.7%   0.0%     0.0%    0.0%   76.2%
           cpu00   12.3%    0.0%   10.5%   0.0%     0.0%    0.1%   76.8%
           cpu01   12.1%    0.0%    6.1%   0.0%     0.0%    0.1%   81.5%
           cpu02   10.9%    0.0%    9.1%   0.0%     0.0%    0.0%   79.9%
           cpu03   19.4%    0.0%   14.9%   0.0%     0.0%    0.0%   65.6%
           cpu04   13.9%    0.0%   11.1%   0.0%     0.0%    0.0%   74.9%
           cpu05   14.9%    0.0%    9.1%   0.0%     0.0%    0.0%   75.9%
           cpu06   12.9%    0.0%    8.9%   0.0%     0.0%    0.0%   78.1%
           cpu07   14.3%    0.0%    8.1%   0.0%     0.1%    0.0%   77.3%
Mem:  12081720k av, 9273304k used, 2808416k free,       0k shrd,  126048k buff
                   4686808k actv, 3211872k in_d,  170240k in_c
Swap: 4096532k av,      20k used, 4096512k free                 8044072k cached

 

 

PostgreSQL 7.4.7 on i686-redhat-linux-gnu

Red Hat Enterprise Linux AS release 3 (Taroon Update 5)

Linux vl-pe6650-004 2.4.21-32.0.1.ELsmp

 

This is a Dell Quad XEON. Hyperthreading is turned on, and I am planning to turn it off as soon as I get a chance to bring it down.

 

WAL is on separate drives from the OS and database.

 

Appreciate any inputs, please.

 

Thanks,
Anjan

 

 

Re: High context switches occurring

From
Vivek Khera
Date:

On Nov 22, 2005, at 11:59 AM, Anjan Dave wrote: 

This is a Dell Quad XEON. Hyperthreading is turned on, and I am planning to turn it off as soon as I get a chance to bring it down.


You should probably also upgrade to Pg 8.0 or newer, since this is a known problem with XEON processors and older Postgres versions.  Upgrading Pg may solve your problem or it may not.  It is just a quirk of XEON processors...


Re: High context switches occurring

From
Tom Lane
Date:
Vivek Khera <vivek@khera.org> writes:
> On Nov 22, 2005, at 11:59 AM, Anjan Dave wrote:
>> This is a Dell Quad XEON. Hyperthreading is turned on, and I am
>> planning to turn it off as soon as I get a chance to bring it down.

> You should probably also upgrade to Pg 8.0 or newer since it is a
> known problem with XEON processors and older postgres versions.
> Upgrading Pg may solve your problem or it may not.

PG 8.1 is the first release that has a reasonable probability of
avoiding heavy contention for the buffer manager lock when there
are multiple CPUs.  If you're going to update to try to fix this,
you need to go straight to 8.1.

I've recently been chasing a report from Rob Creager that seems to
indicate contention on SubTransControlLock, so the slru code is
likely to be our next bottleneck to fix :-(

            regards, tom lane

Re: High context switches occurring

From
"Anjan Dave"
Date:
Thanks, guys, I'll start planning on upgrading to PG8.1

Would this problem change its nature in any way on the recent Dual-Core
Intel XEON MP machines?

Thanks,
Anjan

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Tuesday, November 22, 2005 12:36 PM
To: Vivek Khera
Cc: Postgresql Performance; Anjan Dave
Subject: Re: [PERFORM] High context switches occurring



Re: High context switches occurring

From
Tom Lane
Date:
"Anjan Dave" <adave@vantage.com> writes:
> Would this problem change its nature in any way on the recent Dual-Core
> Intel XEON MP machines?

Probably not much.

There's some evidence that Opterons have less of a problem than Xeons
in multi-chip configurations, but we've seen CS thrashing on Opterons
too.  I think the issue is probably there to some extent in any modern
SMP architecture.

            regards, tom lane

Re: High context switches occurring

From
"Anjan Dave"
Date:
Is there any way to get temporary relief from this context-switching
storm? Does restarting the postmaster help?

It seems that I can recreate the heavy CS with just one SELECT
statement...and then when multiple such SELECT queries are coming in,
things just get hosed up until we cancel a bunch of queries...

Thanks,
Anjan


-----Original Message-----
From: Anjan Dave
Sent: Tuesday, November 22, 2005 2:24 PM
To: Tom Lane; Vivek Khera
Cc: Postgresql Performance
Subject: Re: [PERFORM] High context switches occurring



Re: High context switches occurring

From
Scott Marlowe
Date:
On Tue, 2005-11-22 at 14:33, Anjan Dave wrote:
> Is there any way to get a temporary relief from this Context Switching
> storm? Does restarting postmaster help?
>
> It seems that I can recreate the heavy CS with just one SELECT
> statement...and then when multiple such SELECT queries are coming in,
> things just get hosed up until we cancel a bunch of queries...

Is your machine a hyperthreaded one?  Some folks have found that turning
off hyperthreading helps.  I know it made my servers better behaved in
the past.

Re: High context switches occurring

From
Scott Marlowe
Date:
P.S. As a follow-up to my last post: I don't know if turning off HT
actually lowered the number of context switches, just that it made my
servers run faster.

Re: High context switches occurring

From
"Anjan Dave"
Date:
Yes, it's turned on; unfortunately, it got overlooked during setup
until now...!

It's mostly a 'read' application, I increased the vm.max-readahead to
2048 from the default 256, after which I've not seen the CS storm,
though it could be incidental.

Thanks,
Anjan
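
[Editor's note: the readahead knob mentioned above is a 2.4-kernel sysctl. A sketch of how it would have been changed on a RHEL 3 box of that era; the paths and names are assumed from that kernel series, and the value does not persist across reboots unless also added to /etc/sysctl.conf:]

```shell
# Inspect, then raise, the 2.4-kernel readahead setting (needs root).
sysctl vm.max-readahead
sysctl -w vm.max-readahead=2048
# Equivalent via /proc:
echo 2048 > /proc/sys/vm/max-readahead
```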

-----Original Message-----
From: Scott Marlowe [mailto:smarlowe@g2switchworks.com]
Sent: Tuesday, November 22, 2005 3:38 PM
To: Anjan Dave
Cc: Tom Lane; Vivek Khera; Postgresql Performance
Subject: Re: [PERFORM] High context switches occurring



Re: High context switches occurring

From
Simon Riggs
Date:
On Tue, 2005-11-22 at 18:17 -0500, Anjan Dave wrote:

> It's mostly a 'read' application, I increased the vm.max-readahead to
> 2048 from the default 256, after which I've not seen the CS storm,
> though it could be incidental.

Can you verify this, please?

Turn it back down again, try the test, then reset and try the test.

If that is a repeatable way of recreating one manifestation of the
problem then we will be further ahead than we are now.

Thanks,

Best Regards, Simon Riggs


Re: High context switches occurring

From
"Anjan Dave"
Date:
The offending SELECT query that invoked the CS storm was optimized by
folks here last night, so it's hard to say if the VM setting made a
difference. I'll give it a try anyway.

Thanks,
Anjan

-----Original Message-----
From: Simon Riggs [mailto:simon@2ndquadrant.com]
Sent: Wednesday, November 23, 2005 1:14 PM
To: Anjan Dave
Cc: Scott Marlowe; Tom Lane; Vivek Khera; Postgresql Performance
Subject: Re: [PERFORM] High context switches occurring




Re: High context switches occurring

From
"Anjan Dave"
Date:
Simon,

I tested it by running two of those queries simultaneously (the
'unoptimized' ones), and it doesn't make any difference whether
vm.max-readahead is 256 or 2048... the optimized query runs in a snap.

Thanks,
Anjan

-----Original Message-----
From: Anjan Dave
Sent: Wednesday, November 23, 2005 1:33 PM
To: Simon Riggs
Cc: Scott Marlowe; Tom Lane; Vivek Khera; Postgresql Performance
Subject: Re: [PERFORM] High context switches occurring



Re: High context switches occurring

From
Sven Geisler
Date:
Hi Anjan,

I can back Scott up: you should turn off HT if you see high values for CS.

We have a few customers running web-based 3-tier applications with
PostgreSQL, and we had to turn off HT to get better overall performance.
The issue is behavior under high load: I've noticed that with HT on, the
server collapses sooner.

Just a question: which version of the XEON do you have, and what memory
architecture does the server use?

I think Dual-Core XEONs are not an issue. One of our customers has been
running a 4-way Dual-Core Opteron 875 for a few months now; we use Pg 8.0.3
and it runs perfectly. I should say that we use a special patch from Tom
which fixes an issue with the locking of shared buffers on the Opteron.
I've noticed that this patch is also useful for XEONs with EM64T.

Best regards
Sven.


--

Sven Geisler <sgeisler@aeccom.com> Tel +49.30.5362.1627 Fax .1638
Senior Developer,    AEC/communications GmbH    Berlin,   Germany

Re: High context switches occurring

From
"Anjan Dave"
Date:
I ran a fairly exhaustive pgbench on 2 test machines I have (quad dual-core
Intel and Opteron). Of course the Opteron was much faster, but
interestingly, it was experiencing 3x more context switches than the
Intel box (up to 100k, versus ~30k avg on the Dell). Both are RHEL 4.0
64-bit/PG 8.1 64-bit.

Sun (v40z):
-bash-3.00$ time pgbench -c 1000 -t 30 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 1000
number of transactions per client: 30
number of transactions actually processed: 30000/30000
tps = 45.871234 (including connections establishing)
tps = 46.092629 (excluding connections establishing)

real    10m54.240s
user    0m34.894s
sys     3m9.470s


Dell (6850):
-bash-3.00$ time pgbench -c 1000 -t 30 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 1000
number of transactions per client: 30
number of transactions actually processed: 30000/30000
tps = 22.088214 (including connections establishing)
tps = 22.162454 (excluding connections establishing)

real    22m38.301s
user    0m43.520s
sys     5m42.108s

Thanks,
Anjan

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Tuesday, November 22, 2005 2:42 PM
To: Anjan Dave
Cc: Vivek Khera; Postgresql Performance
Subject: Re: [PERFORM] High context switches occurring

"Anjan Dave" <adave@vantage.com> writes:
> Would this problem change it's nature in any way on the recent
Dual-Core
> Intel XEON MP machines?

Probably not much.

There's some evidence that Opterons have less of a problem than Xeons
in multi-chip configurations, but we've seen CS thrashing on Opterons
too.  I think the issue is probably there to some extent in any modern
SMP architecture.

            regards, tom lane


Re: High context switches occurring

From
Vivek Khera
Date:
On Dec 6, 2005, at 2:04 PM, Anjan Dave wrote:

> interestingly, it was experiencing 3x more context switches than the
> Intel box (upto 100k, versus ~30k avg on Dell). Both are RH4.0

I'll assume that's context switches per second... so for the Opteron
that's 65,400,000 context switches and for the Dell that's 40,740,000
switches over the duration of the test.  Not so much of a difference...

You see, the Opteron was context switching more because it was doing
more work :-)
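
[Editor's note: Vivek's totals check out. A quick sanity check of the arithmetic, using the run times from the earlier post (10m54s = 654 s, 22m38s = 1358 s) and the thread's rough cs rates:]

```shell
# Total context switches = rate (cs/s) * wall-clock seconds, per Vivek's estimate.
awk 'BEGIN {
    printf "Opteron: %d\n", 100000 * 654    # ~100k cs/s over 654 s
    printf "Dell:    %d\n",  30000 * 1358   # ~30k cs/s over 1358 s
}'
```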



Re: High context switches occurring

From
Tom Lane
Date:
"Anjan Dave" <adave@vantage.com> writes:
> -bash-3.00$ time pgbench -c 1000 -t 30 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 1000
> number of transactions per client: 30
> number of transactions actually processed: 30000/30000
> tps = 45.871234 (including connections establishing)
> tps = 46.092629 (excluding connections establishing)

I can hardly think of a worse way to run pgbench :-(.  These numbers are
about meaningless, for two reasons:

1. You don't want number of clients (-c) much higher than scaling factor
(-s in the initialization step).  The number of rows in the "branches"
table will equal -s, and since every transaction updates one
randomly-chosen "branches" row, you will be measuring mostly row-update
contention overhead if there's more concurrent transactions than there
are rows.  In the case -s 1, which is what you've got here, there is no
actual concurrency at all --- all the transactions stack up on the
single branches row.

2. Running a small number of transactions per client means that
startup/shutdown transients overwhelm the steady-state data.  You should
probably run at least a thousand transactions per client if you want
repeatable numbers.

Try something like "-s 10 -c 10 -t 3000" to get numbers reflecting test
conditions more like what the TPC council had in mind when they designed
this benchmark.  I tend to repeat such a test 3 times to see if the
numbers are repeatable, and quote the middle TPS number as long as
they're not too far apart.

            regards, tom lane
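
[Editor's note: Tom's recipe can be scripted. The pgbench invocations below are shown for shape only and are not executed here; the database name "pgbench" is the one used in the thread, and the sample tps values stand in for real runs. The last line shows one way to pick the middle of three readings:]

```shell
# Initialize once at scale 10, then run three times and quote the median tps:
#   pgbench -i -s 10 pgbench
#   for run in 1 2 3; do pgbench -c 10 -t 3000 pgbench; done
# Picking the median of three tps readings:
printf '%s\n' 45.87 46.09 44.91 | sort -n | sed -n '2p'   # middle value
```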

Re: High context switches occurring

From
Bruce Momjian
Date:
Tom Lane wrote:
> "Anjan Dave" <adave@vantage.com> writes:
> > -bash-3.00$ time pgbench -c 1000 -t 30 pgbench
> > starting vacuum...end.
> > transaction type: TPC-B (sort of)
> > scaling factor: 1
> > number of clients: 1000
> > number of transactions per client: 30
> > number of transactions actually processed: 30000/30000
> > tps = 45.871234 (including connections establishing)
> > tps = 46.092629 (excluding connections establishing)
>
> I can hardly think of a worse way to run pgbench :-(.  These numbers are
> about meaningless, for two reasons:
>
> 1. You don't want number of clients (-c) much higher than scaling factor
> (-s in the initialization step).  The number of rows in the "branches"
> table will equal -s, and since every transaction updates one

Should we throw a warning when someone runs the test this way?

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: High context switches occurring

From
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Tom Lane wrote:
>> 1. You don't want number of clients (-c) much higher than scaling factor
>> (-s in the initialization step).

> Should we throw a warning when someone runs the test this way?

Not a bad idea (though of course only for the "standard" scripts).
Tatsuo, what do you think?

            regards, tom lane

Re: High context switches occurring

From
"Anjan Dave"
Date:
Thanks for your inputs, Tom. I was going after high concurrent clients,
but I should have read this carefully -

-s scaling_factor
                this should be used with -i (initialize) option.
                number of tuples generated will be multiple of the
                scaling factor. For example, -s 100 will imply 10M
                (10,000,000) tuples in the accounts table.
                default is 1.  NOTE: scaling factor should be at least
                as large as the largest number of clients you intend
                to test; else you'll mostly be measuring update
contention.
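
[Editor's note: per the help text above, the row counts for the standard pgbench tables work out to branches = s, tellers = 10*s, accounts = 100000*s (the tellers/branches multipliers are assumed from the pgbench of that era). For the suggested -s 10:]

```shell
# Rows created by "pgbench -i -s 10", per the scaling rules quoted above.
awk -v s=10 'BEGIN {
    printf "branches: %d\n", s
    printf "tellers:  %d\n", s * 10
    printf "accounts: %d\n", s * 100000
}'
```

This is why -c 1000 against -s 1 serializes completely: 1000 clients all updating the single branches row.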

I'll rerun the tests.

Thanks,
Anjan


-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Tuesday, December 06, 2005 6:45 PM
To: Anjan Dave
Cc: Vivek Khera; Postgresql Performance
Subject: Re: [PERFORM] High context switches occurring

"Anjan Dave" <adave@vantage.com> writes:
> -bash-3.00$ time pgbench -c 1000 -t 30 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 1000
> number of transactions per client: 30
> number of transactions actually processed: 30000/30000
> tps = 45.871234 (including connections establishing)
> tps = 46.092629 (excluding connections establishing)

I can hardly think of a worse way to run pgbench :-(.  These numbers are
about meaningless, for two reasons:

1. You don't want number of clients (-c) much higher than scaling factor
(-s in the initialization step).  The number of rows in the "branches"
table will equal -s, and since every transaction updates one
randomly-chosen "branches" row, you will be measuring mostly row-update
contention overhead if there's more concurrent transactions than there
are rows.  In the case -s 1, which is what you've got here, there is no
actual concurrency at all --- all the transactions stack up on the
single branches row.

2. Running a small number of transactions per client means that
startup/shutdown transients overwhelm the steady-state data.  You should
probably run at least a thousand transactions per client if you want
repeatable numbers.

Try something like "-s 10 -c 10 -t 3000" to get numbers reflecting test
conditions more like what the TPC council had in mind when they designed
this benchmark.  I tend to repeat such a test 3 times to see if the
numbers are repeatable, and quote the middle TPS number as long as
they're not too far apart.

            regards, tom lane


Re: High context switches occurring

From
Scott Marlowe
Date:
On Tue, 2005-12-06 at 22:49, Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Tom Lane wrote:
> >> 1. You don't want number of clients (-c) much higher than scaling factor
> >> (-s in the initialization step).
>
> > Should we throw a warning when someone runs the test this way?
>
> Not a bad idea (though of course only for the "standard" scripts).
> Tatsuo, what do you think?

Just to clarify, I think the pgbench program should throw the warning,
not postgresql itself.  Not sure if that's what you were meaning or
not.  Maybe even have it require a switch to run in such a mode, like a
--yes-i-want-to-run-a-meaningless-test switch or something.

Re: High context switches occurring

From
Tatsuo Ishii
Date:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Tom Lane wrote:
> >> 1. You don't want number of clients (-c) much higher than scaling factor
> >> (-s in the initialization step).
>
> > Should we throw a warning when someone runs the test this way?
>
> Not a bad idea (though of course only for the "standard" scripts).
> Tatsuo, what do you think?

That would be annoying, since almost every user would get that kind of
warning. What about improving the README?
--
Tatsuo Ishii
SRA OSS, Inc. Japan

Re: High context switches occurring

From
"Anjan Dave"
Date:
Re-ran it 3 times on each host - 
 
Sun:
-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 10
number of transactions per client: 3000
number of transactions actually processed: 30000/30000
tps = 827.810778 (including connections establishing)
tps = 828.410801 (excluding connections establishing)
real    0m36.579s
user    0m1.222s
sys     0m3.422s

Intel:
-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 10
number of transactions per client: 3000
number of transactions actually processed: 30000/30000
tps = 597.067503 (including connections establishing)
tps = 597.606169 (excluding connections establishing)
real    0m50.380s
user    0m2.621s
sys     0m7.818s

Thanks,
Anjan
 

    -----Original Message----- 
    From: Anjan Dave 
    Sent: Wed 12/7/2005 10:54 AM 
    To: Tom Lane 
    Cc: Vivek Khera; Postgresql Performance 
    Subject: Re: [PERFORM] High context switches occurring 
    
    


Re: High context switches occurring

From
Juan Casero
Date:
Guys -

Help me out here as I try to understand this benchmark.  What is the Sun
hardware and operating system we are talking about here, and what is the Intel
hardware and operating system?  What was the Sun version of PostgreSQL
compiled with?  GCC on Solaris (assuming SPARC) or Sun Studio?  What was
PostgreSQL compiled with on Intel?  GCC on Linux?

Thanks,
Juan

On Monday 19 December 2005 21:08, Anjan Dave wrote:
> Re-ran it 3 times on each host -
> [...]

Re: High context switches occurring

From
Oleg Bartunov
Date:
Hi there,

I see very low performance and high context switches on our
dual itanium2 slackware box (Linux ptah 2.6.14 #1 SMP)
with 8GB of RAM, running 8.1_STABLE. Any tips here?

postgres@ptah:~/cvs/8.1/pgsql/contrib/pgbench$ time pgbench -s 10 -c 10 -t 3000 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 10
number of transactions per client: 3000
number of transactions actually processed: 30000/30000
tps = 163.817425 (including connections establishing)
tps = 163.830558 (excluding connections establishing)

real    3m3.374s
user    0m1.888s
sys     0m2.472s

output from vmstat 2

  2  1      0 4185104 197904 3213888    0    0     0  1456  673  6852 25  1 45 29
  6  0      0 4184880 197904 3213888    0    0     0  1456  673  6317 28  2 49 21
  0  1      0 4184656 197904 3213888    0    0     0  1464  671  7049 25  2 42 31
  3  0      0 4184432 197904 3213888    0    0     0  1436  671  7073 25  1 44 29
  0  1      0 4184432 197904 3213888    0    0     0  1460  671  7014 28  1 42 29
  0  1      0 4184096 197920 3213872    0    0     0  1440  670  7065 25  2 42 31
  0  1      0 4183872 197920 3213872    0    0     0  1444  671  6718 26  2 44 28
  0  1      0 4183648 197920 3213872    0    0     0  1468  670  6525 15  3 50 33
  0  1      0 4184352 197920 3213872    0    0     0  1584  676  6476 12  2 50 36
  0  1      0 4193232 197920 3213872    0    0     0  1424  671  5848 12  1 50 37
  0  0      0 4195536 197920 3213872    0    0     0    20  509   104  0  0 99  1
  0  0      0 4195536 197920 3213872    0    0     0  1680  573    25  0  0 99  1
  0  0      0 4195536 197920 3213872    0    0     0     0  504    22  0  0 100

processor  : 1
vendor     : GenuineIntel
arch       : IA-64
family     : Itanium 2
model      : 2
revision   : 2
archrev    : 0
features   : branchlong
cpu number : 0
cpu regs   : 4
cpu MHz    : 1600.010490
itc MHz    : 1600.010490
BogoMIPS   : 2392.06
siblings   : 1



On Mon, 19 Dec 2005, Anjan Dave wrote:


> Re-ran it 3 times on each host -
>
> Sun:
> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 10
> number of transactions per client: 3000
> number of transactions actually processed: 30000/30000
> tps = 827.810778 (including connections establishing)
> tps = 828.410801 (excluding connections establishing)
> real    0m36.579s
> user    0m1.222s
> sys     0m3.422s
>
> Intel:
> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 10
> number of transactions per client: 3000
> number of transactions actually processed: 30000/30000
> tps = 597.067503 (including connections establishing)
> tps = 597.606169 (excluding connections establishing)
> real    0m50.380s
> user    0m2.621s
> sys     0m7.818s
>
> Thanks,
> Anjan
>
>
>     -----Original Message-----
>     From: Anjan Dave
>     Sent: Wed 12/7/2005 10:54 AM
>     To: Tom Lane
>     Cc: Vivek Khera; Postgresql Performance
>     Subject: Re: [PERFORM] High context switches occurring
>
>
>
>     Thanks for your inputs, Tom. I was going after high concurrent clients,
>     but should have read this carefully -
>
>     -s scaling_factor
>                     this should be used with -i (initialize) option.
>                     number of tuples generated will be multiple of the
>                     scaling factor. For example, -s 100 will imply 10M
>                     (10,000,000) tuples in the accounts table.
>                     default is 1.  NOTE: scaling factor should be at least
>                     as large as the largest number of clients you intend
>                     to test; else you'll mostly be measuring update
>     contention.
>
>     I'll rerun the tests.
>
>     Thanks,
>     Anjan
>
>
>     -----Original Message-----
>     From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
>     Sent: Tuesday, December 06, 2005 6:45 PM
>     To: Anjan Dave
>     Cc: Vivek Khera; Postgresql Performance
>     Subject: Re: [PERFORM] High context switches occurring
>
>     "Anjan Dave" <adave@vantage.com> writes:
>     > -bash-3.00$ time pgbench -c 1000 -t 30 pgbench
>     > starting vacuum...end.
>     > transaction type: TPC-B (sort of)
>     > scaling factor: 1
>     > number of clients: 1000
>     > number of transactions per client: 30
>     > number of transactions actually processed: 30000/30000
>     > tps = 45.871234 (including connections establishing)
>     > tps = 46.092629 (excluding connections establishing)
>
>     I can hardly think of a worse way to run pgbench :-(.  These numbers are
>     about meaningless, for two reasons:
>
>     1. You don't want number of clients (-c) much higher than scaling factor
>     (-s in the initialization step).  The number of rows in the "branches"
>     table will equal -s, and since every transaction updates one
>     randomly-chosen "branches" row, you will be measuring mostly row-update
>     contention overhead if there's more concurrent transactions than there
>     are rows.  In the case -s 1, which is what you've got here, there is no
>     actual concurrency at all --- all the transactions stack up on the
>     single branches row.
>
>     2. Running a small number of transactions per client means that
>     startup/shutdown transients overwhelm the steady-state data.  You should
>     probably run at least a thousand transactions per client if you want
>     repeatable numbers.
>
>     Try something like "-s 10 -c 10 -t 3000" to get numbers reflecting test
>     conditions more like what the TPC council had in mind when they designed
>     this benchmark.  I tend to repeat such a test 3 times to see if the
>     numbers are repeatable, and quote the middle TPS number as long as
>     they're not too far apart.
>
>                             regards, tom lane
>
>
>     ---------------------------(end of broadcast)---------------------------
>     TIP 5: don't forget to increase your free space map settings
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 1: if posting/reading through Usenet, please send an appropriate
>       subscribe-nomail command to majordomo@postgresql.org so that your
>       message can get through to the mailing list cleanly
>

     Regards,
         Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

Re: High context switches occurring

From
"Jignesh K. Shah"
Date:
It basically says pg_xlog is the bottleneck, so move it to the disk with
the best response time you can afford. :-)
Increasing checkpoint_segments doesn't seem to help much. Playing with
wal_sync_method might change the behavior.

For proof: on Solaris, /tmp is essentially a RAM disk... Of course, DO NOT
TRY THIS IN PRODUCTION.

-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 10
number of clients: 10
number of transactions per client: 3000
number of transactions actually processed: 30000/30000
tps = 356.578050 (including connections establishing)
tps = 356.733043 (excluding connections establishing)

real    1m24.396s
user    0m2.550s
sys     0m3.404s
-bash-3.00$ mv pg_xlog /tmp
-bash-3.00$ ln -s /tmp/pg_xlog pg_xlog
-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 10
number of clients: 10
number of transactions per client: 3000
number of transactions actually processed: 30000/30000
tps = 2413.661323 (including connections establishing)
tps = 2420.754581 (excluding connections establishing)

real    0m12.617s
user    0m2.229s
sys     0m2.950s
-bash-3.00$ rm pg_xlog
-bash-3.00$ mv /tmp/pg_xlog pg_xlog
-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 10
number of clients: 10
number of transactions per client: 3000
number of transactions actually processed: 30000/30000
tps = 350.227682 (including connections establishing)
tps = 350.382825 (excluding connections establishing)

real    1m27.595s
user    0m2.537s
sys     0m3.386s
-bash-3.00$


Regards,
Jignesh
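As a quick sanity check on the transcript above (a sketch of my own; the numbers are copied from the runs), the reported tps lines up with transactions divided by wall-clock time, and the ratio quantifies what the WAL flushes were costing:

```python
# Cross-check Jignesh's runs: tps ~= total transactions / wall-clock time.
def tps(transactions: int, seconds: float) -> float:
    return transactions / seconds

on_disk = tps(30_000, 84.396)  # pg_xlog on disk: reported ~356 tps
on_tmp = tps(30_000, 12.617)   # pg_xlog on /tmp: reported ~2413 tps

# The roughly 6.7x speedup from a RAM-backed pg_xlog is the "proof" that
# WAL flush latency, not CPU or data I/O, limits this workload.
speedup = on_tmp / on_disk
```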


Oleg Bartunov wrote:

> Hi there,
>
> I see a very low performance and high context switches on our
> dual itanium2 slackware box (Linux ptah 2.6.14 #1 SMP)
> with 8Gb of RAM, running 8.1_STABLE. Any tips here ?
>
> postgres@ptah:~/cvs/8.1/pgsql/contrib/pgbench$ time pgbench -s 10 -c
> 10 -t 3000 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 10
> number of transactions per client: 3000
> number of transactions actually processed: 30000/30000
> tps = 163.817425 (including connections establishing)
> tps = 163.830558 (excluding connections establishing)
>
> real    3m3.374s
> user    0m1.888s
> sys     0m2.472s
>
> output from vmstat 2
>
>  2  1      0 4185104 197904 3213888    0    0     0  1456  673  6852
> 25  1 45 29
>  6  0      0 4184880 197904 3213888    0    0     0  1456  673  6317
> 28  2 49 21
>  0  1      0 4184656 197904 3213888    0    0     0  1464  671  7049
> 25  2 42 31
>  3  0      0 4184432 197904 3213888    0    0     0  1436  671  7073
> 25  1 44 29
>  0  1      0 4184432 197904 3213888    0    0     0  1460  671  7014
> 28  1 42 29
>  0  1      0 4184096 197920 3213872    0    0     0  1440  670  7065
> 25  2 42 31
>  0  1      0 4183872 197920 3213872    0    0     0  1444  671  6718
> 26  2 44 28
>  0  1      0 4183648 197920 3213872    0    0     0  1468  670  6525
> 15  3 50 33
>  0  1      0 4184352 197920 3213872    0    0     0  1584  676  6476
> 12  2 50 36
>  0  1      0 4193232 197920 3213872    0    0     0  1424  671  5848
> 12  1 50 37
>  0  0      0 4195536 197920 3213872    0    0     0    20  509   104
> 0  0 99  1
>  0  0      0 4195536 197920 3213872    0    0     0  1680  573    25
> 0  0 99  1
>  0  0      0 4195536 197920 3213872    0    0     0     0  504    22
> 0  0 100
>
> processor  : 1
> vendor     : GenuineIntel
> arch       : IA-64
> family     : Itanium 2
> model      : 2
> revision   : 2
> archrev    : 0
> features   : branchlong
> cpu number : 0
> cpu regs   : 4
> cpu MHz    : 1600.010490
> itc MHz    : 1600.010490
> BogoMIPS   : 2392.06
> siblings   : 1
>
>
>
> On Mon, 19 Dec 2005, Anjan Dave wrote:
>
>
>> Re-ran it 3 times on each host -
>>
>> Sun:
>> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
>> starting vacuum...end.
>> transaction type: TPC-B (sort of)
>> scaling factor: 1
>> number of clients: 10
>> number of transactions per client: 3000
>> number of transactions actually processed: 30000/30000
>> tps = 827.810778 (including connections establishing)
>> tps = 828.410801 (excluding connections establishing)
>> real    0m36.579s
>> user    0m1.222s
>> sys     0m3.422s
>>
>> Intel:
>> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
>> starting vacuum...end.
>> transaction type: TPC-B (sort of)
>> scaling factor: 1
>> number of clients: 10
>> number of transactions per client: 3000
>> number of transactions actually processed: 30000/30000
>> tps = 597.067503 (including connections establishing)
>> tps = 597.606169 (excluding connections establishing)
>> real    0m50.380s
>> user    0m2.621s
>> sys     0m7.818s
>>
>> Thanks,
>> Anjan
>>
>>
>>     -----Original Message-----
>>     From: Anjan Dave
>>     Sent: Wed 12/7/2005 10:54 AM
>>     To: Tom Lane
>>     Cc: Vivek Khera; Postgresql Performance
>>     Subject: Re: [PERFORM] High context switches occurring
>>
>>
>>
>>     Thanks for your inputs, Tom. I was going after high concurrent
>> clients,
>>     but should have read this carefully -
>>
>>     -s scaling_factor
>>                     this should be used with -i (initialize) option.
>>                     number of tuples generated will be multiple of the
>>                     scaling factor. For example, -s 100 will imply 10M
>>                     (10,000,000) tuples in the accounts table.
>>                     default is 1.  NOTE: scaling factor should be at
>> least
>>                     as large as the largest number of clients you intend
>>                     to test; else you'll mostly be measuring update
>>     contention.
>>
>>     I'll rerun the tests.
>>
>>     Thanks,
>>     Anjan
>>
>>
>>     -----Original Message-----
>>     From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
>>     Sent: Tuesday, December 06, 2005 6:45 PM
>>     To: Anjan Dave
>>     Cc: Vivek Khera; Postgresql Performance
>>     Subject: Re: [PERFORM] High context switches occurring
>>
>>     "Anjan Dave" <adave@vantage.com> writes:
>>     > -bash-3.00$ time pgbench -c 1000 -t 30 pgbench
>>     > starting vacuum...end.
>>     > transaction type: TPC-B (sort of)
>>     > scaling factor: 1
>>     > number of clients: 1000
>>     > number of transactions per client: 30
>>     > number of transactions actually processed: 30000/30000
>>     > tps = 45.871234 (including connections establishing)
>>     > tps = 46.092629 (excluding connections establishing)
>>
>>     I can hardly think of a worse way to run pgbench :-(.  These
>> numbers are
>>     about meaningless, for two reasons:
>>
>>     1. You don't want number of clients (-c) much higher than scaling
>> factor
>>     (-s in the initialization step).  The number of rows in the
>> "branches"
>>     table will equal -s, and since every transaction updates one
>>     randomly-chosen "branches" row, you will be measuring mostly
>> row-update
>>     contention overhead if there's more concurrent transactions than
>> there
>>     are rows.  In the case -s 1, which is what you've got here, there
>> is no
>>     actual concurrency at all --- all the transactions stack up on the
>>     single branches row.
>>
>>     2. Running a small number of transactions per client means that
>>     startup/shutdown transients overwhelm the steady-state data.  You
>> should
>>     probably run at least a thousand transactions per client if you want
>>     repeatable numbers.
>>
>>     Try something like "-s 10 -c 10 -t 3000" to get numbers
>> reflecting test
>>     conditions more like what the TPC council had in mind when they
>> designed
>>     this benchmark.  I tend to repeat such a test 3 times to see if the
>>     numbers are repeatable, and quote the middle TPS number as long as
>>     they're not too far apart.
>>
>>                             regards, tom lane
>>
>>
>>     ---------------------------(end of
>> broadcast)---------------------------
>>     TIP 5: don't forget to increase your free space map settings
>>
>>
>> ---------------------------(end of broadcast)---------------------------
>> TIP 1: if posting/reading through Usenet, please send an appropriate
>>       subscribe-nomail command to majordomo@postgresql.org so that your
>>       message can get through to the mailing list cleanly
>>
>
>     Regards,
>         Oleg
> _____________________________________________________________
> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
> Sternberg Astronomical Institute, Moscow University, Russia
> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
> phone: +007(495)939-16-83, +007(495)939-23-83
>
> ---------------------------(end of broadcast)---------------------------
> TIP 5: don't forget to increase your free space map settings


Re: High context switches occurring

From
Tom Lane
Date:
Oleg Bartunov <oleg@sai.msu.su> writes:
> I see a very low performance and high context switches on our
> dual itanium2 slackware box (Linux ptah 2.6.14 #1 SMP)
> with 8Gb of RAM, running 8.1_STABLE. Any tips here ?

> postgres@ptah:~/cvs/8.1/pgsql/contrib/pgbench$ time pgbench -s 10 -c 10 -t 3000 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 10

You can't expect any different with more clients than scaling factor :-(.

Note that -s is only effective when supplied with -i; it's basically
ignored during an actual test run.

            regards, tom lane
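Tom's point can be made concrete. A minimal sketch of the correct workflow, assuming the pgbench binary from contrib and an existing "pgbench" database; the guard function is my own addition for illustration, and the actual pgbench invocations are shown in comments:

```shell
# Sketch: correct pgbench workflow, mirroring the thread's "-s 10 -c 10".
scale=10      # rows in "branches"; takes effect at initialization only
clients=10    # concurrent clients
txns=3000     # transactions per client; keep high for steady-state numbers

# Rule of thumb from this thread: scaling factor must be >= client count,
# or you mostly measure row-update contention on the branches table.
check_params() {
    [ "$2" -le "$1" ]
}

check_params "$scale" "$clients" || echo "warning: clients > scale" >&2

# Initialize (this is where -s takes effect):
#   pgbench -i -s $scale pgbench
# Run (any -s given here is ignored; the stored scale is what counts):
#   pgbench -c $clients -t $txns pgbench
```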

Re: High context switches occurring

From
Oleg Bartunov
Date:
On Tue, 20 Dec 2005, Tom Lane wrote:

> Oleg Bartunov <oleg@sai.msu.su> writes:
>> I see a very low performance and high context switches on our
>> dual itanium2 slackware box (Linux ptah 2.6.14 #1 SMP)
>> with 8Gb of RAM, running 8.1_STABLE. Any tips here ?
>
>> postgres@ptah:~/cvs/8.1/pgsql/contrib/pgbench$ time pgbench -s 10 -c 10 -t 3000 pgbench
>> starting vacuum...end.
>> transaction type: TPC-B (sort of)
>> scaling factor: 1
>> number of clients: 10
>
> You can't expect any different with more clients than scaling factor :-(.

Argh :) I copied and pasted from the previous message.

I'm still wondering about the very poor performance of my server. Moving
pgdata to RAID6 helped - about 600 tps. Then I moved pg_xlog to a separate
disk and got strange error messages:

postgres@ptah:~$ time pgbench  -c 10  -t 3000 pgbench
starting vacuum...end.
Client 0 aborted in state 8: ERROR:  integer out of range
Client 7 aborted in state 8: ERROR:  integer out of range

dropdb/createdb helped, but performance is still only about 160 tps.

A low-end AMD64 box with SATA disks gives me ~400 tps, even though the
itanium2's disks are faster (80MB/sec) than the AMD64's (60MB/sec).


     Regards,
         Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
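One possible explanation for why sequential disk speed fails to predict tps here (my own back-of-the-envelope, not from the thread): every commit waits for a WAL fsync, so commit rate is bounded by rotational latency, not bandwidth. The rpm and client figures below are illustrative:

```python
# Back-of-envelope: per-client commit rate is bounded by WAL fsync latency,
# which on a rotating disk costs roughly one revolution per flush.
def max_commits_per_sec(rpm: int, clients: int) -> float:
    revs_per_sec = rpm / 60.0      # one fsync ~ one disk revolution
    return revs_per_sec * clients  # optimistic: clients overlap fully

# e.g. a 10k rpm disk with 10 clients caps out near 1666 commits/sec;
# sequential throughput (60 vs 80 MB/sec) barely matters for this workload.
```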

Re: High context switches occurring

From
Tom Lane
Date:
Oleg Bartunov <oleg@sai.msu.su> writes:
> I still wondering with very poor performance of my server. Moving
> pgdata to RAID6 helped - about 600 tps. Then, I moved pg_xlog to separate
> disk and got strange error messages

> postgres@ptah:~$ time pgbench  -c 10  -t 3000 pgbench
> starting vacuum...end.
> Client 0 aborted in state 8: ERROR:  integer out of range
> Client 7 aborted in state 8: ERROR:  integer out of range

I've seen that too, after re-using an existing pgbench database enough
times.  I think that the way the test script is written, the adjustments
to the branch balances are always in the same direction, and so
eventually the fields overflow.  It's irrelevant to performance though.

            regards, tom lane
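Tom's overflow explanation is easy to sketch. Assuming, as he hypothesizes, that the per-transaction balance adjustments drift in one direction, an int4 branch balance must eventually exceed 2^31 - 1; the drift value below is purely illustrative:

```python
# Illustrate "integer out of range" on a reused pgbench database: a steady
# one-directional drift in a branch balance eventually overflows int4.
INT4_MAX = 2**31 - 1  # PostgreSQL integer (int4) maximum

def runs_until_overflow(drift_per_txn: int, txns_per_run: int) -> int:
    """Whole pgbench runs before the balance exceeds the int4 range."""
    balance, runs = 0, 0
    while balance <= INT4_MAX:
        balance += drift_per_txn * txns_per_run
        runs += 1
    return runs

# With a hypothetical average drift of +50 per transaction and 30000
# transactions per run, overflow arrives after 1432 reuses of the database.
```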

Re: High context switches occurring

From
"Anjan Dave"
Date:
The Sun hardware is a 4-CPU (8-core) v40z; the Dell is a 6850 quad Xeon (8
cores). Both have 16GB RAM and 2 internal drives: one drive has the OS +
data, and the second has pg_xlog.

RedHat AS 4.0 U2 64-bit on both servers, PG 8.1, 64-bit RPMs.

Thanks,
Anjan



-----Original Message-----
From: Juan Casero [mailto:caseroj@comcast.net]
Sent: Monday, December 19, 2005 11:17 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] High context switches occurring

Guys -

Help me out here as I try to understand this benchmark.  What is the Sun
hardware and operating system we are talking about here, and what is the
Intel hardware and operating system?  What was the Sun version of PostgreSQL
compiled with?  Gcc on Solaris (assuming sparc) or Sun Studio?  What was
PostgreSQL compiled with on Intel?  Gcc on Linux?

Thanks,
Juan

On Monday 19 December 2005 21:08, Anjan Dave wrote:
> Re-ran it 3 times on each host -
>
> Sun:
> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 10
> number of transactions per client: 3000
> number of transactions actually processed: 30000/30000
> tps = 827.810778 (including connections establishing)
> tps = 828.410801 (excluding connections establishing)
> real    0m36.579s
> user    0m1.222s
> sys     0m3.422s
>
> Intel:
> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 10
> number of transactions per client: 3000
> number of transactions actually processed: 30000/30000
> tps = 597.067503 (including connections establishing)
> tps = 597.606169 (excluding connections establishing)
> real    0m50.380s
> user    0m2.621s
> sys     0m7.818s
>
> Thanks,
> Anjan
>
>
>     -----Original Message-----
>     From: Anjan Dave
>     Sent: Wed 12/7/2005 10:54 AM
>     To: Tom Lane
>     Cc: Vivek Khera; Postgresql Performance
>     Subject: Re: [PERFORM] High context switches occurring
>
>
>
>     Thanks for your inputs, Tom. I was going after high concurrent
clients,
>     but should have read this carefully -
>
>     -s scaling_factor
>                     this should be used with -i (initialize) option.
>                     number of tuples generated will be multiple of
the
>                     scaling factor. For example, -s 100 will imply
10M
>                     (10,000,000) tuples in the accounts table.
>                     default is 1.  NOTE: scaling factor should be at
least
>                     as large as the largest number of clients you
intend
>                     to test; else you'll mostly be measuring update
>     contention.
>
>     I'll rerun the tests.
>
>     Thanks,
>     Anjan
>
>
>     -----Original Message-----
>     From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
>     Sent: Tuesday, December 06, 2005 6:45 PM
>     To: Anjan Dave
>     Cc: Vivek Khera; Postgresql Performance
>     Subject: Re: [PERFORM] High context switches occurring
>
>     "Anjan Dave" <adave@vantage.com> writes:
>     > -bash-3.00$ time pgbench -c 1000 -t 30 pgbench
>     > starting vacuum...end.
>     > transaction type: TPC-B (sort of)
>     > scaling factor: 1
>     > number of clients: 1000
>     > number of transactions per client: 30
>     > number of transactions actually processed: 30000/30000
>     > tps = 45.871234 (including connections establishing)
>     > tps = 46.092629 (excluding connections establishing)
>
>     I can hardly think of a worse way to run pgbench :-(.  These
numbers are
>     about meaningless, for two reasons:
>
>     1. You don't want number of clients (-c) much higher than
scaling factor
>     (-s in the initialization step).  The number of rows in the
"branches"
>     table will equal -s, and since every transaction updates one
>     randomly-chosen "branches" row, you will be measuring mostly
row-update
>     contention overhead if there's more concurrent transactions than
there
>     are rows.  In the case -s 1, which is what you've got here,
there is no
>     actual concurrency at all --- all the transactions stack up on
the
>     single branches row.
>
>     2. Running a small number of transactions per client means that
>     startup/shutdown transients overwhelm the steady-state data.
You should
>     probably run at least a thousand transactions per client if you
want
>     repeatable numbers.
>
>     Try something like "-s 10 -c 10 -t 3000" to get numbers
reflecting test
>     conditions more like what the TPC council had in mind when they
designed
>     this benchmark.  I tend to repeat such a test 3 times to see if
the
>     numbers are repeatable, and quote the middle TPS number as long
as
>     they're not too far apart.
>
>                             regards, tom lane
>
>
>     ---------------------------(end of
broadcast)---------------------------
>     TIP 5: don't forget to increase your free space map settings
>
>
> ---------------------------(end of
broadcast)---------------------------
> TIP 1: if posting/reading through Usenet, please send an appropriate
>        subscribe-nomail command to majordomo@postgresql.org so that
your
>        message can get through to the mailing list cleanly

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org