Re: High context switches occurring - Mailing list pgsql-performance

From: Anjan Dave
Subject: Re: High context switches occurring
Msg-id: 4BAFBB6B9CC46F41B2AD7D9F4BBAF785098F16@vt-pe2550-001.vantage.vantage.com
In response to: High context switches occurring  ("Anjan Dave" <adave@vantage.com>)
List: pgsql-performance
The Sun box is a 4-CPU (8-core) v40z; the Dell is a 6850 with four
Xeons (8 cores). Both have 16GB RAM and two internal drives: one holds
the OS plus the data, and the second holds pg_xlog.

RedHat AS 4.0 U2 64-bit on both servers, with PostgreSQL 8.1 from
64-bit RPMs.
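
(For reference, since pg_xlog lives on the second drive here: the
usual way to arrange that is to move the directory and leave a symlink
behind, with the server stopped. A sketch only; the paths below are
assumptions, not the actual ones on these boxes:)

    # stop the postmaster first, then relocate WAL and symlink it back
    mv /var/lib/pgsql/data/pg_xlog /disk2/pg_xlog
    ln -s /disk2/pg_xlog /var/lib/pgsql/data/pg_xlog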

Thanks,
Anjan



-----Original Message-----
From: Juan Casero [mailto:caseroj@comcast.net]
Sent: Monday, December 19, 2005 11:17 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] High context switches occurring

Guys -

Help me out here as I try to understand this benchmark.  What are the
Sun hardware and operating system we are talking about here, and what
are the Intel hardware and operating system?  What was the Sun version
of PostgreSQL compiled with?  Gcc on Solaris (assuming SPARC) or Sun
Studio?  What was PostgreSQL compiled with on Intel?  Gcc on Linux?

Thanks,
Juan
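
(One way to answer the compiler questions on either box, sketched with
standard tools; the pgbench database name is an assumption:)

    pg_config --configure                    # configure flags the server was built with
    psql -d pgbench -c "SELECT version();"   # reports PostgreSQL version and compiler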

On Monday 19 December 2005 21:08, Anjan Dave wrote:
> Re-ran it 3 times on each host -
>
> Sun:
> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 10
> number of transactions per client: 3000
> number of transactions actually processed: 30000/30000
> tps = 827.810778 (including connections establishing)
> tps = 828.410801 (excluding connections establishing)
> real    0m36.579s
> user    0m1.222s
> sys     0m3.422s
>
> Intel:
> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 10
> number of transactions per client: 3000
> number of transactions actually processed: 30000/30000
> tps = 597.067503 (including connections establishing)
> tps = 597.606169 (excluding connections establishing)
> real    0m50.380s
> user    0m2.621s
> sys     0m7.818s
>
> Thanks,
> Anjan
>
>
>     -----Original Message-----
>     From: Anjan Dave
>     Sent: Wed 12/7/2005 10:54 AM
>     To: Tom Lane
>     Cc: Vivek Khera; Postgresql Performance
>     Subject: Re: [PERFORM] High context switches occurring
>
>
>
>     Thanks for your input, Tom. I was going after high concurrent
>     clients, but should have read this carefully -
>
>     -s scaling_factor
>                     this should be used with -i (initialize) option.
>                     number of tuples generated will be multiple of the
>                     scaling factor. For example, -s 100 will imply 10M
>                     (10,000,000) tuples in the accounts table.
>                     default is 1.  NOTE: scaling factor should be at
>                     least as large as the largest number of clients you
>                     intend to test; else you'll mostly be measuring
>                     update contention.
>
>     I'll rerun the tests.
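
(What that rerun would look like, sketched: re-initialize at a scale
matching the client count, then benchmark. The database name pgbench
is taken from the commands above:)

    pgbench -i -s 10 pgbench               # rebuild tables at scaling factor 10
    pgbench -s 10 -c 10 -t 3000 pgbench    # 10 clients, 3000 transactions each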
>
>     Thanks,
>     Anjan
>
>
>     -----Original Message-----
>     From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
>     Sent: Tuesday, December 06, 2005 6:45 PM
>     To: Anjan Dave
>     Cc: Vivek Khera; Postgresql Performance
>     Subject: Re: [PERFORM] High context switches occurring
>
>     "Anjan Dave" <adave@vantage.com> writes:
>     > -bash-3.00$ time pgbench -c 1000 -t 30 pgbench
>     > starting vacuum...end.
>     > transaction type: TPC-B (sort of)
>     > scaling factor: 1
>     > number of clients: 1000
>     > number of transactions per client: 30
>     > number of transactions actually processed: 30000/30000
>     > tps = 45.871234 (including connections establishing)
>     > tps = 46.092629 (excluding connections establishing)
>
>     I can hardly think of a worse way to run pgbench :-(.  These
>     numbers are about meaningless, for two reasons:
>
>     1. You don't want number of clients (-c) much higher than the
>     scaling factor (-s in the initialization step).  The number of
>     rows in the "branches" table will equal -s, and since every
>     transaction updates one randomly-chosen "branches" row, you will
>     be measuring mostly row-update contention overhead if there are
>     more concurrent transactions than there are rows.  In the case
>     -s 1, which is what you've got here, there is no actual
>     concurrency at all --- all the transactions stack up on the
>     single branches row.
>
>     2. Running a small number of transactions per client means that
>     startup/shutdown transients overwhelm the steady-state data.  You
>     should probably run at least a thousand transactions per client
>     if you want repeatable numbers.
>
>     Try something like "-s 10 -c 10 -t 3000" to get numbers
>     reflecting test conditions more like what the TPC council had in
>     mind when they designed this benchmark.  I tend to repeat such a
>     test 3 times to see if the numbers are repeatable, and quote the
>     middle TPS number as long as they're not too far apart.
>
>                             regards, tom lane
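
(Tom's repeat-three-times procedure as a minimal bash sketch, assuming
the database is named pgbench:)

    for i in 1 2 3; do
        pgbench -s 10 -c 10 -t 3000 pgbench
    done
    # report the middle of the three tps figures if they're close together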
