Thread: can't handle large number of INSERT/UPDATEs

can't handle large number of INSERT/UPDATEs

From
"Anjan Dave"
Date:

Hi,

 

I am dealing with an app here that uses pg to handle a few thousand concurrent web users. It seems that under heavy load, the INSERT and UPDATE statements to one or two specific tables keep queuing up, to the count of 150+ (one table has about 432K rows, the other has about 2.6 million rows), resulting in 'wait's for other queries, and then everything piles up, with the load average shooting up to 10+.

 

We (development) have gone through the queries/explain analyzes and made sure the appropriate indexes exist among other efforts put in.

 

I would like to know if there is anything that can be changed for the better from the systems perspective. Here's what I have done and some recent changes from the system side:

 

-Upgraded from 7.4.0 to 7.4.1 some time ago

-Upgraded from RH8 to RHEL 3.0

-The settings from postgresql.conf (carried over, basically) are:

            shared_buffers = 10240 (80MB)

            max_connections = 400

            sort_mem = 1024

            effective_cache_size = 262144 (2GB)

            checkpoint_segments = 15

            stats_start_collector = true

            stats_command_string = true

Everything else is at default.

 

In /etc/sysctl.conf (512MB shared mem)

kernel.shmall = 536870912

kernel.shmmax = 536870912
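As a quick sanity check on the parenthetical sizes above, PostgreSQL 7.4 counts shared_buffers and effective_cache_size in 8 KB pages, so the conversions can be verified arithmetically. A minimal sketch (variable names are mine, not config syntax):

```python
# Sketch: verify the parenthetical sizes in the settings above.
# PostgreSQL 7.4 counts shared_buffers and effective_cache_size in 8 KB pages.
PAGE = 8 * 1024

shared_buffers_bytes = 10240 * PAGE      # 80 MB
effective_cache_bytes = 262144 * PAGE    # 2 GB
shmmax = 536870912                       # 512 MB, from /etc/sysctl.conf

print(shared_buffers_bytes // (1024 * 1024))  # 80
print(effective_cache_bytes // (1024 ** 3))   # 2
print(shared_buffers_bytes < shmmax)          # True: well under the SHM cap
```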

 

-This is a new Dell 6650 (quad XEON 2.2GHz, 8GB RAM, Internal HW RAID10), RHEL 3.0 (2.4.21-20.ELsmp), PG 7.4.1

-Vacuum Full run every day

-contrib/Reindex run every day

-Disabled HT in BIOS

 

I would greatly appreciate any helpful ideas.

 

Thanks in advance,

 

Anjan

Re: can't handle large number of INSERT/UPDATEs

From
Rod Taylor
Date:
On Mon, 2004-10-25 at 16:53, Anjan Dave wrote:
> Hi,
>
>
>
> I am dealing with an app here that uses pg to handle a few thousand
> concurrent web users. It seems that under heavy load, the INSERT and
> UPDATE statements to one or two specific tables keep queuing up, to
> the count of 150+ (one table has about 432K rows, other has about
> 2.6Million rows), resulting in ‘wait’s for other queries, and then

This isn't an index issue, it's a locking issue. Sounds like you have a
bunch of inserts and updates hitting the same rows over and over again.

Eliminate that contention point, and you will have solved your problem.

Feel free to describe the processes involved, and we can help you do
that.



Re: can't handle large number of INSERT/UPDATEs

From
Dustin Sallings
Date:
On Oct 25, 2004, at 13:53, Anjan Dave wrote:

> I am dealing with an app here that uses pg to handle a few thousand
> concurrent web users. It seems that under heavy load, the INSERT and
> UPDATE statements to one or two specific tables keep queuing up, to
> the count of 150+ (one table has about 432K rows, other has about
> 2.6Million rows), resulting in ‘wait’s for other queries, and then
> everything piles up, with the load average shooting up to 10+.

    It depends on your requirements and all that, but I had a similar issue
in one of my applications and made the problem disappear entirely by
serializing the transactions into a separate thread (actually, a thread
pool) responsible for performing these transactions.  This reduced the
load on both the application server and the DB server.

    Not a direct answer to your question, but I've found that a lot of
times when someone has trouble scaling a database application, much of
the performance win can be in trying to be a little smarter about how
and when the database is accessed.
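The write-serializing approach described above can be sketched with a single worker thread pulling statements off a queue; the statements and the execute step here are hypothetical stand-ins, not code from the original post:

```python
import queue
import threading

# Sketch: funnel all writes through one worker so they never contend
# inside the database. appending to `applied` stands in for actually
# executing the statement against the DB.
def writer(q: queue.Queue, applied: list) -> None:
    while True:
        stmt = q.get()
        if stmt is None:      # sentinel tells the worker to shut down
            break
        applied.append(stmt)  # a real app would execute the statement here

q: queue.Queue = queue.Queue()
applied: list = []
worker = threading.Thread(target=writer, args=(q, applied))
worker.start()

for i in range(3):
    q.put(f"UPDATE counters SET n = n + 1 WHERE id = {i}")
q.put(None)  # shut the worker down once the queue drains
worker.join()
print(len(applied))  # 3: every write ran, strictly one at a time
```

A real thread pool would add a bounded queue and several workers partitioned by table or key, but the contention-avoidance idea is the same.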

--
SPY                      My girlfriend asked me which one I like better.
pub  1024/3CAE01D5 1994/11/03 Dustin Sallings <dustin@spy.net>
|    Key fingerprint =  87 02 57 08 02 D0 DA D6  C8 0F 3E 65 51 98 D8 BE
L_______________________ I hope the answer won't upset her. ____________


Re: can't handle large number of INSERT/UPDATEs

From
"Anjan Dave"
Date:
It probably is a locking issue. I got a long list of locks held when we ran select * from pg_locks during a peak time.

relation | database | transaction |  pid  |       mode       | granted
----------+----------+-------------+-------+------------------+---------
    17239 |    17142 |             |  3856 | AccessShareLock  | t
          |          |    21196323 |  3875 | ExclusiveLock    | t
    16390 |    17142 |             |  3911 | AccessShareLock  | t
    16595 |    17142 |             |  3782 | AccessShareLock  | t
    17227 |    17142 |             |  3840 | AccessShareLock  | t
    17227 |    17142 |             |  3840 | RowExclusiveLock | t
...
...
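The question this listing raises is how many entries are ungranted, which amounts to filtering on the granted column. A toy sketch (rows transcribed from the sample above; on a live system you would query pg_locks directly):

```python
# Sketch: the interesting locks are the ungranted ones. Rows transcribed
# from the sample output above; a live system would query pg_locks instead.
rows = [
    {"relation": 17239, "pid": 3856, "mode": "AccessShareLock",  "granted": True},
    {"relation": None,  "pid": 3875, "mode": "ExclusiveLock",    "granted": True},
    {"relation": 16390, "pid": 3911, "mode": "AccessShareLock",  "granted": True},
    {"relation": 17227, "pid": 3840, "mode": "RowExclusiveLock", "granted": True},
]
waiting = [r for r in rows if not r["granted"]]
print(len(waiting))  # 0 -> plenty of locks held, but nobody actually blocked
```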


Vmstat would show a lot of disk IO at the same time.

Is this pointing towards a disk IO issue? (To that end, other than a higher CPU speed and disabling HT, the only thing
changed is that it's a RAID5 volume now, instead of a RAID10.)

-anjan


-----Original Message-----
From: Rod Taylor [mailto:pg@rbt.ca]
Sent: Monday, October 25, 2004 5:19 PM
To: Anjan Dave
Cc: Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs

On Mon, 2004-10-25 at 16:53, Anjan Dave wrote:
> Hi,
>
>
>
> I am dealing with an app here that uses pg to handle a few thousand
> concurrent web users. It seems that under heavy load, the INSERT and
> UPDATE statements to one or two specific tables keep queuing up, to
> the count of 150+ (one table has about 432K rows, other has about
> 2.6Million rows), resulting in ‘wait’s for other queries, and then

This isn't an index issue, it's a locking issue. Sounds like you have a
bunch of inserts and updates hitting the same rows over and over again.

Eliminate that contention point, and you will have solved your problem.

Feel free to describe the processes involved, and we can help you do
that.




Re: can't handle large number of INSERT/UPDATEs

From
Rod Taylor
Date:
On Tue, 2004-10-26 at 13:42, Anjan Dave wrote:
> It probably is locking issue. I got a long list of locks held when we ran select * from pg_locks during a peak time.
>
> relation | database | transaction |  pid  |       mode       | granted
> ----------+----------+-------------+-------+------------------+---------
>     17239 |    17142 |             |  3856 | AccessShareLock  | t

How many have granted = false?

> Vmstat would show a lot of disk IO at the same time.
>
> Is this pointing towards a disk IO issue?

Not necessarily. Is your IO reaching the limit or is it just heavy?


Re: can't handle large number of INSERT/UPDATEs

From
"Anjan Dave"
Date:
None of the locks are in state false actually.

I don't have iostat on that machine, but vmstat shows a lot of writes to
the drives, and the runnable processes are more than 1:

procs                      memory      swap          io     system         cpu
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy wa id
 1  2      0 3857568 292936 2791876    0    0     0 44460 1264  2997 23 13 22 41
 2  2      0 3824668 292936 2791884    0    0     0 25262 1113  4797 28 12 29 31
 2  3      0 3784772 292936 2791896    0    0     0 38988 1468  6677 28 12 48 12
 2  4      0 3736256 292936 2791904    0    0     0 50970 1530  5217 19 12 49 20
 4  2      0 3698056 292936 2791908    0    0     0 43576 1369  7316 20 15 35 30
 2  1      0 3667124 292936 2791920    0    0     0 39174 1444  4659 25 16 35 24
 6  1      0 3617652 292936 2791928    0    0     0 52430 1347  4681 25 19 20 37
 1  3      0 3599992 292936 2790868    0    0     0 40156 1439  4394 20 14 29 37
 6  0      0 3797488 292936 2568648    0    0     0 17706 2272 21534 28 23 19 30
 0  0      0 3785396 292936 2568736    0    0     0  1156 1237 14057 33  8  0 59
 0  0      0 3783568 292936 2568736    0    0     0   704  512  1537  5  2  1 92
 1  0      0 3783188 292936 2568752    0    0     0   842  613  1919  6  1  1 92
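For scale, the bo column can be converted to a throughput figure; vmstat on these 2.4-era kernels reports io in 1 KiB blocks per second (an assumption worth confirming against your procps version):

```python
# Sketch: convert the 'bo' column above to MB/s, assuming 1 KiB blocks.
bo_samples = [44460, 25262, 38988, 50970, 43576, 39174, 52430, 40156, 17706]
mb_per_s = [bo / 1024 for bo in bo_samples]
print(round(max(mb_per_s)))  # 51 -> roughly 50 MB/s of sustained writes at peak
```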

-anjan

-----Original Message-----
From: Rod Taylor [mailto:pg@rbt.ca]
Sent: Tuesday, October 26, 2004 1:49 PM
To: Anjan Dave
Cc: Postgresql Performance
Subject: RE: [PERFORM] can't handle large number of INSERT/UPDATEs

On Tue, 2004-10-26 at 13:42, Anjan Dave wrote:
> It probably is locking issue. I got a long list of locks held when we
> ran select * from pg_locks during a peak time.
>
> relation | database | transaction |  pid  |       mode       | granted
> ----------+----------+-------------+-------+------------------+---------
>     17239 |    17142 |             |  3856 | AccessShareLock  | t

How many have granted = false?

> Vmstat would show a lot of disk IO at the same time.
>
> Is this pointing towards a disk IO issue?

Not necessarily. Is your IO reaching the limit or is it just heavy?



Re: can't handle large number of INSERT/UPDATEs

From
Josh Berkus
Date:
Anjan,

> It probably is locking issue. I got a long list of locks held when we ran
> select * from pg_locks during a peak time.

Do the back-loaded tables have FKs on them? This would be a likely cause
of lock contention, and thus of serialized inserts/updates to the tables.

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

Re: can't handle large number of INSERT/UPDATEs

From
Matt Clark
Date:
>I don't have iostat on that machine, but vmstat shows a lot of writes to
>the drives, and the runnable processes are more than 1:
>
> 6  1      0 3617652 292936 2791928    0    0     0 52430 1347  4681 25
>19 20 37
>
>
Assuming that's the output of 'vmstat 1' and not some other delay,
50MB/second of sustained writes is usually considered 'a lot'.

Re: can't handle large number of INSERT/UPDATEs

From
Andrew McMillan
Date:
On Mon, 2004-10-25 at 16:53 -0400, Anjan Dave wrote:
> Hi,
>
>
>
> I am dealing with an app here that uses pg to handle a few thousand
> concurrent web users. It seems that under heavy load, the INSERT and
> UPDATE statements to one or two specific tables keep queuing up, to
> the count of 150+ (one table has about 432K rows, other has about
> 2.6Million rows), resulting in ‘wait’s for other queries, and then
> everything piles up, with the load average shooting up to 10+.

Hi,

We saw a similar problem here that was related to the locking that can
happen against referred tables for referential integrity.

In our case we had referred tables with very few rows (i.e. < 10) which
caused the insert and update on the large tables to be effectively
serialised due to the high contention on the referred tables.

We changed our app to implement those referential integrity checks
differently and performance was hugely boosted.

Regards,
                    Andrew.
-------------------------------------------------------------------------
Andrew @ Catalyst .Net .NZ  Ltd,  PO Box 11-053, Manners St,  Wellington
WEB: http://catalyst.net.nz/            PHYS: Level 2, 150-154 Willis St
DDI: +64(4)803-2201      MOB: +64(272)DEBIAN      OFFICE: +64(4)499-2267
                        Chicken Little was right.
-------------------------------------------------------------------------



Re: can't handle large number of INSERT/UPDATEs

From
"Anjan Dave"
Date:
Andrew/Josh,

Josh also suggested to check for any FK/referential integrity checks,
but I am told that we don't have any foreign key constraints.

Thanks,
anjan

-----Original Message-----
From: Andrew McMillan [mailto:andrew@catalyst.net.nz]
Sent: Tuesday, October 26, 2004 4:51 PM
To: Anjan Dave
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs

On Mon, 2004-10-25 at 16:53 -0400, Anjan Dave wrote:
> Hi,
>
>
>
> I am dealing with an app here that uses pg to handle a few thousand
> concurrent web users. It seems that under heavy load, the INSERT and
> UPDATE statements to one or two specific tables keep queuing up, to
> the count of 150+ (one table has about 432K rows, other has about
> 2.6Million rows), resulting in 'wait's for other queries, and then
> everything piles up, with the load average shooting up to 10+.

Hi,

We saw a similar problem here that was related to the locking that can
happen against referred tables for referential integrity.

In our case we had referred tables with very few rows (i.e. < 10) which
caused the insert and update on the large tables to be effectively
serialised due to the high contention on the referred tables.

We changed our app to implement those referential integrity checks
differently and performance was hugely boosted.

Regards,
                    Andrew.
-------------------------------------------------------------------------
Andrew @ Catalyst .Net .NZ  Ltd,  PO Box 11-053, Manners St,  Wellington
WEB: http://catalyst.net.nz/            PHYS: Level 2, 150-154 Willis St
DDI: +64(4)803-2201      MOB: +64(272)DEBIAN      OFFICE: +64(4)499-2267
                        Chicken Little was right.
-------------------------------------------------------------------------



Re: can't handle large number of INSERT/UPDATEs

From
"Anjan Dave"
Date:
That is a 1 or maybe 2 second interval.

One thing I am not sure about is why 'bi' (disk writes) stays at 0 mostly;
it's the 'bo' column that shows high numbers (reads from disk). With so
many INSERT/UPDATEs, I would expect it the other way around...

-anjan



-----Original Message-----
From: Matt Clark [mailto:matt@ymogen.net]
Sent: Tuesday, October 26, 2004 2:29 PM
To: Anjan Dave
Cc: Rod Taylor; Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs


>I don't have iostat on that machine, but vmstat shows a lot of writes to
>the drives, and the runnable processes are more than 1:
>
> 6  1      0 3617652 292936 2791928    0    0     0 52430 1347  4681 25
>19 20 37
>
>
Assuming that's the output of 'vmstat 1' and not some other delay,
50MB/second of sustained writes is usually considered 'a lot'.


Re: can't handle large number of INSERT/UPDATEs

From
Tom Lane
Date:
"Anjan Dave" <adave@vantage.com> writes:
> None of the locks are in state false actually.

In that case you don't have a locking problem.

> I don't have iostat on that machine, but vmstat shows a lot of writes to
> the drives, and the runnable processes are more than 1:

I get the impression that you are just saturating the write bandwidth of
your disk :-(

It's fairly likely that this happens during checkpoints.  Look to see if
the postmaster has a child that shows itself as a checkpointer in "ps"
when the saturation is occurring.  You might be able to improve matters
by altering the checkpoint frequency parameters (though beware that
either too small or too large will likely make matters even worse).

            regards, tom lane

Re: can't handle large number of INSERT/UPDATEs

From
"Anjan Dave"
Date:
It just seems that the more activity there is (that is, when there's a
lot of disk activity), the quicker the checkpoints happen too.

Here's a snapshot from the /var/log/messages -

Oct 26 17:21:22 vl-pe6650-003 postgres[13978]: [2-1] LOG:  recycled
transaction
log file "0000000B0000007E"
Oct 26 17:21:22 vl-pe6650-003 postgres[13978]: [3-1] LOG:  recycled
transaction
log file "0000000B0000007F"
...
Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [2-1] LOG:  recycled
transaction
log file "0000000B00000080"
Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [3-1] LOG:  recycled
transaction
log file "0000000B00000081"
Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [4-1] LOG:  recycled
transaction
log file "0000000B00000082"
...
Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [2-1] LOG:  recycled
transaction
log file "0000000B00000083"
Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [3-1] LOG:  recycled
transaction
log file "0000000B00000084"
Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [4-1] LOG:  recycled
transaction
log file "0000000B00000085"
...

I have increased them from the default 3 to 15. Haven't altered the
frequency though....
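The timestamps in the log snippet above already show the checkpoint spacing. A small sketch to make it explicit (syslog omits the year, so 2004 is assumed here):

```python
from datetime import datetime

# Sketch: estimate checkpoint spacing from the "recycled transaction log
# file" syslog lines above. The year is assumed; syslog does not record it.
stamps = ["Oct 26 17:21:22", "Oct 26 17:26:25", "Oct 26 17:31:27"]
times = [datetime.strptime("2004 " + s, "%Y %b %d %H:%M:%S") for s in stamps]
gaps_min = [round((b - a).total_seconds() / 60) for a, b in zip(times, times[1:])]
print(gaps_min)  # [5, 5] -> checkpoints about every 5 minutes, which matches
                 # the default checkpoint_timeout of 300 seconds
```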

Thanks,
Anjan

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Tuesday, October 26, 2004 5:53 PM
To: Anjan Dave
Cc: Rod Taylor; Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs

"Anjan Dave" <adave@vantage.com> writes:
> None of the locks are in state false actually.

In that case you don't have a locking problem.

> I don't have iostat on that machine, but vmstat shows a lot of writes to
> the drives, and the runnable processes are more than 1:

I get the impression that you are just saturating the write bandwidth of
your disk :-(

It's fairly likely that this happens during checkpoints.  Look to see if
the postmaster has a child that shows itself as a checkpointer in "ps"
when the saturation is occurring.  You might be able to improve matters
by altering the checkpoint frequency parameters (though beware that
either too small or too large will likely make matters even worse).

            regards, tom lane


Re: can't handle large number of INSERT/UPDATEs

From
Tom Lane
Date:
"Anjan Dave" <adave@vantage.com> writes:
> One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly,
> it's the 'bo' column that shows high numbers (reads from disk). With so
> many INSERT/UPDATEs, I would expect it the other way around...

Er ... it *is* the other way around.  bi is blocks in (to the CPU),
bo is blocks out (from the CPU).

            regards, tom lane

Re: can't handle large number of INSERT/UPDATEs

From
"Anjan Dave"
Date:
Ok, I was thinking from the disk perspective. Thanks!

    -----Original Message----- 
    From: Tom Lane [mailto:tgl@sss.pgh.pa.us] 
    Sent: Tue 10/26/2004 6:37 PM 
    To: Anjan Dave 
    Cc: Matt Clark; Rod Taylor; Postgresql Performance 
    Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs 
    
    

    "Anjan Dave" <adave@vantage.com> writes: 
    > One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly, 
    > it's the 'bo' column that shows high numbers (reads from disk). With so 
    > many INSERT/UPDATEs, I would expect it the other way around... 

    Er ... it *is* the other way around.  bi is blocks in (to the CPU), 
    bo is blocks out (from the CPU). 

                            regards, tom lane 


Re: can't handle large number of INSERT/UPDATEs

From
Curtis Zinzilieta
Date:
On Tue, 26 Oct 2004, Tom Lane wrote:

> "Anjan Dave" <adave@vantage.com> writes:
> > One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly,
> > it's the 'bo' column that shows high numbers (reads from disk). With so
> > many INSERT/UPDATEs, I would expect it the other way around...
>
> Er ... it *is* the other way around.  bi is blocks in (to the CPU),
> bo is blocks out (from the CPU).
>
>             regards, tom lane

Ummm.....

[curtisz@labsoft T2]$ man vmstat
<snip>
FIELD DESCRIPTIONS
<snip>
   IO
       bi: Blocks sent to a block device (blocks/s).
       bo: Blocks received from a block device (blocks/s).

And on my read-heavy 7.4.2 system (running on rh8 at the moment....)
(truncated for readability...)

[root@labsoft T2]# vmstat 1
   procs                      memory    swap          io     system
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us
 0  0  0 127592  56832 365496 2013788   0   1     3     6    4     0   4
 2  0  0 127592  56868 365496 2013788   0   0     0     0  363   611   1
 1  0  0 127592  57444 365508 2013788   0   0     8   972 1556  3616  11
 0  0  1 127592  57408 365512 2013800   0   0     0   448  614  1216   5
 0  0  0 127592  56660 365512 2013800   0   0     0     0  666  1150   6
 0  3  1 127592  56680 365512 2013816   0   0    16   180 1280  2050   2
 0  0  0 127592  56864 365516 2013852   0   0    20   728 2111  4360  11
 0  0  0 127592  57952 365544 2013824   0   0     0   552 1153  2002  10
 0  0  0 127592  57276 365544 2013824   0   0     0   504  718  1111   5
 1  0  0 127592  57244 365544 2013824   0   0     0   436 1495  2366   7
 0  0  0 127592  57252 365544 2013824   0   0     0     0  618  1380   5
 0  0  0 127592  57276 365556 2014192   0   0   360  1240 2418  5056  14
 2  0  0 127592  56664 365564 2014176   0   0     0   156  658  1349   5
 1  0  0 127592  55864 365568 2014184   0   0     0  1572 1388  3598   9
 2  0  0 127592  56160 365572 2014184   0   0     0   536 4860  6621  13

Which seems appropriate for both the database and the man page....

-Curtis



Re: can't handle large number of INSERT/UPDATEs

From
Josh Berkus
Date:
Anjan,

> Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [4-1] LOG:  recycled
> transaction
> log file "0000000B00000082"
> ...
> Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [2-1] LOG:  recycled
> transaction
> log file "0000000B00000083"
> Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [3-1] LOG:  recycled
> transaction
> log file "0000000B00000084"
> Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [4-1] LOG:  recycled
> transaction
> log file "0000000B00000085"

Looks like you're running out of disk space for pending transactions.  Can you
afford more checkpoint_segments?   Have you considered checkpoint_siblings?

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

Re: can't handle large number of INSERT/UPDATEs

From
"Anjan Dave"
Date:
Josh,
 
I have increased them to 30, will see if that helps. Space is not a concern. A slightly longer recovery time could be
fine too. Wonder what people use (examples) for this value for high volume databases (except for dump/restore)...?
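For the disk-space side of that question, the 7.4 documentation's rule of thumb is that pg_xlog normally holds no more than (2 × checkpoint_segments + 1) WAL files of 16 MB each, which is easy to work out:

```python
# Sketch: rough upper bound on pg_xlog disk use, per the 7.4 docs' formula
# of (2 * checkpoint_segments + 1) WAL files of 16 MB each.
def wal_disk_mb(checkpoint_segments: int, segment_mb: int = 16) -> int:
    return (2 * checkpoint_segments + 1) * segment_mb

print(wal_disk_mb(15))  # 496 MB at the earlier setting
print(wal_disk_mb(30))  # 976 MB after raising it to 30
```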
 
 
I don't know what checkpoint_sibling is. I'll read about it if there's some info on it somewhere.
 
Thanks,
Anjan
 
-----Original Message----- 
From: Josh Berkus [mailto:josh@agliodbs.com] 
Sent: Tue 10/26/2004 8:42 PM 
To: pgsql-performance@postgresql.org 
Cc: Anjan Dave; Tom Lane; Rod Taylor 
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs



    Anjan, 

    > Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [4-1] LOG:  recycled 
    > transaction 
    > log file "0000000B00000082" 
    > ... 
    > Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [2-1] LOG:  recycled 
    > transaction 
    > log file "0000000B00000083" 
    > Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [3-1] LOG:  recycled 
    > transaction 
    > log file "0000000B00000084" 
    > Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [4-1] LOG:  recycled 
    > transaction 
    > log file "0000000B00000085" 

    Looks like you're running out of disk space for pending transactions.  Can you 
    afford more checkpoint_segments?   Have you considered checkpoint_siblings? 

    -- 
    --Josh 

    Josh Berkus 
    Aglio Database Solutions 
    San Francisco 


Re: can't handle large number of INSERT/UPDATEs

From
Tom Lane
Date:
Curtis Zinzilieta <curtisz@norchemlab.com> writes:
> On Tue, 26 Oct 2004, Tom Lane wrote:
>> Er ... it *is* the other way around.  bi is blocks in (to the CPU),
>> bo is blocks out (from the CPU).

> Ummm.....
> [curtisz@labsoft T2]$ man vmstat
>        bi: Blocks sent to a block device (blocks/s).
>        bo: Blocks received from a block device (blocks/s).

You might want to have a word with your OS vendor.  My vmstat
man page says

   IO
       bi: Blocks received from a block device (blocks/s).
       bo: Blocks sent to a block device (blocks/s).

and certainly anyone who's been around a computer more than a week or
two knows which direction "in" and "out" are customarily seen from.

            regards, tom lane

Re: can't handle large number of INSERT/UPDATEs

From
John Meinel
Date:
Tom Lane wrote:
> Curtis Zinzilieta <curtisz@norchemlab.com> writes:
>
>>On Tue, 26 Oct 2004, Tom Lane wrote:
>>
>>>Er ... it *is* the other way around.  bi is blocks in (to the CPU),
>>>bo is blocks out (from the CPU).
>
>
>>Ummm.....
>>[curtisz@labsoft T2]$ man vmstat
>>       bi: Blocks sent to a block device (blocks/s).
>>       bo: Blocks received from a block device (blocks/s).
>
>
> You might want to have a word with your OS vendor.  My vmstat
> man page says
>
>    IO
>        bi: Blocks received from a block device (blocks/s).
>        bo: Blocks sent to a block device (blocks/s).
>
> and certainly anyone who's been around a computer more than a week or
> two knows which direction "in" and "out" are customarily seen from.
>
>             regards, tom lane
>

Interesting. I checked this on several machines. They actually say
different things.

Redhat 9- bi: Blocks sent to a block device (blocks/s).
Latest Cygwin- bi: Blocks sent to a block device (blocks/s).
Redhat 7.x- bi: Blocks sent to a block device (blocks/s).
Redhat AS3- bi: blocks sent out to a block device (in blocks/s)

I would say that I probably agree: things should be relative to the CPU.
However, it doesn't seem to be something that was universally agreed
upon. Or maybe the man pages were all wrong, and only got updated recently.

John
=:->




Re: can't handle large number of INSERT/UPDATEs

From
"Iain"
Date:
Turbolinux 7 seems to agree with Curtis:

bi: ブロックデバイスに送られたブロック (blocks/s)。
bo: ブロックデバイスから受け取ったブロック (blocks/s)。

Sorry it's in Japanese, but bi says "blocks sent to a block device" and bo is
"blocks received from a block device".

I don't know that much about it, but the actual output seems to suggest that
the man page is wrong. I find it just the slightest bit amusing that such
errors in the docs should be translated faithfully, when translation
invariably introduces errors of its own ;)

Regards
Iain


----- Original Message -----
From: "Tom Lane" <tgl@sss.pgh.pa.us>
To: "Curtis Zinzilieta" <curtisz@norchemlab.com>
Cc: "Anjan Dave" <adave@vantage.com>; "Matt Clark" <matt@ymogen.net>; "Rod
Taylor" <pg@rbt.ca>; "Postgresql Performance"
<pgsql-performance@postgresql.org>
Sent: Wednesday, October 27, 2004 12:21 PM
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs


> Curtis Zinzilieta <curtisz@norchemlab.com> writes:
>> On Tue, 26 Oct 2004, Tom Lane wrote:
>>> Er ... it *is* the other way around.  bi is blocks in (to the CPU),
>>> bo is blocks out (from the CPU).
>
>> Ummm.....
>> [curtisz@labsoft T2]$ man vmstat
>>        bi: Blocks sent to a block device (blocks/s).
>>        bo: Blocks received from a block device (blocks/s).
>
> You might want to have a word with your OS vendor.  My vmstat
> man page says
>
>   IO
>       bi: Blocks received from a block device (blocks/s).
>       bo: Blocks sent to a block device (blocks/s).
>
> and certainly anyone who's been around a computer more than a week or
> two knows which direction "in" and "out" are customarily seen from.
>
> regards, tom lane
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster


Re: can't handle large number of INSERT/UPDATEs

From
Matt Clark
Date:
>> and certainly anyone who's been around a computer more than a week or
>> two knows which direction "in" and "out" are customarily seen from.
>>
>>             regards, tom lane
>>
>
Apparently not whoever wrote the man page that everyone copied ;-)

> Interesting. I checked this on several machines. They actually say
> different things.
>
> Redhat 9- bi: Blocks sent to a block device (blocks/s).
> Latest Cygwin- bi: Blocks sent to a block device (blocks/s).
> Redhat 7.x- bi: Blocks sent to a block device (blocks/s).
> Redhat AS3- bi: blocks sent out to a block device (in blocks/s)
>
> I would say that I probably agree, things should be relative to the
> cpu. However, it doesn't seem to be something that was universally
> agreed upon. Or maybe the man-pages were all wrong, and only got
> updated recently.
>
Looks like the man pages are wrong, for RH7.3 at least.  It says bi is
'blocks written', but an actual test like 'dd if=/dev/zero of=/tmp/test
bs=1024 count=16384' on an otherwise nearly idle RH7.3 box gives:
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 0  0  0  75936 474704 230452 953580   0   0     0     0  106  2527   0   0  99
 0  0  0  75936 474704 230452 953580   0   0     0 16512  376  2572   0   2  98
 0  0  0  75936 474704 230452 953580   0   0     0     0  105  2537   0   0 100

Which is in line with bo being 'blocks written'.
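The numbers even reconcile almost exactly: dd wrote 16384 one-KiB blocks, and the peak bo sample was 16512, with the small difference presumably being filesystem metadata:

```python
# Sketch: reconcile the dd test above with the bo column. dd wrote 16384
# blocks of 1 KiB; vmstat's peak bo sample was 16512 blocks.
written_blocks = 16384   # dd bs=1024 count=16384
observed_bo = 16512      # peak bo sample in the vmstat output
print(observed_bo - written_blocks)  # 128 blocks of (presumed) fs overhead
```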

M

Re: can't handle large number of INSERT/UPDATEs

From
Markus Bertheau
Date:
Turns out the man page of vmstat in procps was changed on Oct 8 2002:

http://cvs.sourceforge.net/viewcvs.py/procps/procps/vmstat.8?r1=1.1&r2=1.2

in reaction to a debian bug report:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=157935

--
Markus Bertheau <twanger@bluetwanger.de>