Thread: vacuumdb -z do a reindex?

vacuumdb -z do a reindex?

From
Irene Barg
Date:
I thought 'vacuumdb -z dbname' also did a reindex. Is this true?

I've had a simple update running for over 4 hours now (see results from
pg_top below). The sql is:

The database has 1016789 records, and vacuumdb -z is run once a day. I have
not run 'reindexdb' in weeks. The system is a:

2xIntel 4-core Xeon Model E5430 (Harpertown) 2.66GHz, 32GB RAM and
8x145GB SAS drives configured with software RAID10

Your comments are appreciated.
--irene

>> last pid:  1185;  load avg:  2.17,  2.21,  1.60;       up 38+01:36:40  13:52:27
>> 14 processes: 2 running, 12 sleeping
>> CPU states: 14.0% user,  0.0% nice, 10.5% system, 75.4% idle,  0.0% iowait
>> Memory: 11G used, 20G free, 456M buffers, 8724M cached
>> Swap:
>>
>>   PID USERNAME PRI NICE  SIZE   RES STATE   TIME   WCPU    CPU COMMAND
>> 28508 postgres  17    0   93M   38M run   265:53 58.42% 99.08% postgres: postgres metadata 140.252.26.34(34717) UPDATE
>> 31609 postgres  16    0   91M   36M run     7:05 57.85% 98.09% postgres: system_admin metadata 140.252.26.34(43303) SELECT
>> 25156 postgres  16    0  102M   46M sleep   7:28  0.00%  0.00% postgres: system_admin metadata 140.252.6.51(40350) idle
>> 25363 postgres  18    0   93M   37M sleep   5:08  0.00%  0.00% postgres: system_admin metadata 140.252.6.51(35951) idle
>> 31622 postgres  15    0   95M   38M sleep   1:45  0.00%  0.00% postgres: system_admin metadata 140.252.6.51(51917) idle
>> 31624 postgres  15    0   95M   38M sleep   0:14  0.00%  0.00% postgres: system_admin metadata 140.252.6.51(53908) idle
>> 28755 postgres  15    0   91M   10M sleep   0:02  0.00%  0.00% postgres: postgres keyword 140.252.26.33(41270) idle
>> 28757 postgres  15    0   91M   10M sleep   0:02  0.00%  0.00% postgres: postgres keyword 140.252.26.33(41272) idle
>> 28756 postgres  15    0   91M   10M sleep   0:02  0.00%  0.00% postgres: postgres keyword 140.252.26.33(41271) idle
>> 28758 postgres  15    0   91M   10M sleep   0:02  0.00%  0.00% postgres: postgres keyword 140.252.26.33(41273) idle
>> 28754 postgres  15    0   92M 9724K sleep   0:02  0.00%  0.00% postgres: postgres keyword 140.252.26.33(41269) idle
>> 25180 postgres  15    0   91M 7016K sleep   0:00  0.00%  0.00% postgres: postgres metadata 140.252.6.51(33997) idle
>> 25179 postgres  15    0   91M 6956K sleep   0:00  0.00%  0.00% postgres: postgres metadata 140.252.6.51(47331) idle
>>  1186 postgres  16    0   90M 4808K sleep   0:00  0.00%  0.00% postgres: arcsoft metadata [local] idle
>> [arcsoft@archdbn1 ~]$ date
>> Fri Nov 27 13:53:28 MST 2009

--
---------------------------------------------------------------------
Irene Barg                    Email:  ibarg@noao.edu
NOAO/AURA Inc.                 http://www.noao.edu/noao/staff/ibarg
950 N. Cherry Ave.            Voice:  520-318-8273
Tucson, AZ  85726 USA           FAX:  520-318-8360
---------------------------------------------------------------------

Re: vacuumdb -z do a reindex?

From
Guillaume Lelarge
Date:
On Friday, 27 November 2009 at 22:17:50, Irene Barg wrote:
> I thought 'vacuumdb -z dbname' also did a reindex. Is this true?
>

No. vacuumdb -z is a VACUUM ANALYZE. Moreover, vacuumdb has no option to do a
REINDEX.
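
For example, the two operations have to be issued separately (a rough sketch; 'metadata' stands in for your database name):

    $ vacuumdb -z metadata      # issues: VACUUM ANALYZE;
    $ reindexdb metadata        # issues REINDEX for the database; vacuumdb cannot do this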


--
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com

Re: vacuumdb -z do a reindex?

From
Scott Marlowe
Date:
On Fri, Nov 27, 2009 at 2:17 PM, Irene Barg <ibarg@noao.edu> wrote:
> I've had a simple update running for over 4 hours now (see results from
> pg_top below). The sql is:

Have you looked in pg_locks and pg_stat_activity?
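
For instance, a query along these lines shows any ungranted locks (pg_locks column names as on the 8.x releases of the time):

    SELECT locktype, relation, transactionid, mode, pid
      FROM pg_locks
     WHERE NOT granted;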

> The database has 1016789 records, and vacuumdb -z is run once a day. I have not
> run 'reindexdb' in weeks. The system is a:
>
> 2xIntel 4-core Xeon Model E5430 (Harpertown) 2.66GHz, 32GB RAM and 8x145GB
> SAS drives configured with software RAID10

So do you have autovacuum disabled? What pg version are you running?
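
For example, both can be checked quickly from psql (assuming the 'metadata' database from your pg_top output; SHOW autovacuum exists on 8.1 and later):

    $ psql metadata -c "SELECT version();"
    $ psql metadata -c "SHOW autovacuum;"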

An 8-drive RAID array is usually pretty fast, unless it's on a bad
RAID controller or something.  What do "vmstat 10" and "iostat -x 10"
say about your I/O activity?





--
When fascism comes to America, it will be intolerance sold as diversity.

Re: vacuumdb -z do a reindex?

From
Irene Barg
Date:
Hi Scott,

Scott Marlowe wrote:
> On Fri, Nov 27, 2009 at 2:17 PM, Irene Barg <ibarg@noao.edu> wrote:
>> I've had a simple update running for over 4 hours now (see results from
>> pg_top below). The sql is:
>
> Have you looked in pg_locks and pg_stat_activity?

Yes, I did look at pg_stat_activity and did not see anything alarming.
What would have been indicators of something bad? The runtime was the
only alarming thing I saw.

>
>> The database has 1016789 records, and vacuumdb -z is run once a day. I have not
>> run 'reindexdb' in weeks. The system is a:
>>
>> 2xIntel 4-core Xeon Model E5430 (Harpertown) 2.66GHz, 32GB RAM and 8x145GB
>> SAS drives configured with software RAID10
>
> So do you have autovacuum disabled? What pg version are you running?

Yes. It seems simpler than trying to configure the many options.

>
> an 8 drive RAID array is usually pretty fast, unless it's on a bad
> RAID controller or something.  What do "vmstat 10" and "iostat -x 10"
> say about your io activity?

-bash-3.2$ vmstat 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 21143944 471304 8928016    0    0     0     4    0    1  2  1 97  0  0

-bash-3.2$ iostat -x 10
Linux 2.6.18-128.1.10.el5 (archdbn1)     11/28/09

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
            1.91    0.00    1.42    0.00    0.00   96.67

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.01     5.19  0.03  3.27     2.71    67.69    21.37     0.02    5.71   0.12   0.04
sda1              0.00     0.00  0.00  0.00     0.00     0.00    11.66     0.00    1.84   1.27   0.00
sda2              0.01     4.07  0.02  3.21     2.69    58.24    18.84     0.02    5.69   0.12   0.04
sda3              0.00     0.00  0.00  0.00     0.01     0.04    11.15     0.00    0.83   0.80   0.00
sda4              0.00     0.00  0.00  0.00     0.00     0.00     2.00     0.00    8.75   8.75   0.00
sda5              0.00     0.00  0.00  0.00     0.00     0.01    15.44     0.00    0.82   0.70   0.00
sda6              0.00     1.12  0.00  0.05     0.01     9.41   171.06     0.00    7.70   0.13   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
            0.00    0.00    0.01    0.00    0.00   99.99

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.60  0.00  1.00     0.00    12.80    12.80     0.00    0.00   0.00   0.00
sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00     0.60  0.00  1.00     0.00    12.80    12.80     0.00    0.00   0.00   0.00
sda3              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda4              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda5              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda6              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

I did a reindexdb today, and it took less than 2 minutes, so I don't
think the slow update had anything to do with a bloated database or index.

I need some utilities and training to be able to convince myself whether a
problem lies with the 'system' (PostgreSQL + hardware + config) or with the design of the database.
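
A rough sanity check along those lines (just a sketch; pg_size_pretty and pg_relation_size are built-ins on 8.1 and later) is to list the largest tables and indexes:

    SELECT relname, relkind, pg_size_pretty(pg_relation_size(oid)) AS size
      FROM pg_class
     WHERE relkind IN ('r', 'i')
     ORDER BY pg_relation_size(oid) DESC
     LIMIT 10;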

Thanks to all who responded.
Cheers,
--irene


--
---------------------------------------------------------------------
Irene Barg                    Email:  ibarg@noao.edu
NOAO/AURA Inc.                 http://www.noao.edu/noao/staff/ibarg
950 N. Cherry Ave.            Voice:  520-318-8273
Tucson, AZ  85726 USA           FAX:  520-318-8360
---------------------------------------------------------------------

Re: vacuumdb -z do a reindex?

From
John R Pierce
Date:
Irene Barg wrote:
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.00    0.00    0.01    0.00    0.00   99.99
>
> Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.60  0.00  1.00     0.00    12.80    12.80     0.00    0.00   0.00   0.00
> sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sda2              0.00     0.60  0.00  1.00     0.00    12.80    12.80     0.00    0.00   0.00   0.00
> sda3              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sda4              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sda5              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sda6              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

That appears to be a system that's close to totally idle. What volume
is sda2? It averaged 1 write of 12.8 sectors per second, which is
about 6.5 kbytes/sec and could easily be .bash_history, syslog, or
other background activity.
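
For example, the filesystem behind sda2 can be identified with something like:

    $ df -h | grep sda2
    $ grep sda2 /proc/mounts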





Re: vacuumdb -z do a reindex?

From
Scott Marlowe
Date:
On Sat, Nov 28, 2009 at 3:12 PM, Irene Barg <ibarg@noao.edu> wrote:
> Hi Scott,
>
> Scott Marlowe wrote:
>>
>> On Fri, Nov 27, 2009 at 2:17 PM, Irene Barg <ibarg@noao.edu> wrote:
>>>
>>> I've had a simple update running for over 4 hours now (see results from
>>> pg_top below). The sql is:
>>
>> Have you looked in pg_locks and pg_stat_activity?
>
> Yes, I did look at pg_stat_activity and did not see anything alarming. What
> would have been indicators of something bad? The runtime was the only
> alarming thing I saw.

In pg_stat_activity, anything that's waiting and has been waiting for any length of time.
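
For example, a query roughly like this (8.x column names, matching the pg_stat_activity output later in the thread) flags them:

    SELECT procpid, usename, waiting, now() - query_start AS runtime, current_query
      FROM pg_stat_activity
     WHERE waiting
        OR now() - query_start > interval '10 minutes';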

>>> The database has 1016789 records, and vacuumdb -z is run once a day. I have
>>> not run 'reindexdb' in weeks. The system is a:
>>>
>>> 2xIntel 4-core Xeon Model E5430 (Harpertown) 2.66GHz, 32GB RAM and 8x145GB
>>> SAS drives configured with software RAID10
>>
>> So do you have autovacuum disabled? What pg version are you running?
>
> Yes. It seems simpler than trying to configure the many options.

That's usually a bad idea unless you're sure autovacuum itself is causing a problem.
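
A minimal sketch of turning it back on in postgresql.conf (parameter names assume 8.3 or later; 8.1/8.2 also need stats_start_collector = on and stats_row_level = on):

    autovacuum = on                      # track_counts = on (the default) is also required
    log_autovacuum_min_duration = 0      # optional: log every autovacuum run while tuning

A reload (pg_ctl reload, or SELECT pg_reload_conf()) is enough to pick it up.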

>> an 8 drive RAID array is usually pretty fast, unless it's on a bad
>> RAID controller or something.  What do "vmstat 10" and "iostat -x 10"
>> say about your io activity?
>
> -bash-3.2$ vmstat 10
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>  0  0      0 21143944 471304 8928016    0    0     0     4    0    1  2  1 97  0  0

The first line is the average since the machine started up; the lines AFTER
that are what's really important.


> -bash-3.2$ iostat -x 10
> Linux 2.6.18-128.1.10.el5 (archdbn1)    11/28/09
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           1.91    0.00    1.42    0.00    0.00   96.67
>
> Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.01     5.19  0.03  3.27     2.71    67.69    21.37     0.02    5.71   0.12   0.04
> sda1              0.00     0.00  0.00  0.00     0.00     0.00    11.66     0.00    1.84   1.27   0.00
> sda2              0.01     4.07  0.02  3.21     2.69    58.24    18.84     0.02    5.69   0.12   0.04
> sda3              0.00     0.00  0.00  0.00     0.01     0.04    11.15     0.00    0.83   0.80   0.00
> sda4              0.00     0.00  0.00  0.00     0.00     0.00     2.00     0.00    8.75   8.75   0.00
> sda5              0.00     0.00  0.00  0.00     0.00     0.01    15.44     0.00    0.82   0.70   0.00
> sda6              0.00     1.12  0.00  0.05     0.01     9.41   171.06     0.00    7.70   0.13   0.00
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           0.00    0.00    0.01    0.00    0.00   99.99
>
> Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.60  0.00  1.00     0.00    12.80    12.80     0.00    0.00   0.00   0.00
> sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sda2              0.00     0.60  0.00  1.00     0.00    12.80    12.80     0.00    0.00   0.00   0.00
> sda3              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sda4              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sda5              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sda6              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

This machine looks idle.  Was this taken WHILE the bad queries etc. were
running?  The rest of the time it'll tell us nothing.

Re: vacuumdb -z do a reindex?

From
Irene Barg
Date:
Hi Scott,

On Sat, Nov 28, 2009 at 3:12 PM, Irene Barg <ibarg@noao.edu> wrote:
> Hi Scott,
>
> Scott Marlowe wrote:
>> On Fri, Nov 27, 2009 at 2:17 PM, Irene Barg <ibarg@noao.edu> wrote:
>>> I've had a simple update running for over 4 hours now (see results from
>>> pg_top below). The sql is:
>>
>> Have you looked in pg_locks and pg_stat_activity?
>

By the time I saw your last post, the 'update' had finished, but it took
8 hours. I loaded a dump of the same database onto our test system, then
ran the same 'update' statement. Below are all of the stats plus a few
others. The test system is a 2xAMD Athlon(tm) 64 X2 Dual Core Processor
5600+ with 6GB RAM and 2xSATA drives.
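
For reference, copying the database to the test box is typically done along these lines (only a sketch; the exact commands used aren't shown in this thread, and the file name is made up):

    $ pg_dump -Fc metadata -f metadata.dump      # on the production host
    $ createdb metadata                          # on the test host
    $ pg_restore -d metadata metadata.dump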


> ##### pg_locks #####
> metadata=# select * from pg_locks;
>    locktype    | database | relation | page | tuple | transactionid | classid | objid | objsubid | transaction |  pid  |       mode       | granted
> ---------------+----------+----------+------+-------+---------------+---------+-------+----------+-------------+-------+------------------+---------
>  transactionid |          |          |      |       |        841483 |         |       |          |      841483 | 12742 | ExclusiveLock    | t
>  relation      |    21800 |    21849 |      |       |               |         |       |          |      841478 | 12753 | RowShareLock     | t
>  relation      |    21800 |    22086 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22054 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    21847 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22064 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22088 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22090 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22134 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    21873 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    10328 |      |       |               |         |       |          |      841483 | 12742 | AccessShareLock  | t
>  relation      |    21800 |    22092 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22094 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22136 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22062 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  transactionid |          |          |      |       |        841478 |         |       |          |      841478 | 12753 | ExclusiveLock    | t
>  relation      |    21800 |    21851 |      |       |               |         |       |          |      841478 | 12753 | RowExclusiveLock | t
>  relation      |    21800 |    22066 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    21892 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    21892 |      |       |               |         |       |          |      841478 | 12753 | RowExclusiveLock | t
>  relation      |    21800 |    22050 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    21915 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22056 |      |       |               |         |       |          |      841478 | 12753 | RowExclusiveLock | t
>  relation      |    21800 |    21837 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22048 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22135 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22060 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22058 |      |       |               |         |       |          |      841478 | 12753 | RowExclusiveLock | t
>  relation      |    21800 |    22070 |      |       |               |         |       |          |      841478 | 12753 | RowExclusiveLock | t
>  relation      |    21800 |    21890 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22072 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    21840 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22138 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22137 |      |       |               |         |       |          |      841478 | 12753 | AccessShareLock  | t
>  relation      |    21800 |    22068 |      |       |               |         |       |          |      841478 | 12753 | RowExclusiveLock | t
> (35 rows)
>
> ##### pg_stat_activity #####
> metadata=# select * from pg_stat_activity;
>  datid | datname  | procpid | usesysid |   usename    | current_query | waiting |          query_start          |         backend_start         |  client_addr  | client_port
> -------+----------+---------+----------+--------------+---------------+---------+-------------------------------+-------------------------------+---------------+-------------
>  21800 | metadata |   12742 |    19472 | arcsoft      | select * from pg_stat_activity; | f | 2009-11-29 03:55:59.918524-07 | 2009-11-29 03:52:40.954479-07 |               |          -1
>  21800 | metadata |   12753 |    19472 | arcsoft      | update edu_noao_nsa_security.user_credentials set distinguished_name= 'kelson@us-vo.org', status = 4 where credential_id = 1540815; | f | 2009-11-29 03:54:23.823678-07 | 2009-11-29 03:54:23.818717-07 | 140.252.26.34 |       37863
>  21800 | metadata |   12610 |    17193 | system_admin | <IDLE> | f | 2009-11-29 03:53:46.120621-07 | 2009-11-29 03:43:44.954155-07 | 140.252.6.54  |       34038
>  21800 | metadata |   12611 |    17193 | system_admin | <IDLE> | f | 2009-11-29 03:43:46.81295-07  | 2009-11-29 03:43:46.807336-07 | 140.252.6.54  |       38141
> (4 rows)
>
> ##### vmstat #####
> -bash-3.2$ vmstat 10
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>  1  0    148  38504 118212 5419140    0    0     1    11    5    4  1  0 99  0  0
>  1  0    148  38504 118240 5419288    0    0     1    19 1025  188 50  0 50  0  0
>  1  0    148  38504 118264 5419320    0    0     0    21 1025  186 50  0 50  0  0
>  1  0    148  38504 118276 5419376    0    0     0     4 1024  189 50  0 50  0  0
>  1  0    148  38372 118308 5419424    0    0     0    25 1032  199 50  0 50  0  0
>  1  0    148  38372 118320 5419488    0    0     0    12 1024  184 50  0 50  0  0
>  1  0    148  38248 118372 5419568    0    0     0    18 1025  189 50  0 50  0  0
>  1  0    148  37876 118408 5419944    0    0     0    20 1025  188 50  0 50  0  0
>  1  0    148  37876 118420 5419968    0    0     0     7 1024  184 50  0 50  0  0
>  1  0    148  37876 118444 5420036    0    0     0    19 1025  189 50  0 50  0  0
>  1  0    148  36644 118480 5420100    0    0     0    20 1028  206 50  0 50  0  0
>  1  0    148  36768 118504 5420140    0    0     0     5 1025  188 50  0 50  0  0
>
> ##### iostat #####
> -bash-3.2$ iostat -x 10
> Linux 2.6.18-92.1.13.el5 (somboo.tuc.noao.edu)     11/29/2009
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.71    0.00    0.26    0.03    0.00   99.00
>
> Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.01     4.34  0.05  1.25     3.47    44.72    37.18     0.03   20.12   0.66   0.09
> sdb               0.00     0.00  0.00  0.00     0.00     0.00    14.08     0.00    2.77   2.36   0.00
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           50.20    0.00    0.05    0.05    0.00   49.70
>
> Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     2.50  0.00  2.70     0.00    41.60    15.41     0.00    1.41   0.74   0.20
> sdb               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           49.95    0.00    0.10    0.00    0.00   49.95
>
> Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     1.90  0.00  1.90     0.00    30.40    16.00     0.00    0.74   0.21   0.04
> sdb               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
>
> ###### pgstat ######
> -bash-3.2$ pgstat -d metadata -u arcsoft -p Royrkval
> time                     commits     rollbks      blksrd     blkshit      bkends     seqscan     seqtprd      idxscn      idxtrd         ins         upd         del       locks         hit        load      active
> 29-11-2009.04:03:38          18           1           0           0           4          35        2891           0           0           0           0           0          54       100.0          40           1
> 29-11-2009.04:03:48           6           0           0           0           4           7           0           0           0           0           0           0          54       100.0          40           1
> 29-11-2009.04:03:58           1           0           0           0           4           0           0           0           0           0           0           0          54       100.0          40           1
> 29-11-2009.04:04:08           1           0           0           0           4           0           0           0           0           0           0           0          54       100.0          40           1
> 29-11-2009.04:04:18           1           0           0           0           4           0           0           0           0           0           0           0          54       100.0          41           1
> 29-11-2009.04:04:28           1           0           0           0           4           0           0           0           0           0           0           0          54       100.0          41           1
> 29-11-2009.04:04:38           1           0           0           0           4           0           0           0           0           0           0           0          54       100.0          41           1
> 29-11-2009.04:04:48           1           0           0           0           4           0           0           0           0           0           0           0          54       100.0          41           1
> 29-11-2009.04:04:58           3           0           0           0           4           7        2891           0           0           0           0           0          54       100.0          41           1
> 29-11-2009.04:05:08           1           0           0           0           4           0           0           0           0           0           0           0          54       100.0          41           1
> 29-11-2009.04:05:18           1           0           0           0           4           0           0           0           0           0           0           0          54       100.0          42           1
>
> #### pg_top ####
> last pid: 14085;  load avg:  1.00,  1.00,  0.82;       up 39+15:57:38  04:20:20
> 3 processes: 2 running, 1 sleeping
> CPU states: 50.0% user,  0.0% nice,  0.1% system, 49.9% idle,  0.0% iowait
> Memory: 5886M used, 77M free, 130M buffers, 5233M cached
> Swap: 148K used, 4095M free
>
>   PID USERNAME PRI NICE  SIZE   RES STATE   TIME   WCPU    CPU COMMAND
> 12753 postgres  25    0 1508M  218M run    25:54 17.37% 99.91% postgres: arcsoft metadata 140.252.26.34(37863) UPDATE
> 12610 postgres  15    0 1507M 6512K sleep   0:00  0.00%  0.00% postgres: system_admin metadata 140.252.6.54(34038) idle
> 14086 postgres  17    0 1505M 4288K run     0:00  0.00%  0.00% postgres: postgres metadata [local] idle

--
---------------------------------------------------------------------
Irene Barg                    Email:  ibarg@noao.edu
NOAO/AURA Inc.                 http://www.noao.edu/noao/staff/ibarg
950 N. Cherry Ave.            Voice:  520-318-8273
Tucson, AZ  85726 USA           FAX:  520-318-8360
---------------------------------------------------------------------