Thread: multiple threads inserting into the same table

From:
Brian Cox
Date:

The application log shows that 99652 rows are being inserted into
relation ts_stats_transet_user_daily. Five threads are doing the inserts.
The schema is lengthy, but it has a synthetic primary key (ts_id int8
not null) and the following constraints:

alter table ts_stats_transet_user_daily add constraint FK8ED105ED9DADA24
     foreign key (ts_transet_id) references ts_transets;
alter table ts_stats_transet_user_daily add constraint K8ED105ED545ADA6D
     foreign key (ts_user_id) references ts_users;

This relation currently has 456532 rows and is not partitioned.

The inserts have been going on now for almost 1 hour -- not exactly
speedy. Here's what I find on the postgres side:

cemdb=> select current_query, procpid, xact_start from pg_stat_activity;
                          current_query                           | procpid |          xact_start
------------------------------------------------------------------+---------+-------------------------------
  <IDLE> in transaction                                            |   15147 | 2009-03-23 12:08:31.604433-07
  <IDLE>                                                           |   15382 |
  select current_query, procpid, xact_start from pg_stat_activity; |   15434 | 2009-03-23 12:10:38.913764-07
  <IDLE>                                                           |   15152 |
  <IDLE>                                                           |   15150 |
  <IDLE>                                                           |   15156 |
  <IDLE> in transaction                                            |   15183 | 2009-03-23 12:09:50.864992-07
  <IDLE> in transaction                                            |   15186 | 2009-03-23 12:10:07.955838-07
  <IDLE>                                                           |   15188 |
  <IDLE>                                                           |   15192 |
  <IDLE> in transaction                                            |   15193 | 2009-03-23 12:10:07.955859-07
  <IDLE> in transaction                                            |   15194 | 2009-03-23 12:08:59.940101-07
(12 rows)
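
The xact_start column already carries the age of each open transaction; a variant that sorts by age makes the long-lived idle-in-transaction backends stand out. A sketch, assuming the same pre-9.0 column names (procpid, current_query) used above:

```sql
-- How long has each backend's transaction been open?
select procpid, now() - xact_start as xact_age, current_query
from pg_stat_activity
where xact_start is not null
order by xact_age desc;
```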

cemdb=> select c.oid,c.relname,l.pid,l.mode,l.granted from pg_class c
join pg_locks l on c.oid=l.relation order by l.mode;
    oid    |           relname           |  pid  |       mode       | granted
-----------+-----------------------------+-------+------------------+---------
  26493289 | ts_users_pkey               | 15183 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15186 | AccessShareLock  | t
      1259 | pg_class                    | 15434 | AccessShareLock  | t
  26493289 | ts_users_pkey               | 15147 | AccessShareLock  | t
     10969 | pg_locks                    | 15434 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15193 | AccessShareLock  | t
  26493289 | ts_users_pkey               | 15194 | AccessShareLock  | t
      2662 | pg_class_oid_index          | 15434 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15194 | AccessShareLock  | t
  26493289 | ts_users_pkey               | 15193 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15147 | AccessShareLock  | t
  26493289 | ts_users_pkey               | 15186 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15183 | AccessShareLock  | t
      2663 | pg_class_relname_nsp_index  | 15434 | AccessShareLock  | t
  26472890 | ts_stats_transet_user_daily | 15147 | RowExclusiveLock | t
  26472890 | ts_stats_transet_user_daily | 15186 | RowExclusiveLock | t
  26472890 | ts_stats_transet_user_daily | 15193 | RowExclusiveLock | t
  26472890 | ts_stats_transet_user_daily | 15194 | RowExclusiveLock | t
  26472890 | ts_stats_transet_user_daily | 15183 | RowExclusiveLock | t
  26473252 | ts_users                    | 15194 | RowShareLock     | t
  26472508 | ts_transets                 | 15183 | RowShareLock     | t
  26472508 | ts_transets                 | 15193 | RowShareLock     | t
  26473252 | ts_users                    | 15193 | RowShareLock     | t
  26473252 | ts_users                    | 15183 | RowShareLock     | t
  26472508 | ts_transets                 | 15147 | RowShareLock     | t
  26473252 | ts_users                    | 15186 | RowShareLock     | t
  26472508 | ts_transets                 | 15186 | RowShareLock     | t
  26473252 | ts_users                    | 15147 | RowShareLock     | t
  26472508 | ts_transets                 | 15194 | RowShareLock     | t
(29 rows)

cemdb=> select c.oid,c.relname,l.pid,l.mode,l.granted from pg_class c
join pg_locks l on c.oid=l.relation order by l.pid;
    oid    |           relname           |  pid  |       mode       | granted
-----------+-----------------------------+-------+------------------+---------
  26493289 | ts_users_pkey               | 15147 | AccessShareLock  | t
  26473252 | ts_users                    | 15147 | RowShareLock     | t
  26493267 | ts_transets_pkey            | 15147 | AccessShareLock  | t
  26472508 | ts_transets                 | 15147 | RowShareLock     | t
  26472890 | ts_stats_transet_user_daily | 15147 | RowExclusiveLock | t
  26472890 | ts_stats_transet_user_daily | 15150 | RowExclusiveLock | t
  26493289 | ts_users_pkey               | 15150 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15150 | AccessShareLock  | t
  26472508 | ts_transets                 | 15150 | RowShareLock     | t
  26473252 | ts_users                    | 15150 | RowShareLock     | t
  26472890 | ts_stats_transet_user_daily | 15186 | RowExclusiveLock | t
  26473252 | ts_users                    | 15186 | RowShareLock     | t
  26493267 | ts_transets_pkey            | 15186 | AccessShareLock  | t
  26472508 | ts_transets                 | 15186 | RowShareLock     | t
  26493289 | ts_users_pkey               | 15186 | AccessShareLock  | t
  26472890 | ts_stats_transet_user_daily | 15193 | RowExclusiveLock | t
  26493289 | ts_users_pkey               | 15193 | AccessShareLock  | t
  26473252 | ts_users                    | 15193 | RowShareLock     | t
  26472508 | ts_transets                 | 15193 | RowShareLock     | t
  26493267 | ts_transets_pkey            | 15193 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15194 | AccessShareLock  | t
  26472508 | ts_transets                 | 15194 | RowShareLock     | t
  26493289 | ts_users_pkey               | 15194 | AccessShareLock  | t
  26472890 | ts_stats_transet_user_daily | 15194 | RowExclusiveLock | t
  26473252 | ts_users                    | 15194 | RowShareLock     | t
      1259 | pg_class                    | 15434 | AccessShareLock  | t
      2663 | pg_class_relname_nsp_index  | 15434 | AccessShareLock  | t
      2662 | pg_class_oid_index          | 15434 | AccessShareLock  | t
     10969 | pg_locks                    | 15434 | AccessShareLock  | t
(29 rows)

cemdb=> select c.oid,c.relname,l.pid,l.mode,l.granted from pg_class c
join pg_locks l on c.oid=l.relation order by c.relname;
    oid    |           relname           |  pid  |       mode       | granted
-----------+-----------------------------+-------+------------------+---------
      1259 | pg_class                    | 15434 | AccessShareLock  | t
      2662 | pg_class_oid_index          | 15434 | AccessShareLock  | t
      2663 | pg_class_relname_nsp_index  | 15434 | AccessShareLock  | t
     10969 | pg_locks                    | 15434 | AccessShareLock  | t
  26472890 | ts_stats_transet_user_daily | 15150 | RowExclusiveLock | t
  26472890 | ts_stats_transet_user_daily | 15193 | RowExclusiveLock | t
  26472890 | ts_stats_transet_user_daily | 15194 | RowExclusiveLock | t
  26472890 | ts_stats_transet_user_daily | 15192 | RowExclusiveLock | t
  26472890 | ts_stats_transet_user_daily | 15186 | RowExclusiveLock | t
  26472508 | ts_transets                 | 15193 | RowShareLock     | t
  26472508 | ts_transets                 | 15186 | RowShareLock     | t
  26472508 | ts_transets                 | 15194 | RowShareLock     | t
  26472508 | ts_transets                 | 15192 | RowShareLock     | t
  26472508 | ts_transets                 | 15150 | RowShareLock     | t
  26493267 | ts_transets_pkey            | 15192 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15194 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15150 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15186 | AccessShareLock  | t
  26493267 | ts_transets_pkey            | 15193 | AccessShareLock  | t
  26473252 | ts_users                    | 15150 | RowShareLock     | t
  26473252 | ts_users                    | 15194 | RowShareLock     | t
  26473252 | ts_users                    | 15186 | RowShareLock     | t
  26473252 | ts_users                    | 15193 | RowShareLock     | t
  26473252 | ts_users                    | 15192 | RowShareLock     | t
  26493289 | ts_users_pkey               | 15186 | AccessShareLock  | t
  26493289 | ts_users_pkey               | 15192 | AccessShareLock  | t
  26493289 | ts_users_pkey               | 15193 | AccessShareLock  | t
  26493289 | ts_users_pkey               | 15194 | AccessShareLock  | t
  26493289 | ts_users_pkey               | 15150 | AccessShareLock  | t
(29 rows)
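
Every lock in the three listings above is granted, so the sessions are not blocked on one another at the lock level. Filtering pg_locks for ungranted entries confirms this directly; a sketch using the same join as the queries above:

```sql
-- Any session actually waiting on a table-level lock?
select c.relname, l.pid, l.mode
from pg_class c
join pg_locks l on c.oid = l.relation
where not l.granted;
```

An empty result means no backend is waiting for a relation lock.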

Any ideas as to what is happening here would be appreciated.

Thanks,
Brian

From:
David Wilson
Date:

On Mon, Mar 23, 2009 at 3:34 PM, Brian Cox <> wrote:
> The application log shows that 99652 rows are being inserted into relation
> ts_stats_transet_user_daily. 5 threads are doing the inserts. The schema is
> lengthy, but it has a synthetic primary key (ts_id int8 not null) and the
> following constraints:

How many indexes are there on ts_stats_transet_user_daily?

Are these rows being inserted in individual insert statements, or are
they batched in some fashion?

What's the disk/cpu activity on your system look like?

--
- David T. Wilson


From:
Tom Lane
Date:

Brian Cox <> writes:
> The application log shows that 99652 rows are being inserted into
> relation ts_stats_transet_user_daily. 5 threads are doing the inserts.

pg_stat_activity says those five threads are doing nothing except
sitting around with open transactions.  You sure you don't have a bug on
the application side?

            regards, tom lane

From:
Brian Cox
Date:

Tom Lane [] wrote:
> pg_stat_activity says those five threads are doing nothing except
> sitting around with open transactions.  You sure you don't have a bug on
> the application side?
>
>                         regards, tom lane

This is a java app. A thread dump reveals that these 5 threads are all
asleep on a socket read to postgres (see below). DbUtils.java:2265 is:

session.connection().createStatement()
    .executeUpdate(((DatabaseInsert) insertObject).getInsertStmt(session));

This generates and executes a single SQL insert. Since, as you point
out, postgres seems to think that this transaction isn't doing anything,
it's hard to figure out what the read is doing.
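
This call site issues one executeUpdate() per row, which costs a full client/server round trip for each of the ~100k rows. A common mitigation is to batch rows, either with JDBC's addBatch()/executeBatch() on a PreparedStatement or by generating multi-row INSERT statements. A minimal sketch of the latter; BatchInsertBuilder and the two-column row shape are hypothetical illustrations, not taken from DbUtils:

```java
import java.util.List;
import java.util.StringJoiner;

public class BatchInsertBuilder {

    // Build one multi-row INSERT ... VALUES (...), (...) statement so that
    // many rows travel in a single round trip instead of one per row.
    // Values are spliced in as literals only to keep the sketch
    // self-contained; production code should bind them as
    // PreparedStatement parameters instead.
    public static String buildInsert(String table, List<String> columns,
                                     List<List<String>> rows) {
        StringJoiner cols = new StringJoiner(", ", "(", ")");
        columns.forEach(cols::add);

        StringJoiner values = new StringJoiner(", ");
        for (List<String> row : rows) {
            StringJoiner tuple = new StringJoiner(", ", "(", ")");
            row.forEach(tuple::add);
            values.add(tuple.toString());
        }
        return "insert into " + table + " " + cols + " values " + values;
    }

    public static void main(String[] args) {
        String sql = buildInsert("ts_stats_transet_user_daily",
                List.of("ts_id", "ts_user_id"),
                List.of(List.of("1", "10"), List.of("2", "11")));
        // prints: insert into ts_stats_transet_user_daily (ts_id, ts_user_id) values (1, 10), (2, 11)
        System.out.println(sql);
    }
}
```

With real data the values would come through PreparedStatement parameters rather than string literals; the point is only that one statement can carry many rows.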

Brian


"DatabasePool.Thread1" prio=10 tid=0x27f04c00 nid=0x3b38 runnable [0x29e27000..0x29e281b0]
    java.lang.Thread.State: RUNNABLE
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)
    at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:135)
    at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:104)
    at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)
    at org.postgresql.core.PGStream.ReceiveChar(PGStream.java:259)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1182)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:194)
    - locked <0x8975c878> (a org.postgresql.core.v3.QueryExecutorImpl)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:336)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:282)
    at com.mchange.v2.c3p0.impl.NewProxyStatement.executeUpdate(NewProxyStatement.java:64)
    at com.timestock.tess.util.DbUtils$DatabaseInsertTask.insertObject(DbUtils.java:2265)
    at com.timestock.tess.util.DbUtils$DatabaseInsertTask.call(DbUtils.java:2200)
    at com.timestock.tess.util.DbUtils$DatabaseInsertTask.call(DbUtils.java:2157)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)


From:
Scott Marlowe
Date:

On Mon, Mar 23, 2009 at 2:25 PM, Brian Cox <> wrote:
> Tom Lane [] wrote:
>>
>> pg_stat_activity says those five threads are doing nothing except
>> sitting around with open transactions.  You sure you don't have a bug on
>> the application side?
>>
>>                        regards, tom lane
>
> This is a java app. A thread dump reveals that these 5 threads are all
> asleep on a socket read to postgres (see below). DbUtils.java:2265 is:
>
> session.connection().createStatement() .executeUpdate(((DatabaseInsert)
> insertObject).getInsertStmt(session));
>
> This generates and executes a single SQL insert. Since, as you point out,
> postgres seems to think that this transaction isn't doing anything,
> it's hard to figure out what the read is doing.

Might you have a firewall that's killing the connections?  What does
netstat -an on the client side say about these connections?

From:
Brian Cox
Date:

Scott Marlowe [] wrote:
> Might you have a firewall that's killing the connections?  What does
> netstat -an on the client side say about these connections?
I don't think so: 1) the app and postgres are on the same machine, and
2) this has been the setup for months and I don't think anyone has
diddled with the machine.

netstat -an | fgrep 5432 shows:

[root@rdl64xeoserv01 log]# netstat -an | fgrep 5432
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN
tcp        0      0 127.0.0.1:5432              127.0.0.1:35996             ESTABLISHED
tcp        0      0 127.0.0.1:5432              127.0.0.1:35999             ESTABLISHED
tcp        0      0 127.0.0.1:5432              127.0.0.1:36017             ESTABLISHED
tcp        0      0 127.0.0.1:5432              127.0.0.1:36019             ESTABLISHED
tcp        0      0 127.0.0.1:5432              127.0.0.1:36018             ESTABLISHED
tcp        0      0 127.0.0.1:5432              127.0.0.1:36005             ESTABLISHED
tcp        0      0 127.0.0.1:5432              127.0.0.1:36001             ESTABLISHED
tcp        0      0 127.0.0.1:5432              127.0.0.1:36013             ESTABLISHED
tcp        0      0 127.0.0.1:5432              127.0.0.1:36008             ESTABLISHED
tcp        0      0 127.0.0.1:5432              127.0.0.1:36011             ESTABLISHED
tcp        0      0 130.200.163.80:5432         130.200.164.26:50481        ESTABLISHED
tcp        0      0 :::5432                     :::*                        LISTEN
tcp        0      0 ::ffff:127.0.0.1:36001      ::ffff:127.0.0.1:5432       ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:36005      ::ffff:127.0.0.1:5432       ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:36008      ::ffff:127.0.0.1:5432       ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:36011      ::ffff:127.0.0.1:5432       ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:36013      ::ffff:127.0.0.1:5432       ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:36017      ::ffff:127.0.0.1:5432       ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:36019      ::ffff:127.0.0.1:5432       ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:36018      ::ffff:127.0.0.1:5432       ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:35996      ::ffff:127.0.0.1:5432       ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:35999      ::ffff:127.0.0.1:5432       ESTABLISHED
unix  2      [ ACC ]     STREAM     LISTENING     2640437 /tmp/.s.PGSQL.5432




From:
Brian Cox
Date:

David Wilson [] wrote:

> How many indexes are there on ts_stats_transet_user_daily?
11:
create index ts_stats_transet_user_daily_DayIndex on
ts_stats_transet_user_daily (ts_day);
create index ts_stats_transet_user_daily_HourIndex on
ts_stats_transet_user_daily (ts_hour);
create index ts_stats_transet_user_daily_LastAggregatedRowIndex on
ts_stats_transet_user_daily (ts_last_aggregated_row);
create index ts_stats_transet_user_daily_MonthIndex on
ts_stats_transet_user_daily (ts_month);
create index ts_stats_transet_user_daily_StartTimeIndex on
ts_stats_transet_user_daily (ts_interval_start_time);
create index ts_stats_transet_user_daily_TranSetIncarnationIdIndex on
ts_stats_transet_user_daily (ts_transet_incarnation_id);
create index ts_stats_transet_user_daily_TranSetIndex on
ts_stats_transet_user_daily (ts_transet_id);
create index ts_stats_transet_user_daily_UserIncarnationIdIndex on
ts_stats_transet_user_daily (ts_user_incarnation_id);
create index ts_stats_transet_user_daily_UserIndex on
ts_stats_transet_user_daily (ts_user_id);
create index ts_stats_transet_user_daily_WeekIndex on
ts_stats_transet_user_daily (ts_week);
create index ts_stats_transet_user_daily_YearIndex on
ts_stats_transet_user_daily (ts_year);


> Are these rows being inserted in individual insert statements, or are
> they batched in some fashion?
individual insert stmts in a single transaction.

> What's the disk/cpu activity on your system look like?
The app is using 100% CPU and I haven't figured out why, but the insert
threads are generally doing socket reads. They can't be completely
blocked, though: a thread that was doing a read in one thread dump was
processing (preparing for another insert) in a later dump. So it looks
as if the inserts are taking a l-o-n-g time.

Here's the output of vmstat and iostat. I've never looked at this info
before, so I'm not sure what it says.

[root@rdl64xeoserv01 log]# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 9194740  58676 20980264    0    0     8    21    1    2  2  0 98  0
[root@rdl64xeoserv01 log]# iostat
Linux 2.6.9-42.ELsmp (rdl64xeoserv01.ca.com)    03/23/2009

avg-cpu:  %user   %nice    %sys %iowait   %idle
            1.71    0.00    0.09    0.02   98.18

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              10.46       126.23       343.38  304224943  827588034
sda1              0.00         0.00         0.00       1428         58
sda2             57.73       126.23       343.37  304221426  827576144
sda3              0.00         0.00         0.00       1073          0


From:
Tom Lane
Date:

Scott Marlowe <> writes:
> On Mon, Mar 23, 2009 at 2:25 PM, Brian Cox <> wrote:
>> This generates and executes a single SQL insert. Since, as you point out,
>> postgres seems to think that this transaction isn't doing anything,
>> it's hard to figure out what the read is doing.

> Might you have a firewall that's killing the connections?  What does
> netstat -an on the client side say about these connections?

netstat will probably say the connection is open on both sides ---
otherwise the sockets would have closed.  It looks like both sides
still think the connection is open.  A firewall timeout is still
a possibility, but you'd have had to have a fairly long idle time
for that to happen.  Are any of the threads issuing commands that might
have run for very long intervals (tens of minutes)?
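
If a firewall timeout were the cause, enabling TCP keepalives on the server would keep the idle connections alive (or expose the dead ones promptly). A sketch of the relevant postgresql.conf settings, available since 8.1; the values here are illustrative, not recommendations:

```
tcp_keepalives_idle = 60        # seconds of idleness before the first keepalive probe
tcp_keepalives_interval = 10    # seconds between unanswered probes
tcp_keepalives_count = 5        # unanswered probes before the connection is declared dead
```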

            regards, tom lane

From:
Tom Lane
Date:

Brian Cox <> writes:
> This is a java app. A thread dump reveals that these 5 threads are all
> asleep on a socket read to postgres (see below).

It seems clear that what you've got isn't a performance problem.
May I suggest taking it to pgsql-jdbc?  The folk there are more likely
to be able to help you figure out what's wrong than most of us.

            regards, tom lane