Thread: big pg 6.5 and 7.1 problem in simple application

From: Aaron Brashears
Date: Wed, May 02, 2001 12:23:14 PM -0700
We have a simple ad tracking application, which has a (mostly) fixed
table size where each row represents a particular ad. We have about 70
rows in the database and use php scripts in apache which connect over
odbc, read a single row, increment a counter, and update that
row. We're performing about 30 updates a second and after a few
minutes the postmaster either hangs or dumps core.

We've tried this scenario on both pg 6.5 and 7.1 on redhat linux, from
redhat's rpms and built from source with the same results. We launch
256 backends with a reasonable shared buffer size. We're using the
unixodbc's odbc driver version 2.0.5. I don't think we're doing row
locks for the query, but that shouldn't crash it - it should just give
us bad data.

The tables, selects, and update calls are all pretty simple, so I'm
baffled by this behavior. Has anyone else seen this problem, or have a
solution?


Re: big pg 6.5 and 7.1 problem in simple application

From: Doug McNaught
Date:
Aaron Brashears <gila@gila.org> writes:

> We have a simple ad tracking application, which has a (mostly) fixed
> table size where each row represents a particular ad. We have about 70
> rows in the database and use php scripts in apache which connect over
> odbc, read a single row, increment a counter, and update that
> row. We're performing about 30 updates a second and after a few
> minutes the postmaster either hangs or dumps core.

I'd try compiling 7.1 with debugging enabled, and do a GDB backtrace
on your core dumps.  Otherwise it's hard to help you.

-Doug
--
The rain man gave me two cures; he said jump right in,
The first was Texas medicine--the second was just railroad gin,
And like a fool I mixed them, and it strangled up my mind,
Now people just get uglier, and I got no sense of time...          --Dylan

Re: big pg 6.5 and 7.1 problem in simple application

From: "Eric G. Miller"
Date:
On Wed, May 02, 2001 at 12:23:14PM -0700, Aaron Brashears wrote:
> We have a simple ad tracking application, which has a (mostly) fixed
> table size where each row represents a particular ad. We have about 70
> rows in the database and use php scripts in apache which connect over
> odbc, read a single row, increment a counter, and update that
> row. We're performing about 30 updates a second and after a few
> minutes the postmaster either hangs or dumps core.
>
> We've tried this scenario on both pg 6.5 and 7.1 on redhat linux, from
> redhat's rpms and built from source with the same results. We launch
> 256 backends with a reasonable shared buffer size. We're using the
> unixodbc's odbc driver version 2.0.5. I don't think we're doing row
> locks for the query, but that shouldn't crash it - it should just give
> us bad data.
>
> The tables, selects, and update calls are all pretty simple, so I'm
> baffled by this behavior. Has anyone else seen this problem, or have a
> solution?

I'll make a WAG:

1) ODBC adds overhead to the queries...
2) 30 updates per second could swamp the postmaster with connection
attempts (and/or hit the limit on connections), unless there's some
connection pooling...

Can't say this'd work, but inserts are generally faster than updates,
especially if there aren't any key checks. So maybe it'd work better
to insert into a counter table an ID for the advertisement, with a
timestamp defaulting to now().  Then have a cron job running at
appropriate intervals to update the summary stats and truncate the
table (wrapped in a transaction).  Hmm, the timestamp may be
unnecessary.  Can't say if it'd solve the problem.  There'd be a little
bottleneck every time the summary stats are updated.
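The insert-then-summarize idea above can be sketched like this, again using Python's sqlite3 as a stand-in for Postgres (the `hits` and `summary` tables are hypothetical names). The hot path only appends rows, so there's no contention on a shared counter; the periodic job folds the raw rows into the summary and clears them inside one transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical tables: the hot path appends to `hits`; `summary` holds totals.
conn.execute("CREATE TABLE hits (ad_id INTEGER NOT NULL)")
conn.execute("CREATE TABLE summary (ad_id INTEGER PRIMARY KEY,"
             " total INTEGER NOT NULL)")
conn.executemany("INSERT INTO summary VALUES (?, 0)", [(1,), (2,)])

# Hot path: each ad view is a cheap INSERT, no row-level contention.
for ad_id in (1, 1, 2, 1):
    conn.execute("INSERT INTO hits (ad_id) VALUES (?)", (ad_id,))
conn.commit()

# Cron-style job: roll the raw rows into the summary, then empty the
# staging table, all in one transaction (sqlite3's `with conn:` commits
# on success and rolls back on error).
with conn:
    rows = conn.execute("SELECT ad_id, COUNT(*) FROM hits GROUP BY ad_id")
    for ad_id, n in rows.fetchall():
        conn.execute("UPDATE summary SET total = total + ? WHERE ad_id = ?",
                     (n, ad_id))
    conn.execute("DELETE FROM hits")  # sqlite has no TRUNCATE

print(dict(conn.execute("SELECT ad_id, total FROM summary")))
```

In Postgres the rollup would be `UPDATE ... FROM (SELECT ad_id, count(*) ...)` followed by `TRUNCATE hits` in the same transaction; only the rollup job ever touches the summary rows, so the per-request updates that were crashing the backend go away entirely.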

--
Eric G. Miller <egm2@jps.net>