Thread: persistent connections, AOLserver (Was: [SQL] maybe Offtopic : PostgreSQL & PHP ?)

Roberto Mello wrote:
[ . . . ]
>         I am using PHP 4 under AOLserver/OpenNSD, which has been fully
> threaded from scratch (1994) so I hope this won't be much of an issue
> (but then PHP 4 still has threading problems AFAIK).

Ah, very interesting indeed! I've been meaning to look into this for
quite a while but never had the time to really play with it. Are you
using PHP/AOLserver in a production environment? Did you have any
problems, crashes, etc.? And, most importantly, how _does_ it work with
persistent connections? Does AOLserver open a connection for each
request/thread it creates?

I think the real issue with PHP/PostgreSQL and persistent connections
had to do with transactions. AFAIK the too-many-connections issue was
easy to fix by fiddling with the appropriate parameters
(max_connections or something, I forget) in php.ini - it's not too well
documented, so you'd best just play with it and try some different
options (I don't have the problem, I _am_ using persistent connections,
and I can send you what I put in my php.ini if you want).
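
For reference, the directives I'm thinking of are probably these (names
from the PHP 4 pgsql extension; the values below are just examples, not
a recommendation):

    pgsql.allow_persistent = On   ; allow pg_pconnect() at all
    pgsql.max_persistent   = 5    ; persistent connections per process
    pgsql.max_links        = 10   ; total connections per process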

The transaction issue with persistent connections was discussed a couple
of weeks back, here or on the PHP list, I am not sure. It kinda boils
down to the situation where a persistent connection is re-used by a
different Apache child than the one which opened a transaction that has
not yet completed, or was not completed due to some error. PostgreSQL
would then assume that the same client is issuing further SQL statements
within the pending transaction block. This would mess things up, since
presumably a COMMIT WORK would be missing to complete the transaction
. . . I am actually not sure what would be likely to happen, nor what
the worst-case scenario might be, but it's clear that it'd be messy. I
think the PHP people are working on it.

The workaround was to issue a COMMIT WORK at the end of each script to
close any transaction that might be pending. There was a thread about
this too, a little while back. What puzzles me a little is that I've
never noticed any of these issues, even though I am using persistent
connections and transactions quite a bit. I am not using the COMMIT WORK
workaround. I don't have a sense of how serious this issue might be, if
it came up.
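
To illustrate, the workaround amounts to something like this (a minimal
sketch using the standard pgsql functions; the connection string is a
placeholder):

    <?php
    // pg_pconnect() re-uses an existing persistent connection for the
    // same connection string instead of opening a new one.
    $conn = pg_pconnect("dbname=mydb");   // "mydb" is a placeholder

    // ... the script's normal queries, possibly opening transactions ...

    // Workaround: make sure no transaction is left pending when the
    // script ends and the connection is recycled for the next request.
    pg_exec($conn, "COMMIT WORK");
    ?>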

AFAIK the problem is somehow rooted in the fact that the PHP folks,
until very recently, had a strong MySQL bias - and MySQL doesn't (or
didn't, until very recently) have proper transaction support. Hence they
didn't really consider this scenario.

However, what this means for the PHP/AOLserver/PostgreSQL combination
is completely unclear to me, as I have no idea what a PHP persistent
connection does under AOLserver.

Regards, Frank

NB: I've replied to pgsql-php and not pgsql-sql because this thread
seems more appropriate there.

On Wed, Apr 18, 2001 at 05:37:27PM +0200, Frank Joerdens allegedly wrote:
> I think the real issue with PHP/PostgreSQL and persistent connections
> had to do with transactions. AFAIK the too-many-connections issue was
> easy to fix by fiddling with the appropriate parameters
> (max_connections or something, I forget) in php.ini - it's not too well
> documented, so you'd best just play with it and try some different
> options (I don't have the problem, I _am_ using persistent connections,
> and I can send you what I put in my php.ini if you want).

I've seen both MySQL and PostgreSQL give up (MySQL just hung, pgsql
cored) when I tried simple selects from a couple of hundred concurrent
connections; no transactions or other fancy stuff there. I think I was
using MySQL 3.22.?? and pgsql 6.5.3, so more modern versions may well be
able to cope with these numbers of connections. It's been more than a
year since I last tried it.

> AFAIK the problem is somehow rooted in the fact that the PHP folks,
> until very recently, had a strong MySQL bias - and MySQL doesn't (or
> didn't, until very recently) have proper transaction support. Hence they
> didn't really consider this scenario.

I don't know what the current state of affairs is, but it is my
understanding that, while pgsql performs admirably on tasks which mostly
read data, pgsql isn't really able to cope (performance-wise) with an
application that has a very high insert-to-select ratio, such as OLTP.
I'm looking into using pgsql for implementing datamarts for a CRM
application, so I'll see how that goes in the next couple of weeks. I'm
keeping my fingers crossed ;)

The reason for the previous paragraph is that I think pgsql (PHP even
more so) is mostly used for databases in which the insert-to-select
ratio is much more favourable. But hey, that's my opinion.

Regards,

Mathijs
--
It's not that perl programmers are idiots, it's that the language
rewards idiotic behavior in a way that no other language or tool has
ever done.
                                                    Erik Naggum

On Wed, Apr 18, 2001 at 05:37:27PM +0200, Frank Joerdens wrote:
>
> Ah, very interesting indeed! I've been meaning to look into this for
> quite a while but never had the time to really play with it. Are you
> using PHP/AOLserver in a production environment? Did you have any
> problems, crashes, etc.? And, most importantly, how _does_ it work with
> persistent connections? Does AOLserver open a connection for each
> request/thread it creates?

    I am not in production yet, only development. As soon as I am further
along, I'll use ab to test how it handles high load.
    AOLserver itself opens a connection per thread, and it very
intelligently hands that same connection to other threads as they need it,
as soon as the current one is done using it. It's a lot faster than PHP.
    PHP running under AOLserver is another story. In that case AOLserver
does nothing to the connections; it just passes the requests to PHP and
lets it deal with them. Since AOLserver is a single process, in which the
PHP module runs, my feeling is that the concurrency/transactions problem
won't exist, since there are no child processes.

> AFAIK the problem is somehow rooted in the fact that the PHP folks,
> until very recently, had a strong MySQL bias - and MySQL doesn't (or
> didn't, until very recently) have proper transaction support. Hence they
> didn't really consider this scenario.

    Big mistake by the PHP team, IMHO. I, for one, didn't even consider
using PHP due to their bias towards a clearly deficient product. It made
PHP as a whole look just as deficient.

    -Roberto
--
+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+
  Roberto Mello - Computer Science, USU - http://www.brasileiro.net
       http://www.sdl.usu.edu - Space Dynamics Lab, Developer
*^ *^                    <- Tribbles Praying

On Wed, Apr 18, 2001 at 08:28:14PM +0200, Mathijs Brands wrote:

> I've seen both MySQL and PostgreSQL give up (MySQL just hung, pgsql
> cored) when I tried simple selects from a couple of hundred concurrent
> connections; no transactions or other fancy stuff there. I think I was
> using MySQL 3.22.?? and pgsql 6.5.3, so more modern versions may well be
> able to cope with these numbers of connections. It's been more than a
> year since I last tried it.

    I can see that happening in 6.5, but not in later versions.

> I don't know what the current state of affairs is, but it is my
> understanding that, while pgsql performs admirably on tasks which mostly
> read data, pgsql isn't really able to cope (performance-wise) with an
> application that has a very high insert-to-select ratio, such as OLTP.
> I'm looking into using pgsql for implementing datamarts for a CRM
> application, so I'll see how that goes in the next couple of weeks. I'm
> keeping my fingers crossed ;)

    I think you got that backwards. PG is the one that can deal with OLTP
well. MySQL, with its pessimistic locking model, just hangs forever. Take
a look at Tim Perdue's tests with SourceForge on phpbuilder.com.

> The reason for the previous paragraph is that I think pgsql (PHP even
> more so) is mostly used for databases in which the insert-to-select
> ratio is much more favourable. But hey, that's my opinion.

    Another misconception. Unless your site is very, very simple, you have
_plenty_ of INSERTs and UPDATEs. On my sites, _every_ HTTP request
requires at least 2 SELECTs and 1 INSERT, plus all the other stuff they
are doing. That's so I can keep track of who is visiting what. On MySQL
this would be unbearable.
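
The pattern is roughly this (a sketch only; the table and column names
here are invented, not my actual schema):

    <?php
    $conn = pg_pconnect("dbname=site");   // placeholder connection string

    // Two SELECTs: identify the visitor, fetch the requested content.
    pg_exec($conn, "SELECT user_id FROM sessions WHERE cookie = 'abc123'");
    pg_exec($conn, "SELECT body FROM pages WHERE url = '/index.php'");

    // One INSERT: record who is visiting what.
    pg_exec($conn, "INSERT INTO hits (cookie, url, seen)
                    VALUES ('abc123', '/index.php', now())");
    ?>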

    -Roberto

--
+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+
  Roberto Mello - Computer Science, USU - http://www.brasileiro.net
       http://www.sdl.usu.edu - Space Dynamics Lab, Developer
All true wisdom is found in taglines.

On Wed, Apr 18, 2001 at 08:28:14PM +0200, Mathijs Brands wrote:
>
> I've seen both MySQL and PostgreSQL give up (MySQL just hung, pgsql
> cored) when I tried simple selects from a couple of hundred concurrent
> connections; no transactions or other fancy stuff there. I think I was
> using MySQL 3.22.?? and pgsql 6.5.3, so more modern versions may well be
> able to cope with these numbers of connections. It's been more than a
> year since I last tried it.

PG 7.0 was released almost a year ago, so you had a very old version when
you tried.

> I don't know what the current state of affairs is, but it is my
> understanding that, while pgsql performs admirably on tasks which mostly
> read data, pgsql isn't really able to cope (performance-wise) with an
> application that has a very high insert-to-select ratio, such as OLTP.

This is simply not true. It may have been true in the past, but not since
PG 7.0 came out.

A mistake that I see MySQL users make frequently is that, when they have
a bulk of inserts to do, they don't use the COPY command to bulk-load the
data, and/or they don't wrap the inserts into one (or several)
transaction blocks, leaving each INSERT in its own transaction, which
obviously hurts performance.
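
A minimal sketch of the difference, using PHP's pgsql functions (the
table here is invented for illustration):

    <?php
    $conn = pg_connect("dbname=test");   // placeholder connection string

    // Slow: each INSERT runs in its own implicit transaction.
    for ($i = 0; $i < 1000; $i++)
        pg_exec($conn, "INSERT INTO log (n) VALUES ($i)");

    // Much faster: wrap the whole batch in one transaction block.
    pg_exec($conn, "BEGIN");
    for ($i = 0; $i < 1000; $i++)
        pg_exec($conn, "INSERT INTO log (n) VALUES ($i)");
    pg_exec($conn, "COMMIT");

    // Fastest for real bulk loads: the COPY command, e.g. in psql:
    //   COPY log FROM '/path/to/datafile';
    ?>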

They do that because they usually have no idea of what a database is for,
besides being an SQL interface to their file systems. Exactly what MySQL
was until very recently (and thanks to the work of third-parties, because
the MySQL team itself couldn't care less about real features).

-Roberto

--
+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+
  Roberto Mello - Computer Science, USU - http://www.brasileiro.net
       http://www.sdl.usu.edu - Space Dynamics Lab, Developer
TAFB -> Text Above Fullquote Below

Database Replication via triggers

From: "Peter"

Hi there again,

we are setting up a small cluster with load balancing, fail-over, etc. -
2 front-end and 2 back-end db servers. Everything is OK except for
database replication. As I understand it, the easiest way of replicating
the database is via triggers; as rows in a table get updated, deleted or
inserted, the same action is triggered on the second db server. My
question is: does the trigger function allow for database-level triggers,
or is it only possible for individual tables? Also, has anyone got
experience with this kind of situation, and is this idea generally
commendable?

thanks in advance

regards, Peter