Thread: maybe Offtopic : PostgreSQL & PHP ?

maybe Offtopic : PostgreSQL & PHP ?

From: "Picard, Cyril"

Hi all, sorry for the possibly off-topic question, but I did not get any
answer from the PHP mailing list.

I would like to query a PostgreSQL database from PHP.
Today I wrote a single script (connectandquery.php) that performs the following:
- connect to the DB: $conn = pg_Connect("dbname = foo");
- execute the query: $res = pg_Exec($conn,"SELECT * from BAR");


But I would like to split this into two scripts:
- connect.php: $conn = pg_Connect("dbname = foo");
- query.php: $res = pg_Exec($conn,"SELECT * from BAR");

but I don't know how to make the $conn variable (defined in connect.php)
available in query.php.

Any idea is welcome!


Re: maybe Offtopic : PostgreSQL & PHP ?

From: Keith Wong

Not quite sure how your code is organised, but you could access the
variable $conn by including "connect.php" in the "query.php" script.
Otherwise, you could use persistent connections, which are available via
pg_pconnect. Instead of creating a new database connection each time, a
persistent connection will try to reuse an existing connection that is no
longer being used (persistent connections do tend to have a lot of quirks,
though).
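
For illustration, a minimal sketch of both approaches, keeping Cyril's
file and table names (error handling kept to a bare minimum):

<?php
// connect.php -- opens the connection; any script that includes
// this file can then use $conn.
$conn = pg_connect("dbname=foo");
if (!$conn) {
    die("could not connect to database foo");
}
?>

<?php
// query.php -- pulls in connect.php, so $conn is defined here too.
include("connect.php");
$res = pg_exec($conn, "SELECT * FROM bar");

// Persistent alternative: replace pg_connect() in connect.php with
// pg_pconnect(); it reuses an idle connection opened with the same
// parameters instead of creating a new one on each request.
?>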

Keith

At 11:33 AM 18/04/2001 +0200, Picard, Cyril wrote:
>Hi all, sorry for the possibly off-topic question, but I did not get any
>answer from the PHP mailing list.
>
>I would like to query a PostgreSQL database from PHP.
>Today I wrote a single script (connectandquery.php) that performs the following:
>- connect to the DB: $conn = pg_Connect("dbname = foo");
>- execute the query: $res = pg_Exec($conn,"SELECT * from BAR");
>
>
>But I would like to split this into two scripts:
>- connect.php: $conn = pg_Connect("dbname = foo");
>- query.php: $res = pg_Exec($conn,"SELECT * from BAR");
>
>but I don't know how to make the $conn variable (defined in connect.php)
>available in query.php.
>
>Any idea is welcome!



Re: maybe Offtopic : PostgreSQL & PHP ?

From: Roberto Mello

On Wed, Apr 18, 2001 at 10:35:09PM +1000, Keith Wong wrote:

> reuse an existing connection that is no longer being used (persistent
> connections do tend to have a lot of quirks, though)
What quirks? I am developing a PHP application now and using persistent
connections. Coming from the AOLserver/OpenNSD world -- where ALL
connections are persistent, pooled, and distributed among threads -- I'd be
interested in knowing the quirks of persistent connections in PHP.
-Roberto
-- 
+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+
  Roberto Mello - Computer Science, USU - http://www.brasileiro.net
       http://www.sdl.usu.edu - Space Dynamics Lab, Developer

I may be fat, but you are ugly, and I can diet...


Re: maybe Offtopic : PostgreSQL & PHP ?

From: Roberto Mello

On Wed, Apr 18, 2001 at 04:44:36PM +0200, Mathijs Brands wrote:
> I've written several applications with Apache/PHP. At first I was using
> persistent connections, but I was soon forced to revert to normal db
> connects. The problem is the number of connections opened. If you have
> 50 Apache processes and 5 different databases, it is highly likely that
> you will end up (if you leave your application running long enough) with
> 250 persistent db links; not something pgsql really likes, unless you
> configure it to correctly handle large numbers of db links. Apache/PHP
> doesn't provide you with any way to pool connections :(
    Ah, I see the problem. When you mentioned there were problems I
thought you were talking about the connections themselves.
    I am using PHP 4 under AOLserver/OpenNSD, which has been fully
threaded from scratch (1994), so I hope this won't be much of an issue (but
then PHP 4 still has threading problems AFAIK).
    Not using persistent connections is just too slow for me. It's
instantaneous for AOLserver to generate a page, when the same page with the
connection overhead in PHP takes A LOT longer.
-Roberto
-- 
+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+
  Roberto Mello - Computer Science, USU - http://www.brasileiro.net
       http://www.sdl.usu.edu - Space Dynamics Lab, Developer

Tetris tagline: @@  o@o  oooo  @oo  oo@


Re: maybe Offtopic : PostgreSQL & PHP ?

From: "Josh Berkus"

Folks,

Not to be a list-nazi or anything, but can we move this discussion to
the PHP list?  These issues are already part of the PHP list archives.

If anyone is interested in Postgres-PHP topics, the PHP list is still
quite low-traffic.

-Josh Berkus


Re: maybe Offtopic : PostgreSQL & PHP ?

From: Mathijs Brands

On Wed, Apr 18, 2001 at 08:54:24AM -0600, Roberto Mello allegedly wrote:
>     Ah, I see the problem. When you mentioned there were problems I
> thought you were talking about the connections themselves.
>     I am using PHP 4 under AOLserver/OpenNSD, which has been fully
> threaded from scratch (1994), so I hope this won't be much of an issue (but
> then PHP 4 still has threading problems AFAIK).
>     Not using persistent connections is just too slow for me. It's
> instantaneous for AOLserver to generate a page, when the same page with the
> connection overhead in PHP takes A LOT longer.

So how does AOLserver/PHP handle connections then? Is AOLserver managing
(pooling) connections for PHP or something like that (like WebSphere and
NAS do for Java programs)? A big problem (in my opinion) with Apache is
that you cannot limit the number of Apache processes that acquire
database links. For instance, it would be great if I could instruct
Apache to direct PHP requests to a maximum of, say, 20 dedicated
processes. As it stands now, your best bet is probably a two-step
approach: have a reverse proxy (such as a very lean Apache installation)
or some other proxy (such as a load balancer) redirect requests to
multiple webservers, some of which have the pgsql PHP module.

Another approach we've used for our own development environment (DBL) is
a SQL proxy. We've placed a proxy between the webservers and the
databases. It provides multiplexing of connections and caching of
queries. Fail-over and load balancing are not implemented, but they
wouldn't be that difficult in our case, since our webservers do not
directly update the databases themselves. There's an Open Source product
that provides similar functionality (I think it's called SQL Relay or
something similar). There may be some commercial offerings for Open
Source databases such as pgsql and mysql, but I haven't looked into that.

Of course, having lots of db connections may not be necessary. We're
running one of the bigger news sites in Holland. It's implemented using
Java servlets, runs without big-gun application servers such as WAS or
NAS (just the M5 webserver) and uses PostgreSQL 7.0 for its data
storage (both images and xml). It runs on a fairly small server and is
able to sustain more than 600,000 hits daily without breaking a
sweat. Of course, caching is a big thing, since actually generating a
page may take a second or more, but due to the way we maintain and prime
the cache, the users normally don't notice. Restarting the (web) server
is a bitch though, since it usually takes 10-15 minutes to build up a
cache big enough to serve most requests directly from the cache. It may
be a fairly common setup, but I quite like the way this system (built by
two colleagues) turned out. It was our second attempt at using pgsql for
a bigger production site, and while we ran into a lot of problems, in
the end it worked out very well :)

Regards,

Mathijs
-- 
It's not that perl programmers are idiots, it's that the language
rewards idiotic behavior in a way that no other language or tool has
ever done.                                                   Erik Naggum


Re: persistent connections, AOLserver (Was: maybe Offtopic : PostgreSQL & PHP ?)

From: Roberto Mello

On Wed, Apr 18, 2001 at 08:28:14PM +0200, Mathijs Brands wrote:
>
> I've seen both MySQL and PostgreSQL give up (MySQL just hung, pgsql
> cored) when I tried simple selects from a couple of hundred concurrent
> connections; no transactions or other fancy stuff there. I think I was
> using MySQL 3.22.?? and pgsql 6.5.3, so more modern versions may well be
> able to cope with these numbers of connections. It's been more than a
> year since I last tried it.

PG 7.0 was released almost 2 years ago, so you had a very old version when
you tried.

> I don't know what the current state of affairs is, but it is my
> understanding that, while pgsql performs admirably on tasks which mostly
> read data, pgsql isn't really able to cope (performance wise) with an
> application that has a very high insert to select ratio, such as OLTP.

This is simply not true. It may have been true in the past, but not since
PG 7.0 came out.

A mistake that I see MySQL users make frequently is that when they have a
bulk of inserts to do, they don't use the COPY command to bulk-load the
data, and/or they don't wrap the inserts in one (or several) transaction
blocks, leaving each INSERT in its own transaction, which obviously
hurts performance.
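
As a rough PHP sketch of both techniques (the table t (a int, b int),
the $rows array, and the database name are made up for illustration;
error checking omitted):

<?php
$conn = pg_connect("dbname=foo");

// One transaction around all the INSERTs, instead of an implicit
// commit after every single row.
pg_exec($conn, "BEGIN");
for ($i = 0; $i < count($rows); $i++) {
    pg_exec($conn, "INSERT INTO t (a, b) VALUES ("
                   . $rows[$i][0] . ", " . $rows[$i][1] . ")");
}
pg_exec($conn, "COMMIT");

// Faster still: feed the same data through COPY, one tab-separated
// line per row, terminated by a backslash-period line.
pg_exec($conn, "COPY t FROM stdin");
for ($i = 0; $i < count($rows); $i++) {
    pg_put_line($conn, $rows[$i][0] . "\t" . $rows[$i][1] . "\n");
}
pg_put_line($conn, "\\.\n");
pg_end_copy($conn);
?>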

They do that because they usually have no idea of what a database is for,
beyond being an SQL interface to their file systems. Exactly what MySQL
was until very recently (and thanks to the work of third parties, because
the MySQL team itself couldn't care less about real features).

-Roberto

--
+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+
  Roberto Mello - Computer Science, USU - http://www.brasileiro.net
       http://www.sdl.usu.edu - Space Dynamics Lab, Developer
TAFB -> Text Above Fullquote Below

Re: persistent connections, AOLserver (Was: maybe Offtopic : PostgreSQL & PHP ?)

From: "Steve Brett"

Is there a limit/guide to the number of inserts you should use in a
transaction block?

I have an app at the minute, written in Delphi, that moves data from a
Cache database and a SQL Server database into one definitive customer
management database for the company I work for.

The transfer is quite slow, but I do use single inserts (not through a lack
of knowledge of what a database is for, though :-), and maybe wrapping them
in a transaction block is the answer.

I'm moving (parsing and matching) pretty small recordsets (10,000 rows in
one and approx. 60,000 in the other).

Steve


"Roberto Mello" <rmello@cc.usu.edu> wrote in message
news:20011119083547.A22031@cc.usu.edu...
> On Wed, Apr 18, 2001 at 08:28:14PM +0200, Mathijs Brands wrote:
> >
> > I've seen both MySQL and PostgreSQL give up (MySQL just hung, pgsql
> > cored) when I tried simple selects from a couple of hundred concurrent
> > connections; no transactions or other fancy stuff there. I think I was
> > using MySQL 3.22.?? and pgsql 6.5.3, so more modern versions may well be
> > able to cope with these numbers of connections. It's been more than a
> > year since I last tried it.
>
> PG 7.0 was released almost 2 years ago, so you had a very old version when
> you tried.
>
> > I don't know what the current state of affairs is, but it is my
> > understanding that, while pgsql performs admirably on tasks which mostly
> > read data, pgsql isn't really able to cope (performance wise) with an
> > application that has a very high insert to select ratio, such as OLTP.
>
> This is simply not true. It may have been true in the past, but not since
> PG 7.0 came out.
>
> A mistake that I see MySQL users make frequently is that when they have a
> bulk of inserts to do, they don't use the COPY command to bulk-load the
> data, and/or they don't wrap the inserts in one (or several) transaction
> blocks, leaving each INSERT in its own transaction, which obviously
> hurts performance.
>
> They do that because they usually have no idea of what a database is for,
> beyond being an SQL interface to their file systems. Exactly what MySQL
> was until very recently (and thanks to the work of third parties, because
> the MySQL team itself couldn't care less about real features).
>
> -Roberto
>
> --
> +----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+
>   Roberto Mello - Computer Science, USU - http://www.brasileiro.net
>        http://www.sdl.usu.edu - Space Dynamics Lab, Developer
> TAFB -> Text Above Fullquote Below




Database Replication via triggers

From: "Peter"

Hi there again,

we are setting up a small cluster with load balancing, fail-over, etc. - 2
front-end and 2 back-end DB servers. Everything is OK except for database
replication. As I understand it, the easiest way of replicating the
database is via triggers; as rows in a table get updated, deleted or
inserted, the same action is triggered on the second DB server. My question
is: does the trigger function allow for database-level triggers, or is it
only possible for individual tables? Also, has anyone got experience with
this kind of situation, and is this idea generally advisable?
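
For context: a PostgreSQL trigger is attached to an individual table, not
to the database as a whole, so each replicated table needs its own
trigger. A minimal sketch of the idea with hypothetical names (assumes the
plpgsql language is enabled and a log table repl_log (tab name, op text,
at timestamp) exists; a real setup would also capture the changed row,
e.g. its primary key from NEW/OLD, for a separate process to replay on
the second server):

<?php
$conn = pg_connect("dbname=foo");

// Trigger function: record which table changed and how.
pg_exec($conn, "
  CREATE FUNCTION log_change() RETURNS opaque AS '
  BEGIN
    INSERT INTO repl_log VALUES (TG_RELNAME, TG_OP, now());
    RETURN NULL;
  END;
  ' LANGUAGE 'plpgsql'");

// One trigger per table to be replicated (customers is made up).
pg_exec($conn, "
  CREATE TRIGGER customers_log AFTER INSERT OR UPDATE OR DELETE
  ON customers FOR EACH ROW EXECUTE PROCEDURE log_change()");
?>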

thanks in advance

regards, Peter


Re: persistent connections, AOLserver (Was: maybe Offtopic : PostgreSQL & PHP ?)

From: Stephan Szabo

On Fri, 30 Nov 2001, Steve Brett wrote:

> Is there a limit/guide to the number of inserts you should use in a
> transaction block?

Not a hard one, but I usually do thousands of rows (around 5k) per
transaction when I'm batch inserting, and I usually set it there so
I can watch its progress from another db session as the counts rise.
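
In PHP, that batching might look something like the sketch below (the
table t, column a, and the $rows array are hypothetical):

<?php
// Commit every 5000 rows so each batch becomes visible to other
// sessions as soon as its transaction commits.
$conn = pg_connect("dbname=foo");
$batch = 5000;

pg_exec($conn, "BEGIN");
for ($i = 0; $i < count($rows); $i++) {
    pg_exec($conn, "INSERT INTO t (a) VALUES (" . $rows[$i] . ")");
    if (($i + 1) % $batch == 0) {
        pg_exec($conn, "COMMIT");
        pg_exec($conn, "BEGIN");
    }
}
pg_exec($conn, "COMMIT"); // commit the final partial batch
?>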




Re: persistent connections, AOLserver (Was: maybe Offtopic : PostgreSQL & PHP ?)

From: marc@oscar.eng.cv.net (Marc Spitzer)

In article <20011203110458.E93680-100000@megazone23.bigpanda.com>, 
Stephan Szabo wrote:
> On Fri, 30 Nov 2001, Steve Brett wrote:
> 
>> Is there a limit/guide to the number of inserts you should use in a
>> transaction block?
> 
> Not a hard one, but I usually do thousands of rows (around 5k) per
> transaction when I'm batch inserting, and I usually set it there so
> I can watch its progress from another db session as the counts rise.
> 

How do you set up the other session to watch?

marc




Re: persistent connections, AOLserver (Was: maybe Offtopic : PostgreSQL & PHP ?)

From: Stephan Szabo

On Mon, 17 Dec 2001, Marc Spitzer wrote:

> In article <20011203110458.E93680-100000@megazone23.bigpanda.com>,
> Stephan Szabo wrote:
> > On Fri, 30 Nov 2001, Steve Brett wrote:
> >
> >> Is there a limit/guide to the number of inserts you should use in a
> >> transaction block?
> >
> > Not a hard one, but I usually do thousands of rows (around 5k) per
> > transaction when I'm batch inserting, and I usually set it there so
> > I can watch its progress from another db session as the counts rise.
> >
>
> How do you set up the other session to watch?

Usually I just set up something that throws an occasional query at the
server (a small script or something), something like a select count(*) or
a select id order by id desc limit 1 type query.
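
A hypothetical watcher along those lines (the table name t is made up;
run it in a second session while the batch load runs):

<?php
// Poll the row count every few seconds; batches committed by the
// loading session show up here as a rising count.
$conn = pg_connect("dbname=foo");
while (1) {
    $res = pg_exec($conn, "SELECT count(*) FROM t");
    $row = pg_fetch_row($res, 0);
    echo $row[0] . " rows\n";
    sleep(5);
}
?>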