Re: PHP and PostgreSQL - Mailing list pgsql-general

From Adam Haberlach
Subject Re: PHP and PostgreSQL
Date
Msg-id 20010107194703.C16787@newsnipple.com
In response to Re: PHP and PostgreSQL  (Frank Joerdens <frank@joerdens.de>)
List pgsql-general
On Sat, Jan 06, 2001 at 05:12:27PM +0100, Frank Joerdens wrote:
> > > I understand what's wrong and i know why is that @.
> > >
> > > What i do want to know is, if there is something wrong with this
> > > function or am i doing something wrong. I don't like that kind of
> > > errors. How can i stop before the end.
> >
> >         for($i=0; $i < pg_numrows($qu); $i++) {
>
> As I understand the mechanism, a while loop, as in
>
> while ($data = @pg_fetch_object ($qu, $row)) { . . .
>
> would be faster than a for loop as above because with each iteration, PHP has to execute
> pg_numrows($qu), which, depending on how it is implemented (I don't know that and don't
> read C well enough to be able to take a peek at the source to figure it out), would
> require going through the entire result set to count the rows. Even if this only happens
> once at the first iteration (e.g. if PHP then caches the result), this could be a
> significant, unnecessary, overhead - particularly if the result set is large. With the
> while loop you simply avoid that. I don't see a problem with using the error as an exit
> condition for the loop, except that by switching off error reporting with @ you switch off
> _all_ errors, not only those that you know you'll get and which you don't want to see, which
> can make debugging more difficult (but if you're debugging, you just remove the @ and add
> it again when you're done).
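
    (For reference, here is roughly what the two styles under discussion look
like side by side.  This is an untested sketch; the query, the $conn handle,
and the column name are made up.)

    // for-loop style: the pg_numrows($qu) call in the condition is
    // re-evaluated on every pass through the loop
    $qu = pg_exec($conn, "SELECT * FROM items");
    for ($i = 0; $i < pg_numrows($qu); $i++) {
        $data = pg_fetch_object($qu, $i);
        echo $data->name . "\n";
    }

    // while-loop style: the loop ends when pg_fetch_object() fails past the
    // last row; the @ only hides the warning that failure would print
    $row = 0;
    while ($data = @pg_fetch_object($qu, $row)) {
        echo $data->name . "\n";
        $row++;
    }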

    Once again, this is probably all due to a difference between MySQL and Postgres.
Judging by the MySQL code, there is a provision for the client-side libraries to
pass tuples on to the application in the order they are sorted without necessarily
retrieving them all to the client.  AFAIK, Postgres does not do this unless you
specifically use cursors to pull down a window of data at a time (this is correct
behavior IMHO, your feelings may vary).
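
    Roughly, the cursor approach I mean looks like this (an untested sketch;
the connection string, cursor name, and table are invented):

    $conn = pg_connect("dbname=test");
    pg_exec($conn, "BEGIN");    // cursors only live inside a transaction
    pg_exec($conn, "DECLARE mycur CURSOR FOR SELECT * FROM bigtable");
    do {
        // pull down a window of 100 rows; the rest stay on the server
        $res = pg_exec($conn, "FETCH 100 FROM mycur");
        for ($i = 0; $i < pg_numrows($res); $i++) {
            $data = pg_fetch_object($res, $i);
            // ... work with $data ...
        }
    } while (pg_numrows($res) > 0);
    pg_exec($conn, "CLOSE mycur");
    pg_exec($conn, "END");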

    I assume that this is why the MySQL client libraries let you exec a query
and then pull rows out of it until it hits the end, at which point they indicate
that you are done.  PHP's MySQL functions carry this model through to the PHP
side of things.
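
    In PHP that model looks roughly like this (sketch only; the link, query,
and table name are invented):

    $res = mysql_query("SELECT * FROM bigtable", $link);
    // mysql_fetch_row() returns false after the last row has been read,
    // so running off the end of the result set is what ends the loop
    while ($row = mysql_fetch_row($res)) {
        // ... work with $row ...
    }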

    Since Postgres pulls down the entire result set, the rows are available for
random access.  This shows in the PG libraries, and that behavior, as well, has
been carried over to PHP.
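
    That is why, with the pgsql functions, you can grab any row by number
without walking through the ones before it, e.g. (again a made-up sketch):

    $qu   = pg_exec($conn, "SELECT * FROM items ORDER BY id");
    $n    = pg_numrows($qu);
    // jump straight to the last row; rows 0 .. n-2 never need to be read
    $last = pg_fetch_object($qu, $n - 1);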

    Since PHP has no unified method for database access, you notice these
differences.

    (And judging by the number of times I've mentioned PHP and MySQL in this
post, it is time for this thread to go elsewhere.  Ask the MySQL people to let
you do random access of a large data set, or ask the PHP people to unify their
database access model.)

--
Adam Haberlach            |A cat spends her life conflicted between a
adam@newsnipple.com       |deep, passionate, and profound desire for
http://www.newsnipple.com |fish and an equally deep, passionate, and
'88 EX500                 |profound desire to avoid getting wet.
