phpPgAdmin - prior version available?

From: Bob Hartung
Hi all,
   I have been struggling with phpPgAdmin 4.1 - login failures.  There
does not yet seem to be a fix.  Where can I find a prior version for FC6
- rpm, tar.gz, etc.?

Thanks,

Bob

Re: [PHP] phpPgAdmin - prior version available?

From: "Tijnema !"
On 3/18/07, Bob Hartung <rwhart@mchsi.com> wrote:
> Hi all,
>   I have been struggling with phpPgAdmin 4.1 - login failures.  There
> does not yet seem to be a fix.  Where can I find a prior version for FC6
> - rpm, tar.gz, etc.?
>
> Thanks,
>
> Bob

Try this one: http://ftp.uni-koeln.de/mirrors/fedora/linux/extras/6/ppc/phpPgAdmin-4.0.1-7.fc6.noarch.rpm

phpPgAdmin 4.0.1

Tijnema

Re: phpPgAdmin - prior version available?

From: Robert Treat
On Sunday 18 March 2007 12:41, Bob Hartung wrote:
> Hi all,
>    I have been struggling with phpPgAdmin 4.1 - login failures.  There
> does not yet seem to be a fix.  Where can I find a prior version for FC6
> - rpm, tar.gz, etc.?
>

Can you be a bit more specific on the problem you're seeing?

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

Re: [PHP] Re: phpPgAdmin - prior version available?

From: "Tijnema !"
On 3/21/07, Robert Treat <xzilla@users.sourceforge.net> wrote:
> On Sunday 18 March 2007 12:41, Bob Hartung wrote:
> > Hi all,
> >    I have been struggling with phpPgAdmin 4.1 - login failures.  There
> > does not yet seem to be a fix.  Where can I find a prior version for FC6
> > - rpm, tar.gz, etc.?
> >
>
> Can you be a bit more specific on the problem you're seeing?

I already solved his problem; he replied to me, but not to the list.
His message:

Got it, thanks again!

Bob
>
> --
> Robert Treat
> Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

Weird behavior with LIMIT

From: Thomas Munz
Hello List!

Today I tried to optimize the queries in our company's internal
application. I got to the point where I wanted to test whether queries
with LIMIT are slower than queries without LIMIT.

I tried the following query in 8.2.4. Keep in mind that the table
hs_company contains only 10 rows.

thomas@localhost:~$ psql testdb testuser
Welcome to psql 8.2.4, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

ghcp=#  explain analyze select * from hs_company; explain analyze select
* from hs_company limit 10;
                                               QUERY PLAN
--------------------------------------------------------------------------------------------------------
 Seq Scan on hs_company  (cost=0.00..1.10 rows=10 width=186) (actual
time=0.012..0.034 rows=10 loops=1)
 Total runtime: 0.102 ms
(2 rows)

                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..1.10 rows=10 width=186) (actual time=0.012..0.063
rows=10 loops=1)
   ->  Seq Scan on hs_company  (cost=0.00..1.10 rows=10 width=186)
(actual time=0.007..0.025 rows=10 loops=1)
 Total runtime: 0.138 ms
(3 rows)

I ran this query about 100 times, and the result was always that the
query without LIMIT is about 40 ms faster.


Now I put the same queries in the file 'sql.sql' and ran them 100
times with:
psql testdb testuser -f sql.sql
with the following results:

thomas@localhost:~$ psql testdb testuser -f sql.sql
                                               QUERY PLAN
--------------------------------------------------------------------------------------------------------
 Seq Scan on hs_company  (cost=0.00..1.10 rows=10 width=186) (actual
time=0.013..0.034 rows=10 loops=1)
 Total runtime: 0.200 ms
(2 rows)

                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..1.10 rows=10 width=186) (actual time=0.016..0.069
rows=10 loops=1)
   ->  Seq Scan on hs_company  (cost=0.00..1.10 rows=10 width=186)
(actual time=0.008..0.025 rows=10 loops=1)
 Total runtime: 0.153 ms
(3 rows)


The queries are identical but run at different speeds. Can someone
explain why that is?

Thomas

Re: Weird behavior with LIMIT

From: Richard Huxton
Thomas Munz wrote:
> Hello List!
>
> Today I tried to optimize the queries in our company's internal
> application. I got to the point where I wanted to test whether queries
> with LIMIT are slower than queries without LIMIT.
>
> I tried the following query in 8.2.4. Keep in mind that the table
> hs_company contains only 10 rows.

Probably too small to provide useful measurements.

> ghcp=#  explain analyze select * from hs_company; explain analyze select
> * from hs_company limit 10;

> Total runtime: 0.102 ms
> Total runtime: 0.138 ms

1. I'm not sure the timings are accurate for sub-millisecond values
2. You've got to parse the LIMIT clause, and then execute it (even if it
does nothing useful)

> I ran this query about 100 times, and the result was always that the
> query without LIMIT is about 40 ms faster.

That's 0.4 ms per query (40 ms spread over 100 runs).

> Now I put the same queries in the file 'sql.sql' and ran them 100
> times with:
> psql testdb testuser -f sql.sql

> Total runtime: 0.200 ms
> Total runtime: 0.153 ms

> The queries are identical but run at different speeds. Can someone
> explain why that is?

Same as above - you've got to parse & execute the limit clause. There's
no way for the planner to know that the table has exactly 10 rows in it
at the time it executes.

--
   Richard Huxton
   Archonet Ltd

Re: Weird behavior with LIMIT

From: Thomas Munz
Well, I did another check on LIMIT (without the WHERE clause the table
has more than 2,000,000 rows):

 select count(*) from hd_conversation where action_int is null;
  count
---------
 1652888
(1 row)

So I ran this query now. The query with LIMIT (which should even select
100,000 rows fewer than the second one) is much slower than selecting
all rows. This query was also executed 100 times, always with the same
result.

explain ANALYZE select * from hd_conversation where action_int is null
limit 1552888;explain ANALYZE select * from hd_conversation where
action_int is null;
                                                             QUERY PLAN

-------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..97491.64 rows=1552888 width=381) (actual
time=6.447..13351.441 rows=1552888 loops=1)
   ->  Seq Scan on hd_conversation  (cost=0.00..103305.78 rows=1645498
width=381) (actual time=6.442..7699.621 rows=1552888 loops=1)
         Filter: (action_int IS NULL)
 Total runtime: 16185.870 ms
(4 rows)

                                                           QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on hd_conversation  (cost=0.00..103305.78 rows=1645498
width=381) (actual time=6.722..10793.863 rows=1652888 loops=1)
   Filter: (action_int IS NULL)
 Total runtime: 13621.877 ms
(3 rows)

Probably LIMIT creates an overhead that slows down the system for
larger result sets. With a smaller amount it's faster.

explain ANALYZE select * from hd_conversation where action_int is null
limit 100000;explain ANALYZE select * from hd_conversation where
action_int is null;
                                                            QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..6278.09 rows=100000 width=381) (actual
time=9.715..947.696 rows=100000 loops=1)
   ->  Seq Scan on hd_conversation  (cost=0.00..103305.78 rows=1645498
width=381) (actual time=9.710..535.933 rows=100000 loops=1)
         Filter: (action_int IS NULL)
 Total runtime: 1154.158 ms
(4 rows)

                                                           QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on hd_conversation  (cost=0.00..103305.78 rows=1645498
width=381) (actual time=0.039..11172.030 rows=1652888 loops=1)
   Filter: (action_int IS NULL)
 Total runtime: 14071.620 ms
(3 rows)

But shouldn't LIMIT always be faster in theory?

Richard Huxton wrote:
> Thomas Munz wrote:
>> Hello List!
>>
>> Today I tried to optimize the queries in our company's internal
>> application. I got to the point where I wanted to test whether
>> queries with LIMIT are slower than queries without LIMIT.
>>
>> I tried the following query in 8.2.4. Keep in mind that the table
>> hs_company contains only 10 rows.
>
> Probably too small to provide useful measurements.
>
>> ghcp=#  explain analyze select * from hs_company; explain analyze
>> select * from hs_company limit 10;
>
>> Total runtime: 0.102 ms
>> Total runtime: 0.138 ms
>
> 1. I'm not sure the timings are accurate for sub-millisecond values
> 2. You've got to parse the LIMIT clause, and then execute it (even if
> it does nothing useful)
>
>> I ran this query about 100 times, and the result was always that the
>> query without LIMIT is about 40 ms faster.
>
> That's 0.4 ms per query (40 ms spread over 100 runs).
>
>> Now I put the same queries in the file 'sql.sql' and ran them 100
>> times with:
>> psql testdb testuser -f sql.sql
>
>> Total runtime: 0.200 ms
>> Total runtime: 0.153 ms
>
>> The queries are identical but run at different speeds. Can someone
>> explain why that is?
>
> Same as above - you've got to parse & execute the limit clause.
> There's no way for the planner to know that the table has exactly 10
> rows in it at the time it executes.


Re: Weird behavior with LIMIT

From: Gregory Stark
"Thomas Munz" <thomas@ecommerce.com> writes:

> So I ran this query now. The query with LIMIT (which should even select
> 100,000 rows fewer than the second one) is much slower than selecting
> all rows. This query was also executed 100 times, always with the same
> result.
>
> explain ANALYZE select * from hd_conversation where action_int is null limit
> 1552888;explain ANALYZE select * from hd_conversation where action_int is null;

What are the results if you run the query without EXPLAIN ANALYZE? Use
\timing to get the timing results, and use count(*) to avoid timing
network speed:

 SELECT count(*) FROM (SELECT * FROM hd_conversation ... LIMIT 100000)

I think your timing results are being dominated by the overhead it takes
to actually measure the time spent in each node. With the Limit node
present it has to call gettimeofday nearly twice as often. That works out
to about 2.5us per gettimeofday call, which seems about right.
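
A minimal sketch of that measurement, reusing the table and filter from
your earlier messages (note that PostgreSQL requires an alias on a
subquery in FROM):

  \timing on
  -- with LIMIT: count(*) absorbs the rows, so network transfer isn't timed
  SELECT count(*) FROM (SELECT * FROM hd_conversation
                         WHERE action_int IS NULL
                         LIMIT 100000) AS sub;
  -- without LIMIT, for comparison
  SELECT count(*) FROM hd_conversation WHERE action_int IS NULL;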

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com