Alexander Elgert wrote:
>
> This results in a structure where I can iterate over all keys in the
> 2-dim array.
> You can see I iterate first over the databases and then over tables AND
> columns!
> --- mysql: ~1s (Database X)
> --- postgres: ~1s (Database Y)
> ;)
>
> In contrast: =======================================================
>
> foreach database {
>     foreach table {
>         foreach column {
>             do something ...
>         }
>     }
> }
> --- mysql: ~1s (Database X)
> --- postgres: ~80s (Database Y)
> ;(
>
>>> The second approach is much faster; this must be because there is no
>>> nesting. ;(
>>
>> What nesting? Are you trying to do sub-queries of some sort?
> I did a loop over all tables and THEN called a query for each table to
> get the columns (from the same table).
> Yes, there are definitely more queries the DBMS has to manage.
> (It is bad style, but it is intuitive. Maybe the per-query overhead is
> higher in postgres than in mysql.)
I think I see what you're doing now. As Tom says, the information_schema
has overheads, but I must say I'm surprised at it taking 80 seconds.
I can see how you might find it more intuitive. I think the other way
around myself - grab it all, then process it. Rough sketch below.
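
Something like this, roughly (Python with psycopg2 purely for
illustration - I don't know which language/driver you are actually
using, and the DSN and function name here are just placeholders). One
query against information_schema.columns, then group the rows
client-side instead of issuing one query per table:

import collections

import psycopg2

def fetch_columns(dsn="dbname=test"):
    """Return {table_name: [column_name, ...]} from a single query."""
    schema = collections.defaultdict(list)
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            # One round trip for the whole schema instead of one
            # query per table.
            cur.execute(
                """
                SELECT table_name, column_name
                FROM information_schema.columns
                WHERE table_schema = 'public'
                ORDER BY table_name, ordinal_position
                """
            )
            for table_name, column_name in cur.fetchall():
                schema[table_name].append(column_name)
    finally:
        conn.close()
    return schema

if __name__ == "__main__":
    for table, columns in fetch_columns().items():
        print(table, columns)
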
--
Richard Huxton
Archonet Ltd