Re: pgadmin4 slow with large tables compared to pgadmin3 - Mailing list pgadmin-hackers

From: Dave Page
Subject: Re: pgadmin4 slow with large tables compared to pgadmin3
Msg-id: CA+OCxowzUrfR_7gQy5j=DOcSke3oP+JkOCc20a7gQa=y9izaLQ@mail.gmail.com
In response to: Re: pgadmin4 slow with large tables compared to pgadmin3 (Akshay Joshi <akshay.joshi@enterprisedb.com>)
List: pgadmin-hackers

Hi

2016-06-14 8:17 GMT+01:00 Akshay Joshi <akshay.joshi@enterprisedb.com>:
> Hi Dave,
>
> 2016-06-13 21:47 GMT+05:30 Dave Page <dpage@pgadmin.org>:
>>
>> On Mon, Jun 13, 2016 at 5:01 PM, Colin Beckingham <colbec@kingston.net>
>> wrote:
>> > I have the latest fully patched pgadmin4. It runs fine on openSUSE Leap
>> > 42.1 using the Firefox browser.
>> > When I load a full large table such as the words table (140K records)
>> > from wordnet, pgadmin3 takes about 2 seconds to display it.
>> > In pgadmin4 I wait 30+ seconds and click through about 5 reminders
>> > that "a script has stopped working, do you want to continue."
>> > Eventually the table loads, so this is not a bug report, more a question
>> > about how to streamline access to large tables. I am quite aware that it
>> > would run much faster if I ran a query with criteria asking for a subset
>> > of the table records, but I am wondering whether this is to be the
>> > standard behaviour in pgadmin4. I can also disable the warnings, but that
>> > would prevent me from seeing issues with other scripts.
>>
>> Hmm, I tested this with a simple query, and got the crash below :-o.
>> Akshay, can you investigate please?
>
>
>    I have tested the same query (SELECT * FROM pg_description a,
> pg_description b) and it crashes with the error message below:
>
>     RuntimeError: maximum recursion depth exceeded
>     Fatal Python error: Cannot recover from stack overflow.

Yeah, same as me.

>    According to our current logic we poll the psycopg2 connection and check
> its status; if it is busy (read/write) we call the same function recursively.
> So for a long-running query Python raises this error. I googled how to
> increase the limit and found the function "sys.setrecursionlimit()", but we
> don't know what limit to set, and increasing it is not recommended anyway.

Calling anything recursively like that is doomed to failure.
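
For reference, the wait-loop pattern from the psycopg2 asynchronous-support
docs is iterative, roughly like the sketch below (illustrative only, not the
actual pgAdmin code; the connection is assumed to have been opened in
asynchronous mode):

    import select
    import psycopg2
    from psycopg2 import extensions

    def wait_for_connection(conn):
        # Loop (rather than recurse) until the async connection is idle,
        # blocking in select() while the socket isn't ready. Stack depth
        # stays constant no matter how long the query runs.
        while True:
            state = conn.poll()
            if state == extensions.POLL_OK:
                return
            elif state == extensions.POLL_READ:
                select.select([conn.fileno()], [], [])
            elif state == extensions.POLL_WRITE:
                select.select([], [conn.fileno()], [])
            else:
                raise psycopg2.OperationalError("bad poll() state: %r" % state)

The same wait would be done after connect() and after each execute(); in a web
process the select() calls could be replaced by whatever event loop is already
running.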

>    Then I tried it with a blocking call and got the error message below:
>         Error Message: out of memory for query result
>
>    We need to change our recursion logic, and for the out-of-memory issue
> I'll have to figure out a solution.

Agreed. Thanks.
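
For the out-of-memory side, one possible direction (purely illustrative, not a
decision on the actual fix) is a server-side "named" cursor, which keeps the
result set on the server and streams it to the client in batches:

    import psycopg2

    conn = psycopg2.connect("dbname=postgres")      # example DSN only
    cur = conn.cursor(name="large_result")          # named => server-side cursor
    cur.itersize = 2000                             # rows per round trip (2000 is the default)

    cur.execute("SELECT * FROM pg_description a, pg_description b")
    for row in cur:                                 # fetched in itersize batches
        pass                                        # process/render each row

    cur.close()
    conn.close()

Client memory then stays bounded by the batch size rather than by the size of
the whole result set.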

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

