Thread: Slow statement when using JDBC

From
yazan suleiman
Date:
I am running a statement with bind variable like the following:

<code>
PreparedStatement stmt = psqlConnection.prepareStatement(
    "SELECT * FROM X WHERE X.ID=?",
    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
stmt.setInt(1, 7);  // bind the id
ResultSet rs = stmt.executeQuery();
</code>

This takes 30 seconds.  On the other hand, if I run the following without binding the id:
<code>
PreparedStatement stmt = psqlConnection.prepareStatement(
    "SELECT * FROM X WHERE X.ID=7",
    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery();
</code>

This runs in 230 ms.

I am using:
the JDBC3 PostgreSQL driver, version 9.0-801, against a PostgreSQL 9.0 database.

Any ideas? Thanks in advance.

Re: Slow statement when using JDBC

From
Maciek Sakrejda
Date:
My guess is the named (as opposed to anonymous) server-side prepared
statements kick in and the planning is happening sans parameters (see
the prepareThreshold connection parameter--see
http://jdbc.postgresql.org/documentation/head/connect.html#connection-parameters
), leading to a seq scan. Is it by chance only happening on the fifth
invocation of that statement? Or are you changing prepareThreshold? If
that's the case, you may want to bump prepareThreshold so that you
never get a named statement here. If this is happening right off the
bat and you're not mucking with the threshold, something more
interesting is going on and we'd need more details on your Java code.

---
Maciek Sakrejda | System Architect | Truviso

1065 E. Hillsdale Blvd., Suite 215
Foster City, CA 94404
www.truviso.com

Re: Slow statement when using JDBC

From
yazan suleiman
Date:
I apologize, I didn't paste the original query. The original query is in fact more complex. Please let me explain further:

select DISTINCT
    EVENT.ID, ORIGIN.ID AS ORIGINID, EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN,
    EVENT.CONTRIBUTOR, ORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE, ORIGIN.DEPTH, ORIGIN.EVTYPE,
    ORIGIN.CATALOG, ORIGIN.AUTHOR OAUTHOR, ORIGIN.CONTRIBUTOR OCONTRIBUTOR, MAGNITUDE.ID AS MAGID,
    MAGNITUDE.MAGNITUDE, MAGNITUDE.TYPE AS MAGTYPE
from event.event
    join event.origin on event.id = origin.eventid
    left join event.magnitude on origin.id = event.magnitude.origin_id
WHERE EXISTS (select origin_id from event.magnitude where magnitude.magnitude >= ?)
 
and my code is:

PreparedStatement stmt = psqlConnection.prepareStatement(sql,
    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
org.postgresql.PGStatement pgstmt = (org.postgresql.PGStatement) stmt;
pgstmt.setPrepareThreshold(1);

boolean usingServerPrepare = pgstmt.isUseServerPrepare();
System.out.println("Execution: , Used server side: " + usingServerPrepare);

stmt.setDouble(1, 7.5);
stmt.setFetchSize(200);
long old = System.currentTimeMillis();
ResultSet rs = stmt.executeQuery();
System.out.println("Execution time: " + (System.currentTimeMillis() - old) + " ms.");


In fact the code comes from the Postgres manual, to keep it very simple. Now the interesting part: after your email, I ran

SELECT * FROM X WHERE X.ID>7

and it ran in 7 ms, but when I ran

SELECT * FROM X WHERE X.ID>?

it ran in 319 ms. So while binding variables slows that query down by a factor of 10, the difference is nowhere near what I see with my original query: 200 ms versus 31000 ms. So my question is about execution plans: is the execution plan different when binding variables? I read as much as I could find in the Postgres manual, but couldn't find anything hinting at what I am doing wrong.

By the way, I changed the value of prepareThreshold but it didn't make any difference. The conclusion, based on the code I ran, is that binding variables is always slower.

Thank you



On Wed, Feb 2, 2011 at 12:32 PM, Maciek Sakrejda <msakrejda@truviso.com> wrote:

Re: Slow statement when using JDBC

From
"Kevin Grittner"
Date:
yazan suleiman  wrote:

> where magnitude.magnitude>=?

When you pass a literal there, the planner can get a good estimate of
how many rows match that criterion, and give you a good plan based on
that.  When it must prepare a plan with a parameter there, it must
assume that *any* value could be submitted, even one which would
require the entire table to be passed.  The best plan to use will
depend entirely on the actual value, but PostgreSQL currently fixes a
single plan when you prepare a statement on the server.

For queries like this, you want to be sure that the query is planned
each time based on the actual value.

-Kevin

Re: Slow statement when using JDBC

From
Maciek Sakrejda
Date:
> For queries like this, you want to be sure that the query is planned
> each time based on the actual value.

Which I think *is* the case until you hit prepareThreshold in JDBC,
no? This has come up on the list before, and as long as you're using
an unnamed server-side prepared statement, the planner will wait to
plan it until it has the actual parameters available. That's why the
planning savings of having the JDBC driver "upgrade" you to named
statements after a few executions can come back to bite you: a
parameter-agnostic plan will often be *much* costlier than re-planning
every single time.

However, since Yazan is seeing this right off the bat (and not after a
certain number of executions), I'm suspicious. It'd be nice to see
what's happening here at the protocol level. Yazan, can you configure
driver-level logging through the loglevel connection param and
DriverManager.setLogWriter()?

---
Maciek Sakrejda | System Architect | Truviso

1065 E. Hillsdale Blvd., Suite 215
Foster City, CA 94404
www.truviso.com

Re: Slow statement when using JDBC

From
yazan suleiman
Date:
I've done more research. This problem appears at the database level. I excluded JDBC by running explain analyze on 2 different statements, the first being a prepared one. The prepared statement showed the same behavior. Please see below:

prepare t2 (real) AS
select
EVENT.ID, ORIGIN.ID AS ORIGINID,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN,
EVENT.CONTRIBUTOR, ORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE, ORIGIN.DEPTH,ORIGIN.EVTYPE, ORIGIN.CATALOG, ORIGIN.AUTHOR OAUTHOR,
ORIGIN.CONTRIBUTOR OCONTRIBUTOR,MAGNITUDE.ID AS MAGID,MAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE AS MAGTYPE

from event.event join event.origin on event.id=origin.eventid left join event.magnitude on origin.id=event.magnitude.origin_id

WHERE magnitude.magnitude>=$1 group by EVENT.ID, ORIGIN.ID,EVENT.PREFERRED_ORIGIN_ID, EVENT.CONTRIBUTOR,ORIGIN.TIME, ORIGIN.LATITUDE,
ORIGIN.LONGITUDE, ORIGIN.DEPTH,ORIGIN.EVTYPE, ORIGIN.CATALOG,ORIGIN.AUTHOR, ORIGIN.CONTRIBUTOR,MAGNITUDE.ID,MAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE
order by MAGNITUDE.MAGNITUDE DESC;

explain analyze execute t2 (7.2);

"Group  (cost=1178550.31..1264062.63 rows=2137808 width=80) (actual time=32708.691..32713.863 rows=3198 loops=1)"
"  ->  Sort  (cost=1178550.31..1183894.83 rows=2137808 width=80) (actual time=32708.688..32709.081 rows=3198 loops=1)"
"        Sort Key: magnitude.magnitude, event.id, origin.id, event.preferred_origin_id, event.contributor, origin."time", origin.latitude, origin.longitude, origin.depth, origin.evtype, origin.catalog, origin.author, origin.contributor, magnitude.id, magnitude.type"
"        Sort Method:  quicksort  Memory: 546kB"
"        ->  Hash Join  (cost=463838.18..858790.88 rows=2137808 width=80) (actual time=21930.470..32688.144 rows=3198 loops=1)"
"              Hash Cond: (origin.eventid = event.id)"
"              ->  Hash Join  (cost=349974.86..636067.40 rows=2137808 width=72) (actual time=18213.223..28346.664 rows=3198 loops=1)"
"                    Hash Cond: (magnitude.origin_id = origin.id)"
"                    ->  Seq Scan on magnitude  (cost=0.00..158388.38 rows=2137808 width=22) (actual time=8.078..6009.938 rows=3198 loops=1)"
"                          Filter: ((magnitude)::double precision >= $1)"
"                    ->  Hash  (cost=213417.05..213417.05 rows=6133105 width=54) (actual time=18204.688..18204.688 rows=6133105 loops=1)"
"                          Buckets: 131072  Batches: 8  Memory Usage: 72865kB"
"                          ->  Seq Scan on origin  (cost=0.00..213417.05 rows=6133105 width=54) (actual time=0.015..7370.984 rows=6133105 loops=1)"
"              ->  Hash  (cost=56912.92..56912.92 rows=3276192 width=12) (actual time=3716.931..3716.931 rows=3276192 loops=1)"
"                    Buckets: 262144  Batches: 2  Memory Usage: 70620kB"
"                    ->  Seq Scan on event  (cost=0.00..56912.92 rows=3276192 width=12) (actual time=0.011..1567.850 rows=3276192 loops=1)"
"Total runtime: 32783.141 ms"



explain analyze
select
EVENT.ID, ORIGIN.ID AS ORIGINID,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN,
EVENT.CONTRIBUTOR, ORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE, ORIGIN.DEPTH,ORIGIN.EVTYPE, ORIGIN.CATALOG, ORIGIN.AUTHOR OAUTHOR,
ORIGIN.CONTRIBUTOR OCONTRIBUTOR,MAGNITUDE.ID AS MAGID,MAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE AS MAGTYPE

from event.event join event.origin on event.id=origin.eventid left join event.magnitude on origin.id=event.magnitude.origin_id

WHERE magnitude.magnitude>=7.2 group by EVENT.ID, ORIGIN.ID,EVENT.PREFERRED_ORIGIN_ID, EVENT.CONTRIBUTOR,ORIGIN.TIME, ORIGIN.LATITUDE,
ORIGIN.LONGITUDE, ORIGIN.DEPTH,ORIGIN.EVTYPE, ORIGIN.CATALOG,ORIGIN.AUTHOR, ORIGIN.CONTRIBUTOR,MAGNITUDE.ID,MAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE
order by MAGNITUDE.MAGNITUDE DESC;




"Group  (cost=136246.03..136743.55 rows=12438 width=80) (actual time=124.478..129.753 rows=3198 loops=1)"
"  ->  Sort  (cost=136246.03..136277.12 rows=12438 width=80) (actual time=124.474..124.897 rows=3198 loops=1)"
"        Sort Key: magnitude.magnitude, event.id, origin.id, event.preferred_origin_id, event.contributor, origin."time", origin.latitude, origin.longitude, origin.depth, origin.evtype, origin.catalog, origin.author, origin.contributor, magnitude.id, magnitude.type"
"        Sort Method:  quicksort  Memory: 546kB"
"        ->  Nested Loop  (cost=281.58..135400.09 rows=12438 width=80) (actual time=2.250..103.318 rows=3198 loops=1)"
"              ->  Nested Loop  (cost=281.58..127177.93 rows=12438 width=72) (actual time=2.234..67.745 rows=3198 loops=1)"
"                    ->  Bitmap Heap Scan on magnitude  (cost=281.58..31203.00 rows=12438 width=22) (actual time=2.155..5.464 rows=3198 loops=1)"
"                          Recheck Cond: (magnitude >= 7.2)"
"                          ->  Bitmap Index Scan on mag_index  (cost=0.00..278.47 rows=12438 width=0) (actual time=1.763..1.763 rows=3198 loops=1)"
"                                Index Cond: (magnitude >= 7.2)"
"                    ->  Index Scan using origin_id_key on origin  (cost=0.00..7.70 rows=1 width=54) (actual time=0.017..0.017 rows=1 loops=3198)"
"                          Index Cond: (origin.id = magnitude.origin_id)"
"              ->  Index Scan using event_key_index on event  (cost=0.00..0.65 rows=1 width=12) (actual time=0.009..0.010 rows=1 loops=3198)"
"                    Index Cond: (event.id = origin.eventid)"
"Total runtime: 130.206 ms"

Can I force it to follow a certain plan?

Thanks

On Wed, Feb 2, 2011 at 1:59 PM, Maciek Sakrejda <msakrejda@truviso.com> wrote:

Re: Slow statement when using JDBC

From
Tom Lane
Date:
yazan suleiman <yazan.suleiman@gmail.com> writes:
> I've done more research.  This problem appears at the database level.  I
> excluded JDBC by running explain anaylze on 2 different statement, the first
> is a prepared one  The prepared showed the same behavior.  Please see below:

Try declaring the parameter the same type as the column it's being
compared to (which you didn't mention, but I'm guessing it's NUMERIC).

            regards, tom lane
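Concretely, Tom's suggestion means declaring the PREPARE parameter with the column's own type rather than real. The clue is the filter line in the slow plan, ((magnitude)::double precision >= $1): the real parameter forced a cast onto the column, which is what keeps the planner off mag_index. A sketch, assuming the column is NUMERIC (its type wasn't posted in the thread), with the select list abbreviated:

```
-- Same query as t2, but with the parameter declared as numeric
-- (assumed column type) instead of real, so no cast lands on the column:
prepare t3 (numeric) AS
select EVENT.ID, ORIGIN.ID AS ORIGINID, MAGNITUDE.MAGNITUDE
from event.event
    join event.origin on event.id = origin.eventid
    left join event.magnitude on origin.id = event.magnitude.origin_id
WHERE magnitude.magnitude >= $1;

explain analyze execute t3 (7.2);
```

If the type matches, the Filter line should become a plain (magnitude >= $1) condition that the index on magnitude can satisfy.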

Re: Slow statement when using JDBC

From
jwollman
Date:
There was no end to this thread, so, as I have just run into this problem, I
thought I'd post.

I can confirm that Yazan's problem with prepared statements is not a JDBC
problem but one that actually exists in pre-9.2 versions of Postgres.

I work for a company that provides a product with an embedded database. We
currently ship Postgres version 9.0.3 and have run into the exact same
problem: prepared statements with bind variables run significantly slower
than statements without bind variables, whether via JDBC or in the database
itself.

I took the slow-running prepared statement with bind variables from our
application and replicated the exact same problem by running it against the
same version 9.0.3 of Postgres in pgAdmin using
    Prepare foo as <prepared statement>;  Execute foo ('xxxx', 'yyyy');
The query took 40,000 ms to run. (I also took the same query, resolved the
bind variables to actual values, and ran it in the same manner as a prepared
statement. The query took 111 ms to run.)

I also took the same query with bind variables and ran it as a prepared
statement against the same dataset, but on version 9.2.4 of Postgres. The
query took 77 ms and used a reasonable plan.

Conclusion: this appears to be a database problem caused by an odd plan for
prepared statements with bind variables. Kevin Grittner's comment in this
thread seems to be a good explanation.

Good news: this was validated as a bug and was fixed by Tom Lane in version
9.2 of Postgres. See:

http://stackoverflow.com/questions/12368338/how-to-make-postgresql-optimizer-to-build-execution-plan-after-binding-parameter
Thanks,
Jon Wollman
Portfolio Data Architect, Rapid7



--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Slow-statement-when-using-JDBC-tp3368379p5765663.html
Sent from the PostgreSQL - jdbc mailing list archive at Nabble.com.


Re: Slow statement when using JDBC

From
David G Johnston
Date:
arunkmp wrote
> I tested the same query in PHP using PEAR DB connectivity; it is normally
> 2 times faster than with Java connectivity.
>
> For both, my PostgreSQL version is 9.1.
>
> Can anyone explain why it is faster in PHP?

Given that the reason for the poor JDBC performance has been explained, this
likely isn't the best list for answers as to why PHP doesn't seem to
exhibit the same problem...

It would probably also help to supply actual code since your use of the two
libraries could be a contributing factor.

And the versions of the drivers would also be good information to supply...

PHP is possibly bypassing the whole "prepare" aspect and simply providing
what appears to be a prepared statement interface while actually sending
complete queries, without parameters, to the database.  This would explain
the difference but is truly a guess since I have not used Pear/PHP.

David J.





Re: Slow statement when using JDBC

From
arunkmp
Date:
I tested the same query in PHP using PEAR DB connectivity; it is normally 2
times faster than with Java connectivity.

For both, my PostgreSQL version is 9.1.

Can anyone explain why it is faster in PHP?






Re: Slow statement when using JDBC

From
Lachezar Dobrev
Date:
If the prepare phase is the culprit, one might want to try the JDBC driver
with protocol version 2 and rerun any tests:

  jdbc:postgresql://host:port/database?protocolVersion=2

2014-07-21 8:32 GMT+03:00 David G Johnston <david.g.johnston@gmail.com>:


Re: Slow statement when using JDBC

From
"vjai.krishnan"
Date:
I am using PostgreSQL version 9.1. Executing a query without a prepared
statement via the JDBC driver takes double the time it takes in the psql
client. Is there any specific reason for this slowness with the JDBC
driver?





Re: Slow statement when using JDBC

From
Dave Cramer
Date:
The best way to figure this out is to look at the server logs.

In order to figure this out we need more information. What exactly does your code do?



Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca


On 21 July 2014 09:04, vjai.krishnan <vijay.kanbu@gmail.com> wrote:

Re: Slow statement when using JDBC

From
"vjai.krishnan"
Date:
Hi Dave,

Thanks for your reply. I am using the PostgreSQL JDBC driver. Here is the
code that executes the query:

Class.forName("org.postgresql.Driver");
String pConnectionStr = "jdbc:postgresql://localhost:5432/mydb";
conn = DriverManager.getConnection(pConnectionStr);
String sql = "select place_id, (select count(*) from (select"
    + " unnest(ARRAY[5098092,10799979,7470935,10799979,5957205,23663129]) INTERSECT"
    + " select unnest(nameaddress_vector))s) as exactmatch from search_name where"
    + " name_vector @> ARRAY[4784827] and nameaddress_vector @>"
    + " ARRAY[17247,202321,10250,7791183,1099136,63895,202321,46909,23663130] order"
    + " by (case when importance = 0 OR importance IS NULL then"
    + " 0.75-(search_rank::float/40) else importance end) DESC, exactmatch DESC"
    + " limit 2";
stmt = conn.createStatement();
stmt.executeQuery(sql);

The above query takes 440 ms over the JDBC connection, but when I execute
the same query in the psql client it takes only 210 ms.

Regards,
Vijay K





Re: Slow statement when using JDBC

From
Dave Cramer
Date:
Hi Vijay,

Can you show me how you time this?



Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca


On 21 July 2014 09:29, vjai.krishnan <vijay.kanbu@gmail.com> wrote:

Re: Slow statement when using JDBC

From
"vjai.krishnan"
Date:
Hi Dave,

In Java I am using System.currentTimeMillis() to calculate the time
difference, and in the psql client the \timing option shows the time taken
to execute the query.
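As an aside on methodology: a single System.currentTimeMillis() delta around the first JDBC execution also includes JVM and driver warm-up (class loading, first-time parsing), so it is biased against Java. A fairer comparison warms up first and takes the median of several runs. A minimal sketch (the counting loop is just a stand-in for the real query, and MedianTiming is a name invented here):

```java
import java.util.Arrays;
import java.util.function.LongSupplier;

public class MedianTiming {
    // Discard warm-up runs, then report the median of `runs` timed runs.
    static long medianNanos(LongSupplier op, int warmup, int runs) {
        for (int i = 0; i < warmup; i++) op.getAsLong();
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long t0 = System.nanoTime();
            op.getAsLong();
            samples[i] = System.nanoTime() - t0;
        }
        Arrays.sort(samples);
        return samples[runs / 2];
    }

    public static void main(String[] args) {
        // Stand-in workload; in real code this would be stmt.executeQuery(...).
        long ns = medianNanos(() -> {
            long s = 0;
            for (int i = 0; i < 100_000; i++) s += i;
            return s;
        }, 5, 11);
        System.out.println("median >= 0: " + (ns >= 0));
    }
}
```

Measured this way, the steady-state JDBC time is the number worth comparing against psql's \timing.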

Regards,
Vijay K





Re: Slow statement when using JDBC

From
"vjai.krishnan"
Date:
I have found the reason. It is because JDBC communicates only over TCP
sockets, while the psql client uses Unix-domain sockets for localhost.
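One way to check that theory from the psql side is to time the same query over both transports (a sketch; -h 127.0.0.1 forces TCP, as JDBC uses, while omitting -h lets psql pick the local Unix-domain socket by default):

```
# Over TCP, the way JDBC connects:
psql -h 127.0.0.1 -d mydb
# Over the local Unix-domain socket (psql's default on localhost):
psql -d mydb
# In each session, run:  \timing on  and then the query.
```

If the gap between the two psql sessions matches the gap against JDBC, the transport is the whole story.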


