Thread: Large difference between elapsed time and run time for queries

Large difference between elapsed time and run time for queries

From
"Scott Buchan"
Date:

Hello,

 

While running benchmarks against my database, I am seeing a large difference between the elapsed time (from stats collected in the logs) and the run time (from running EXPLAIN ANALYZE on the query via ./psql <database>) for each of my queries.  The database is running on a Sun Fire 880 with four 750 MHz processors and 8 GB of RAM, running Solaris 8.

 

I am simulating 200 user connections, each running 6 SELECT queries against 1 indexed table with 50,000 records. The elapsed time for the queries averages around 2.5 seconds, while if I run a query with EXPLAIN ANALYZE during the test, the reported run time is around 300 ms, although it takes much longer (a few seconds) to display the results.  If I reduce the number of concurrent connections to 100, then the run time and elapsed time for the queries are the same.
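To illustrate the two measurements being compared (a hypothetical sketch; the table and predicate here stand in for the real benchmark queries):

```sql
-- EXPLAIN ANALYZE executes the query on the server and reports only
-- server-side execution time; no result rows are transmitted to the
-- client, so client transfer and rendering costs are excluded.
EXPLAIN ANALYZE SELECT * FROM accounts WHERE name = 'smith';
--   ...plan nodes with actual times...
--   Total runtime: 300.12 msec    (illustrative)
```

The elapsed time logged for the same query under load includes everything EXPLAIN ANALYZE omits: converting rows to text and shipping them over the connection to the client.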

 

I have tried numerous configurations in the postgresql.conf file.  I have set shared_buffers to values ranging from 75 MB to 4000 MB with no luck, and increasing sort_mem did not help either.
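For reference, a sketch of the settings being tried (values illustrative). Note that in 7.2/7.3, shared_buffers is specified as a count of 8 kB buffers and sort_mem in kilobytes, not in megabytes:

```
# postgresql.conf sketch -- example values only
shared_buffers = 9600      # 9600 x 8 kB buffers = 75 MB
sort_mem = 4096            # 4 MB per sort operation, per backend
```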

 
When the test is running, the CPU is well over 50% idle, and iostat shows that the processes are not waiting for I/O; disk usage percentage is low.

 

Any help would be appreciated.

 

Thanks.

Re: Large difference between elapsed time and run time for queries

From
Tom Lane
Date:
"Scott Buchan" <sbuchan@technicacorp.com> writes:
> I am simulating 200 user connections each running 6 select queries on 1
> indexed table with 50,000 records. The elapsed time for the queries average
> around 2.5 seconds while if I run the query using explain analyze while the
> test is running, the run time is around 300 ms although it takes much longer
> (few seconds) to display the results.

How many rows are these queries returning?  AFAICS the differential must
be the cost of transmitting the data to the frontend, which of course
does not happen when you use explain analyze.  (I think, but am not
completely sure, that explain analyze also suppresses the CPU effort of
converting the data to text form, as would normally be done before
transmitting it.  But given that you don't see a problem at 100
connections, that's probably not where the issue lies.)
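One way to test whether result transfer dominates the differential (a hypothetical sketch; the table name is a stand-in) is to wrap the query so the server does the same scan work but returns a single row, removing the per-row text conversion and transfer cost:

```sql
-- If this runs in ~300 ms under load while the bare SELECT takes
-- ~2.5 s, the gap is in returning the rows, not executing the query.
SELECT count(*) FROM (SELECT * FROM accounts WHERE name = 'smith') AS t;
```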

> The database is being ran on a sunfire 880 with 4 750mhz
> processors with 8 G RAM running solaris 8

We have seen some other weird performance problems on Solaris (their
standard qsort apparently is very bad, for example).  Might be that you
need to be looking at kernel behavior, not at Postgres.

            regards, tom lane

Re: Large difference between elapsed time and run time for queries

From
"Scott Buchan"
Date:
Thanks for the quick reply.

I just upgraded from 7.2 to 7.3 since 7.3 uses a different qsort
(BSD-licensed).  After running a few tests, I have noticed some performance
gains.

I think another problem was the way I was performing the tests.  I was using the tool "The Grinder" to simulate 300 connections (through JDBC) to the database, each running 6 queries, without any connection pooling.  Once I figure out how to use connection pooling with The Grinder, I will try running the tests again.

Do you know of any other performance issues with using Solaris?

Thanks for the help,

Scott


-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Monday, March 10, 2003 3:08 PM
To: Scott Buchan
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Large difference between elapsed time and run time
for queries
