Thread: What kind of performance can I expect and how to measure?

What kind of performance can I expect and how to measure?

From
"Joost Kraaijeveld"
Date:
Hi all,

My system is PostgreSQL 7.4.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 20020903 (Red Hat Linux 8.0 3.2-7). It
has a Pentium III-733 MHz with 512 MB RAM. It is connected to my workstation (dual XEON 1700 with 1 GB RAM) with a 100
Mb switched network.

I have a table with 31 columns, all fixed-size datatypes. It contains 88393 rows. Doing a "select * from table" with
PGAdmin III in its SQL window, it takes a total of 9206 ms query runtime and a 40638 ms data retrieval runtime.

Is this a reasonable time to get 88393 rows from the database?

If not, what can I do to find the bottleneck (and eventually make it faster)?

Groeten,

Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
e-mail: J.Kraaijeveld@Askesis.nl
web: www.askesis.nl

Re: What kind of performance can I expect and how to measure?

From
"Merlin Moncure"
Date:
Joost wrote:
> My system is PostgreSQL 7.4.1 on i686-pc-linux-gnu, compiled by GCC gcc
> (GCC) 20020903 (Red Hat Linux 8.0 3.2-7). It has a Pentium III-733 MHz
> with 512 MB RAM. It is connected to my workstation (dual XEON 1700 with
> 1 GB RAM) with a 100 Mb switched network.
>
> I have a table with 31 columns, all fixed-size datatypes. It contains
> 88393 rows. Doing a "select * from table" with PGAdmin III in its SQL
> window, it takes a total of 9206 ms query runtime and a 40638 ms data
> retrieval runtime.
>
> Is this a reasonable time to get 88393 rows from the database?
>
> If not, what can I do to find the bottleneck (and eventually make it
> faster)?

The 9206 ms time is what the database actually spent gathering the data
and sending it to you.  This is non-negotiable unless you bump up
hardware, etc, or fetch less data.  This time usually scales linearly
(or close to it) with the size of the dataset you fetch.

The 40638 ms time is pgAdmin putting the data in the grid.  The time
spent here depends on your client and starts to get really nasty
with large tables.  Future versions of pgAdmin might be able to deal
better with large datasets (cursor-based fetch is one proposed
solution).  In the meantime, I would suggest using queries to refine
your terms a little bit...(do you really need to view all 80k records at
once?).
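
As a rough illustration, a cursor-based fetch boils down to something
like this in the SQL window (the cursor and table names below are just
placeholders):

  BEGIN;
  -- declare a cursor instead of pulling everything in one go
  DECLARE big_cur CURSOR FOR SELECT * FROM your_table;
  -- grab rows in manageable chunks; repeat until no rows come back
  FETCH 1000 FROM big_cur;
  CLOSE big_cur;
  COMMIT;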

Merlin


Re: What kind of performance can I expect and how to measure?

From
"Joost Kraaijeveld"
Date:
Hi Merlin,

> The 9206 ms time is what the database actually spent
> gathering the data and sending it to you.  This is non-negotiable unless you bump up
> hardware, etc, or fetch less data.  This time usually scales linearly
> (or close to it) with the size of the dataset you fetch.
>
> The 40638 ms time is pgAdmin putting the data in the grid.  This time
So it takes PostgreSQL 9206 ms to get the data AND send it to the client. It then takes PGAdmin 40638 ms to display the
data?

> solution).  In the meantime, I would suggest using queries to refine
> your terms a little bit...(do you really need to view all 80k
> records at once?).
The application is built in Clarion, a 4GL environment. We do not have any influence over the queries it generates and
executes.


Groeten,

Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
e-mail: J.Kraaijeveld@Askesis.nl
web: www.askesis.nl

Re: What kind of performance can I expect and how to measure?

From
"Merlin Moncure"
Date:
> Hi Merlin,
>
> > The 9206 ms time is what the database actually spent
> > gathering the data and sending it to you.  This is non-negotiable
> > unless you bump up hardware, etc, or fetch less data.  This time
> > usually scales linearly (or close to it) with the size of the dataset
> > you fetch.
> >
> > The 40638 ms time is pgAdmin putting the data in the grid.  The time
> So it takes PostgreSQL 9206 ms to get the data AND send it to the
> client. It then takes PGAdmin 40638 ms to display the data?

That is correct.  This is not a problem with pgAdmin, or postgres, but a
problem with grids.  Conceptually, SQL tables are in an unordered,
infinite space, while grids require an ordered, finite space.  All 4GLs
and data managers have this problem.  The real solution is to refine your
query in a meaningful way (80k rows is more than a human being can deal
with in a practical sense).  If you can't do that, impose an arbitrary
limit on the result set at the point where performance breaks down,
which could be 10-100k rows depending on various factors.
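
Just as a sketch (the "id" ordering column is a placeholder, use
whatever makes sense for your table), that kind of limit can be as
simple as:

  -- cap the result set at a size the client can actually display
  SELECT * FROM your_table ORDER BY id LIMIT 10000;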

To simulate a finite, ordered dataset, pgAdmin takes all the result
data and puts it into GUI controls that are not designed to hold 100k
rows of data...this is a design compromise to allow editing.

Merlin


Re: What kind of performance can I expect and how to measure?

From
"Scott Marlowe"
Date:
On Mon, 2004-08-02 at 06:21, Joost Kraaijeveld wrote:
> Hi all,
>
> My system is PostgreSQL 7.4.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 20020903 (Red Hat Linux 8.0 3.2-7).
> It has a Pentium III-733 MHz with 512 MB RAM. It is connected to my workstation (dual XEON 1700 with 1 GB RAM) with
> a 100 Mb switched network.
>
> I have a table with 31 columns, all fixed-size datatypes. It contains 88393 rows. Doing a "select * from table" with
> PGAdmin III in its SQL window, it takes a total of 9206 ms query runtime and a 40638 ms data retrieval runtime.

This means it took the backend about 9 seconds to prepare the data, and
40 or so seconds total (including the 9 I believe) for the client to
retrieve and then display it.
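
If you want to see the backend portion on its own, one way (this is
just a psql session sketch, the table name is a placeholder) is to let
the server run the query without handing the rows to a client:

  \timing
  -- executes the query but discards the rows, so the reported runtime
  -- excludes client-side retrieval and display overhead
  EXPLAIN ANALYZE SELECT * FROM your_table;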

> Is this a reasonable time to get 88393 rows from the database?

Depends on your row size really.  I'm certain you're not CPU bound if
you've only got one hard drive.  Put that data on a 20-way RAID5 array
and I'm sure it would come back a little quicker.

> If not, what can I do to find the bottleneck (and eventually make it faster)?

The bottleneck is almost always IO to start with.  First, add another
drive and mirror it.  Then go to RAID 1+0, then add more and more
drives.

Read this document about performance tuning:

http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
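
Among other things it covers the memory settings in postgresql.conf.
The values below are only illustrative for a 512 MB box (7.4 takes
these as 8 kB pages or kB, not "MB" strings), so check them against
the guide rather than copying them:

  shared_buffers = 8192           # ~64 MB of shared buffer cache
  sort_mem = 8192                 # 8 MB per sort operation
  effective_cache_size = 32768    # ~256 MB, hint about the OS disk cache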