On Mon, 2004-08-02 at 06:21, Joost Kraaijeveld wrote:
> Hi all,
>
> My system is a PostgreSQL 7.4.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 20020903 (Red Hat Linux 8.0 3.2-7).
> It has a Pentium III 733 MHz with 512 MB RAM. It is connected to my workstation (dual Xeon 1700 with 1 GB RAM) over a
> 100 Mbit switched network.
>
> I have a table with 31 columns, all fixed-size datatypes. It contains 88393 rows. Doing a "select * from table" with
> PGAdminIII in its SQL window, it takes a total of 9206 ms query runtime and a 40638 ms data retrieval runtime.
This means it took the backend about 9 seconds to prepare the data, and
40 or so seconds total (including the 9 I believe) for the client to
retrieve and then display it.
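As a rough sanity check on where those 40 seconds go, you can estimate the raw wire time. The ~150 bytes/row figure below is an assumption on my part (plausible for 31 fixed-size columns, but not stated in your post):

```python
rows = 88393
bytes_per_row = 150  # assumption: ~150 bytes/row for 31 fixed-size columns

payload_mb = rows * bytes_per_row / 1e6     # total result size, ~13 MB

link_mb_per_s = 100 / 8                     # 100 Mbit switched network ~ 12.5 MB/s peak
transfer_s = payload_mb / link_mb_per_s     # best-case time on the wire

print(f"payload: {payload_mb:.1f} MB, wire time: {transfer_s:.1f} s")
```

Under that assumption the network only accounts for a second or so, which would point at client-side parsing and display in PGAdminIII, not the link, as where most of the retrieval time is spent.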
> Is this a reasonable time to get 88393 rows from the database?
Depends on your row size really. I'm certain you're not CPU bound if
you've only got one hard drive. Put that data on a 20-way RAID-5 array
and I'm sure it would come back a little quicker.
> If not, what can I do to find the bottleneck (and eventually make it faster)?
The bottleneck is almost always IO to start with. First, add another
drive and mirror it. Then go to RAID 1+0, and keep adding more
drives.
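To separate backend time from client overhead before spending money on disks, one option is to run the query under EXPLAIN ANALYZE in psql, which executes it on the server and reports timing without shipping all the rows to a GUI ("mytable" below stands in for your table name):

```sql
-- Executes the query and reports actual server-side execution time;
-- the result rows themselves are not transferred to the client.
EXPLAIN ANALYZE SELECT * FROM mytable;
```

If that number is close to your 9 seconds, the backend really is doing that much work; if it's much smaller, look at the client and the network first.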
Read this document about performance tuning:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html