Re: access time performance problem - Mailing list pgsql-general

From scott.marlowe
Subject Re: access time performance problem
Msg-id Pine.LNX.4.33.0210091041550.28259-100000@css120.ihs.com
In response to access time performance problem  ("Louis-Marie Croisez" <louis-marie.croisez@etca.alcatel.be>)
List pgsql-general
Quick question, are you regularly vacuuming and analyzing your database?
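(If not, a nightly cron job is an easy way to keep things tidy; the schedule below is just a placeholder to adapt:)

```shell
# Hypothetical crontab entry: vacuum and analyze every database
# in the cluster at 3am each night. Adjust user, time, and options.
0 3 * * *  vacuumdb --all --analyze --quiet
```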

Also, ext3 can definitely slow things down.  If your machine is stable and
on a UPS it may be worth your while to just run ext2.

Also, have you compared output from bonnie++ on the Compaq against the
IBM (run it on the same drive that hosts the database, of course)?  It's a
free program you can download to test your drive subsystem's performance.
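(A typical invocation might look like this; the data directory, test size, and user are assumptions to adapt to your box:)

```shell
# Run bonnie++ against the filesystem that holds the database.
# -d: directory to test in    -s: test file size in MB (use ~2x RAM)
# -u: user to run as (bonnie++ refuses to run as root without it)
bonnie++ -d /var/lib/pgsql -s 1024 -u postgres
```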
A SCSI mirror set on 10k drives should be able to read at >30 Megs a
second and an IDE drive should be in the 5 to 15 Megs a second range.

Since Postgresql is designed more for integrity and transactions, it may
not be your best choice here.  I'm not sure what would be your best
choice, but Postgresql is not known for being a real time system with
performance guarantees on response times.

Also, what processor speeds are these two machines?  Just wondering.
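(For reference, the wal_sync_method setting you mention below is just a line in postgresql.conf; which values are actually available depends on the platform. A sketch, with example values only:)

```
# postgresql.conf -- WAL sync method (platform-dependent)
wal_sync_method = open_sync   # one of: fsync, fdatasync, open_sync, open_datasync
```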

On Wed, 9 Oct 2002, Louis-Marie Croisez wrote:

> I have an IBM Xseries 300 single cpu with RH installed, 512Mb RAM and SCSI drive with hardware mirroring.
> The PostgreSQL database is on an ext3 (journaling file system) partition.
> My largest table contains about 30,000 records.
>
> In my project, PostgreSQL is used to feed data to and fetch data from external hardware as quickly as possible.
> The external device asks the IBM for its configuration data, and the goal is to do a fetch on the database and send
> back the info as quickly as possible.
> The second scenario is when the external device wants to back up its configuration.
> A mean time of 50ms between database accesses is foreseen.
> For both scenarios I have chosen auto-commit mode, because every record has to be on disk as quickly as possible.
>
> I have noticed very bad database access times. I then tried another computer: a common desktop PC (Compaq) with an
> IDE drive, less memory and a slower CPU. I got better database access times.
> Here is the results:
>
>                             delete_records        insert_records        update_records
> Compaq mean access time:    2.7ms                 4.5ms                 4.8ms
> IBM mean access time:       22.9ms                24.6ms                25.9ms
>
> When browsing newsgroups, I found that playing with wal_sync_method parameter could give better results.
> I tried with wal_sync_method=open_sync and here are the results:
>
>                             delete_records        insert_records        update_records
> Compaq mean access time:    1.0ms                 2.6ms                 2.6ms
> IBM mean access time:       4.0ms                 1.3ms                 1.3ms
>
> My first question is: how is it possible to get such a gain in time for the IBM between the case wal_sync_method=fsync
> and the case wal_sync_method=open_sync?
>
> Another problem is the following:
> about every 1000 database accesses (not at regular intervals), the database accesses hang for approximately 2500ms.
> I suppose that this time is used by the OS to flush the memory cache to hard disk.
>
> My second question is: how is it possible to avoid such hangs of the database? Is it possible to flush part of the
> cache while working on another part of it, the goal being not to interrupt the whole process?
>
> Thanx for your future comments.
>
> --Louis Croisez.
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
>     (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)
>

