Re: Critical performance problems on large databases - Mailing list pgsql-general

From: Bill Gribble
Subject: Re: Critical performance problems on large databases
Date:
Msg-id: 1018526515.29603.34.camel@flophouse
In response to: Critical performance problems on large databases  (Gunther Schadow <gunther@aurora.regenstrief.org>)
Responses: Re: Critical performance problems on large databases  ("Nigel J. Andrews" <nandrews@investsystems.co.uk>)
           Re: Critical performance problems on large databases  (Francisco Reyes <lists@natserv.com>)
List: pgsql-general
On Wed, 2002-04-10 at 17:39, Gunther Schadow wrote:
> PS: we are seriously looking into using pgsql as the core
> of a BIG medical record system, but we already know that
> if we can't get quick online responses (< 2 s) on
> large result sets (10000 records) at least at the first
> page (~ 100 records) we are in trouble.

There are a few tricks to getting fast results for pages of data in
large tables.  I have an application with a scrolling window displaying
data from a million-row table, and I have been able to make it
responsive enough for interactive use.

We grab a few screenfuls of data at a time using LIMIT / OFFSET, enough
to scroll smoothly over a short range.  For LIMIT / OFFSET queries to
be fast, I found it was necessary to CREATE INDEX on the key field,
CLUSTER the table on that index, and ORDER BY the same field.
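
Concretely, with made-up names (a table "records" keyed on a column
"record_id"), the setup looks roughly like this; the CLUSTER syntax
shown is the 7.x-era "CLUSTER index ON table" form:

    -- Illustrative names only; substitute your real table and key column.
    CREATE INDEX records_key_idx ON records (record_id);

    -- Physically order the table by that index so nearby OFFSETs read
    -- nearby pages.  Re-run CLUSTER after heavy updates.
    CLUSTER records_key_idx ON records;

    -- Fetch one "page" of rows for the scrolling window.
    SELECT *
      FROM records
     ORDER BY record_id
     LIMIT 200 OFFSET 1000;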

Then the biggest slowdown is count(*), which we have to do in order to
fake up the scrollbar (so we know what proportion of the data has been
scrolled through).  I have not completely fixed this yet.  I want to
keep a separate mini-table recording how many records are in the big
table and update it with a trigger (the table is mostly static).  At
the moment, I just try hard to minimize how often I call count(*).
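
What I have in mind is roughly the following, again with made-up names
and assuming plpgsql is installed in the database (I haven't actually
done this yet):

    -- One-row table holding the current row count.
    CREATE TABLE records_count (n bigint NOT NULL);
    INSERT INTO records_count SELECT count(*) FROM records;

    -- Trigger function bumps the counter on insert / delete.
    -- (RETURNS opaque is the old spelling; newer releases use
    -- RETURNS trigger.)
    CREATE FUNCTION records_count_adj() RETURNS opaque AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE records_count SET n = n + 1;
        ELSIF TG_OP = ''DELETE'' THEN
            UPDATE records_count SET n = n - 1;
        END IF;
        RETURN NULL;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER records_count_trig
        AFTER INSERT OR DELETE ON records
        FOR EACH ROW EXECUTE PROCEDURE records_count_adj();

    -- The scrollbar then needs only a cheap single-row read:
    SELECT n FROM records_count;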

b.g.
