On Wed, 2002-04-10 at 17:39, Gunther Schadow wrote:
> PS: we are seriously looking into using pgsql as the core
> of a BIG medical record system, but we already know that
> if we can't get quick online responses (< 2 s) on
> large result sets (10000 records) at least at the first
> page (~ 100 records) we are in trouble.
There are a few tricks to getting fast results for pages of data in
large tables. I have an application in which we have a scrolling window
displaying data from a million-row table, and I have been able to make
it responsive enough for interactive use.
We grab pages of a few screenfuls of data at a time using LIMIT /
OFFSET, enough to scroll smoothly over a short range. For LIMIT /
OFFSET queries to be fast, I found it was necessary to create an index
on the key field, CLUSTER the table on that index, and ORDER BY that
field in the query.
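A minimal sketch of what I mean (table and column names are invented
for illustration):

    -- index on the paging key, then physically reorder the table on it
    CREATE INDEX bigtable_key_idx ON bigtable (keycol);
    CLUSTER bigtable_key_idx ON bigtable;

    -- fetch one page of ~100 rows, ordered by the clustered key
    SELECT * FROM bigtable ORDER BY keycol LIMIT 100 OFFSET 200;

Two caveats: PostgreSQL does not maintain the clustering as rows
change, so CLUSTER has to be re-run periodically, and OFFSET still has
to walk past the skipped rows, so very deep pages get slower.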
Then the biggest slowdown is count(*), which we have to do in order to
fake up the scrollbar (so we know what proportion of the data has been
scrolled through). I have not completely fixed this yet. I want to
keep a separate mini-table of how many records are in the big table and
update it with a trigger (the table is mostly static). ATM, I just try
hard to minimize the times I call count(*).
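Something along these lines should do it (untested sketch; the names
are invented, it assumes PL/pgSQL is installed, and recent versions
spell the return type "trigger"):

    -- one-row table holding the cached count, seeded from the real count
    CREATE TABLE bigtable_rowcount (n bigint);
    INSERT INTO bigtable_rowcount SELECT count(*) FROM bigtable;

    CREATE FUNCTION maintain_rowcount() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE bigtable_rowcount SET n = n + 1;
        ELSE  -- DELETE
            UPDATE bigtable_rowcount SET n = n - 1;
        END IF;
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER bigtable_rowcount_trig
        AFTER INSERT OR DELETE ON bigtable
        FOR EACH ROW EXECUTE PROCEDURE maintain_rowcount();

The scrollbar code then does SELECT n FROM bigtable_rowcount, which
reads one row instead of scanning a million. The catch is that every
insert or delete updates the same counter row, which serializes
concurrent writers, but for a mostly static table that should not
matter.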
b.g.