>
> While we are discussing speeding up the sort algorithms
that PostgreSQL uses, we should not forget about the real
> problem: that sorting is _always_ done after fetching the
> whole set selected. In other words, the optimiser does not
> optimise sorting.
That is true, and it is on the TODO list.
> It is a real showstopper for interactive use (ODBC, probably
> JDBC) as you often want to have a way to traverse the dataset
> interactively, using the dataset sorted by primary key for this.
>
> Even worse, MS Access and Delphi both do it as a default when
> using their ready-made components to access a table. So opening
Yea, easy to do when all the data in the table is stored in a flat
file that is sorted on the key. That is how they do it.
> a table from Access using PostODBC will first query the whole table,
> write it all out to a bunch of temp* and psort* files, create a new
> temp* file and only then give the first few rows of it back.
> And as it seems it leaks a lot of memory in the process.
> What makes it even worse is the fact that the whole process is
> repeated when moving backwards on the dataset. It gets a real PITA
> with a dataset that is little bigger (even a few thousand rows), as
> it is not only slow but can cause all kinds of unwanted effects,
> like filling up the disk and crashing the database, especially
> in case of multiple users.
I know Delphi has a problem when you add a row to a data set, then you
want to go backwards. You must re-execute the query to see the new row.
>
> What it should do, IMHO, is to optimise the query so that it will
> traverse the table by the primary key (ok, unique btree index) and
> return just the few rows wanted. For the beginning it would be a big
> improvement if it did so for at least single-table queries sorted by
> a column having a btree index.
True. On the TODO list.
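Until the optimiser learns to do this itself, a client can get the same
effect by paging on the indexed key rather than scrolling a fully sorted
result. A minimal sketch of the idea, assuming a btree-indexed primary
key and a backend that supports a LIMIT clause; the table and column
names ("items", "id") are made up for illustration:

```python
# Sketch of client-side "keyset" paging: with a btree index on the
# primary key, the backend can walk the index and stop after a few
# rows instead of sorting the whole table into psort temp files.

def next_page_query(table, pk, last_seen, page_size):
    """Build a query fetching the next page_size rows ordered by pk.

    last_seen is the key value of the last row already shown; the
    WHERE clause lets the index scan resume right after it.
    """
    return (
        f"SELECT * FROM {table} "
        f"WHERE {pk} > {last_seen} "
        f"ORDER BY {pk} LIMIT {page_size}"
    )

print(next_page_query("items", "id", 100, 20))
# SELECT * FROM items WHERE id > 100 ORDER BY id LIMIT 20
```

Moving backwards works the same way with `<` and `ORDER BY ... DESC`,
which avoids re-executing and re-sorting the whole query each time the
user scrolls up.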
--
Bruce Momjian
maillist@candle.pha.pa.us
------------------------------