> SELECT * FROM "data" WHERE "data"."fk" = 238496 ORDER BY "data"."id" DESC
> LIMIT 1
>
> This query is quite slow. If I run EXPLAIN on it, I see that it uses an
> Index Scan Backward.
>
> If I omit the order by on the query:
>
> SELECT * FROM "data" WHERE "data"."fk" = 238496 LIMIT 1
>
> It is very fast, and EXPLAIN says that it uses an Index Scan. It is also
> very fast if there aren't any matches. But I've read that I'm not guaranteed
> to get the correct match if I don't use an ORDER BY; Postgres just returns
> its fastest possible match. Is this right? But won't the fastest possible
> match always be the first match in the index? Is there another way to make
> the ORDER BY query go faster?
Unfortunately (and AFAIK), you don't have any other solution, as you want the
*latest* row; maybe clustering this table in that order would help a bit.
Perhaps creating partial indexes could also help (1 <= data.fk < 50001, and so on).
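A sketch of that partial-index idea, assuming the table and column names from the thread; the range bounds and index names here are purely illustrative:

```sql
-- Illustrative partial indexes splitting the table by fk range;
-- the bounds are made up, only data/fk/id come from the thread.
CREATE INDEX data_fk_low_idx  ON data (id) WHERE fk >= 1     AND fk < 50001;
CREATE INDEX data_fk_high_idx ON data (id) WHERE fk >= 50001 AND fk < 100001;
```

Each partial index is much smaller than a full index, so a backward scan within the matching range has less to walk through.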
From the docs:
By default, B-tree indexes store their entries in ascending order with nulls
last. This means that a forward scan of an index on column x produces output
satisfying ORDER BY x (or more verbosely, ORDER BY x ASC NULLS LAST). The
index can also be scanned backward, producing output satisfying ORDER BY x
DESC (or more verbosely, ORDER BY x DESC NULLS FIRST, since NULLS FIRST is
the default for ORDER BY DESC).
Doesn't this mean that when I'm not using the ORDER BY clause, and it uses an
Index Scan, I will always get the latest value in return? Also, I don't
understand why the ORDER BY query is scanning backward, when the record I
want is at the other end?
Thanks!
Odd-R.