Andrei:
Hi to all,
I have the following two examples. Depending on whether the OFFSET is small (10) or large (>50000), what is the impact on the performance of the query? I noticed that if I return more data (columns) or add more joins, the query runs even slower when the OFFSET is large. How can I somehow improve the performance of this?
Merlin:
OFFSET is not suitable for traversing large data sets; better not to use it at all!
There are many ways to deal with this problem, the two most direct being the view approach and the cursor approach.
cursor approach:
declare report_order cursor with hold for select * from report r, order o [...]
Remember to close the cursor when you're done. Now fetch time is proportional to the number of rows fetched, and should be very fast. The major drawback to this approach is that cursors in postgres (currently) are always insensitive, so changes made by other users after you declare the cursor are not visible to you. If this is a big deal, try the view approach.
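The batched-fetch idea can be sketched in Python using the built-in sqlite3 module as a stand-in for a Postgres connection (table name and contents are invented for illustration; with Postgres you would DECLARE the cursor server-side as above and FETCH from it):

```python
import sqlite3

# In-memory database standing in for Postgres; this table is made up.
conn = sqlite3.connect(":memory:")
conn.execute("create table report (report_id integer, payload text)")
conn.executemany("insert into report values (?, ?)",
                 [(i, "row %d" % i) for i in range(5000)])

# A DB-API cursor streams rows from the query incrementally, so each
# fetchmany() call costs roughly as much as the rows it returns -- the
# same idea as FETCH-ing batches from a server-side cursor.
cur = conn.execute("select report_id, payload from report order by report_id")
total = 0
while True:
    batch = cur.fetchmany(1000)
    if not batch:
        break
    total += len(batch)   # process the batch here
cur.close()               # remember to close the cursor when done
print(total)
```

Each loop iteration touches only the next 1000 rows, rather than re-scanning everything skipped by an OFFSET.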
view approach:
create view report_order as select * from report r, order o [...]
and this:
prepare fetch_from_report_order(numeric, numeric, int4) as
select * from report_order where order_id >= $1 and
(order_id > $1 or report_id > $2)
order by order_id, report_id limit $3;
To fetch the next 1000 records from report_order:
execute fetch_from_report_order(o, f, 1000);
where o and f are the last key values you fetched (pass in zeroes to start it off).
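The paging loop can be sketched end to end in Python with the built-in sqlite3 module standing in for Postgres (sqlite3 has no PREPARE, so a parameterized query plays that role; the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table report_order (order_id integer, report_id integer)")
conn.executemany("insert into report_order values (?, ?)",
                 [(o, r) for o in range(1, 6) for r in range(1, 1001)])

# Same WHERE clause as the prepared statement above: seek past the last
# (order_id, report_id) pair instead of skipping rows with OFFSET.
PAGE_SQL = """select order_id, report_id from report_order
              where order_id >= ? and (order_id > ? or report_id > ?)
              order by order_id, report_id limit ?"""

def fetch_page(last_order, last_report, n=1000):
    return conn.execute(PAGE_SQL,
                        (last_order, last_order, last_report, n)).fetchall()

o, f, seen = 0, 0, 0          # pass in zeroes to start it off
while True:
    page = fetch_page(o, f)
    if not page:
        break
    seen += len(page)
    o, f = page[-1]           # remember the last key values fetched
print(seen)
```

Every page is found by an index-friendly seek on (order_id, report_id), so page 50 costs the same as page 1.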
This is not quite as fast as the cursor approach (but it will be when we get a proper row constructor, heh), but it is more flexible in that it is sensitive to changes from other users. This is more of a 'permanent' binding, whereas a cursor is a binding around a particular task.
Good luck!
Merlin