Thread: OFFSET impact on Performance???

OFFSET impact on Performance???

From
"Andrei Bintintan"
Date:

Hi to all,

I have the following 2 examples. Now, regarding the OFFSET: whether it is small (10) or big (>50000), what is the impact on the performance of the query? I noticed that if I return more data (columns) or if I add more joins, the query runs even slower when the OFFSET is bigger. How can I improve the performance of this?

Best regards,
Andy.

explain analyze
SELECT o.id
FROM report r
INNER JOIN orders o ON o.id=r.id_order AND o.id_status=6
ORDER BY 1 LIMIT 10 OFFSET 10

 
Limit  (cost=44.37..88.75 rows=10 width=4) (actual time=0.160..0.275 rows=10 loops=1)
  ->  Merge Join  (cost=0.00..182150.17 rows=41049 width=4) (actual time=0.041..0.260 rows=20 loops=1)
        Merge Cond: ("outer".id_order = "inner".id)
        ->  Index Scan using report_id_order_idx on report r  (cost=0.00..157550.90 rows=42862 width=4) (actual time=0.018..0.075 rows=20 loops=1)
        ->  Index Scan using orders_pkey on orders o  (cost=0.00..24127.04 rows=42501 width=4) (actual time=0.013..0.078 rows=20 loops=1)
              Filter: (id_status = 6)
Total runtime: 0.373 ms

explain analyze
SELECT o.id
FROM report r
INNER JOIN orders o ON o.id=r.id_order AND o.id_status=6
ORDER BY 1 LIMIT 10 OFFSET 1000000
Limit  (cost=31216.85..31216.85 rows=1 width=4) (actual time=1168.152..1168.152 rows=0 loops=1)
  ->  Sort  (cost=31114.23..31216.85 rows=41049 width=4) (actual time=1121.769..1152.246 rows=42693 loops=1)
        Sort Key: o.id
        ->  Hash Join  (cost=2329.99..27684.03 rows=41049 width=4) (actual time=441.879..925.498 rows=42693 loops=1)
              Hash Cond: ("outer".id_order = "inner".id)
              ->  Seq Scan on report r  (cost=0.00..23860.62 rows=42862 width=4) (actual time=38.634..366.035 rows=42864 loops=1)
              ->  Hash  (cost=2077.74..2077.74 rows=42501 width=4) (actual time=140.200..140.200 rows=0 loops=1)
                    ->  Seq Scan on orders o  (cost=0.00..2077.74 rows=42501 width=4) (actual time=0.059..96.890 rows=42693 loops=1)
                          Filter: (id_status = 6)
Total runtime: 1170.586 ms

Re: [SQL] OFFSET impact on Performance???

From
Richard Huxton
Date:
Andrei Bintintan wrote:
> Hi to all,
>
> I have the following 2 examples. Now, regarding the OFFSET: whether it
> is small (10) or big (>50000), what is the impact on the performance of
> the query? I noticed that if I return more data (columns) or if I add
> more joins, the query runs even slower when the OFFSET is bigger. How
> can I improve the performance of this?

There's really only one way to do an offset of 1000 and that's to fetch
1000 rows and then some and discard the first 1000.

If you're using this to provide "pages" of results, could you use a cursor?

--
   Richard Huxton
   Archonet Ltd

Re: OFFSET impact on Performance???

From
"Merlin Moncure"
Date:
Andrei:
Hi to all,

I have the following 2 examples. Now, regarding the OFFSET: whether it is small (10) or big (>50000), what is the impact on the performance of the query? I noticed that if I return more data (columns) or if I add more joins, the query runs even slower when the OFFSET is bigger. How can I improve the performance of this?

Merlin:
Offset is not suitable for traversal of large data sets.  Better not to use it at all!

There are many ways to deal with this problem, the two most direct being the view approach and the cursor approach.

cursor approach:
DECLARE report_order CURSOR WITH HOLD FOR SELECT * FROM report r, orders o [...]

Remember to close the cursor when you're done.  Now fetch time is proportional to the number of rows fetched, and should be very fast.  The major drawback to this approach is that cursors in postgres (currently) are always insensitive, so record changes made by other users after you declare the cursor are not visible to you.  If this is a big deal, try the view approach.
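For concreteness, here is a minimal sketch of the whole cursor lifecycle (column names assumed from Andy's query above):

DECLARE report_order CURSOR WITH HOLD FOR
    SELECT r.id_order, o.id
    FROM report r
    INNER JOIN orders o ON o.id = r.id_order AND o.id_status = 6
    ORDER BY o.id;

FETCH FORWARD 10 FROM report_order;   -- page 1
FETCH FORWARD 10 FROM report_order;   -- page 2
MOVE FORWARD 480 IN report_order;     -- skip ahead without transferring rows
FETCH FORWARD 10 FROM report_order;   -- page 50
CLOSE report_order;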

view approach:
create view report_order as select * from report r, orders o [...]

and this:
prepare fetch_from_report_order(numeric, numeric, int4) as
    select * from report_order where order_id >= $1 and
        (order_id > $1 or report_id > $2)
        order by order_id, report_id limit $3;

fetch next 1000 records from report_order:
execute fetch_from_report_order(o, f, 1000);  o and f being the last key values you fetched (pass in zeroes to start it
off).
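Concretely, the calls might look like this (key values invented for illustration):

execute fetch_from_report_order(0, 0, 1000);        -- first batch
execute fetch_from_report_order(1234, 5678, 1000);  -- next batch: 1234 and 5678 are
                                                    -- the order_id and report_id of
                                                    -- the last row fetched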

This is not quite as fast as the cursor approach (but it will be when we get a proper row constructor, heh), but it is more flexible in that it is sensitive to changes from other users.  This is more of a 'permanent' binding, whereas a cursor is a binding around a particular task.

Good luck!
Merlin



Re: [SQL] OFFSET impact on Performance???

From
"Andrei Bintintan"
Date:
> If you're using this to provide "pages" of results, could you use a
> cursor?
What do you mean by that? Cursor?

Yes, I'm using this to provide "pages", but if I jump to the last pages it
gets very slow.

Andy.

----- Original Message -----
From: "Richard Huxton" <dev@archonet.com>
To: "Andrei Bintintan" <klodoma@ar-sd.net>
Cc: <pgsql-sql@postgresql.org>; <pgsql-performance@postgresql.org>
Sent: Thursday, January 20, 2005 2:10 PM
Subject: Re: [SQL] OFFSET impact on Performance???


> Andrei Bintintan wrote:
>> Hi to all,
>>
>> I have the following 2 examples. Now, regarding the OFFSET: whether it
>> is small (10) or big (>50000), what is the impact on the performance of
>> the query? I noticed that if I return more data (columns) or if I add
>> more joins, the query runs even slower when the OFFSET is bigger. How
>> can I improve the performance of this?
>
> There's really only one way to do an offset of 1000 and that's to fetch
> 1000 rows and then some and discard the first 1000.
>
> If you're using this to provide "pages" of results, could you use a
> cursor?
>
> --
>   Richard Huxton
>   Archonet Ltd
>


Re: [SQL] OFFSET impact on Performance???

From
Richard Huxton
Date:
Andrei Bintintan wrote:
>> If you're using this to provide "pages" of results, could you use a
>> cursor?
>
> What do you mean by that? Cursor?
>
> Yes, I'm using this to provide "pages", but if I jump to the last pages
> it gets very slow.

DECLARE mycursor CURSOR FOR SELECT * FROM ...
FETCH FORWARD 10 IN mycursor;
CLOSE mycursor;

Repeated FETCHes would let you step through your results. That won't
work if you have a web-app making repeated connections.

If you've got a web-application then you'll probably want to insert the
results into a cache table for later use.
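A rough sketch of such a cache table (all names invented for illustration; row_num would be assigned by the application, or by a sequence, when the full result is inserted once):

CREATE TABLE result_cache (
    session_id  text,
    row_num     int,
    id          int,
    PRIMARY KEY (session_id, row_num)
);

-- page 6 (rows 51-60) then becomes a cheap index scan:
SELECT id
FROM result_cache
WHERE session_id = 'abc123'
  AND row_num BETWEEN 51 AND 60
ORDER BY row_num;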

--
   Richard Huxton
   Archonet Ltd

Re: [SQL] OFFSET impact on Performance???

From
Alex Turner
Date:
I am also very interested in this very question. Is there any way to
declare a persistent cursor that remains open between pg sessions?
This would be better than a temp table because you would not have to
do the initial select and insert into a fresh table and incur those IO
costs, which are often very heavy, and are the reason why one would want
to use a cursor.

Alex Turner
NetEconomist


On Thu, 20 Jan 2005 15:20:59 +0000, Richard Huxton <dev@archonet.com> wrote:
> Andrei Bintintan wrote:
> >> If you're using this to provide "pages" of results, could you use a
> >> cursor?
> >
> > What do you mean by that? Cursor?
> >
> > Yes I'm using this to provide "pages", but If I jump to the last pages
> > it goes very slow.
>
> DECLARE mycursor CURSOR FOR SELECT * FROM ...
> FETCH FORWARD 10 IN mycursor;
> CLOSE mycursor;
>
> Repeated FETCHes would let you step through your results. That won't
> work if you have a web-app making repeated connections.
>
> If you've got a web-application then you'll probably want to insert the
> results into a cache table for later use.
>
> --
>    Richard Huxton
>    Archonet Ltd
>

Re: [SQL] OFFSET impact on Performance???

From
Ron Mayer
Date:
Richard Huxton wrote:
>
> If you've got a web-application then you'll probably want to insert the
> results into a cache table for later use.
>

If I have quite a bit of activity like this (people selecting 10000 out
of a few million rows and paging through them in a web browser), would
it be good to have a single table with a userid column shared by all
users, or a separate table for each user that can be truncated/dropped?

I started out with one table; but with people doing tens of thousands
of inserts and deletes per session, I had a pretty hard time figuring
out a reasonable vacuum strategy.

Eventually I started doing a whole bunch of create table tmp_XXXX
tables where XXXX is a userid; and a script to drop these tables - but
that's quite ugly in a different way.

With 8.0 I guess I'll try the single table again - perhaps what I
want is to always have an I/O-throttled vacuum running...  hmm.

Any suggestions?

Re: [SQL] OFFSET impact on Performance???

From
Richard Huxton
Date:
Alex Turner wrote:
> I am also very interested in this very question. Is there any way
> to declare a persistent cursor that remains open between pg sessions?

Not sure how this would work. What do you do with multiple connections?
Only one can access the cursor, so which should it be?

>  This would be better than a temp table because you would not have to
>  do the initial select and insert into a fresh table and incur those
> IO costs, which are often very heavy, and are the reason why one would
> want to use a cursor.

I'm pretty sure two things mean there's less difference than you might
expect:
1. Temp tables don't fsync
2. A cursor will spill to disk beyond a certain size

--
   Richard Huxton
   Archonet Ltd

Re: [SQL] OFFSET impact on Performance???

From
Greg Stark
Date:
"Andrei Bintintan" <klodoma@ar-sd.net> writes:

> > If you're using this to provide "pages" of results, could you use a cursor?
> What do you mean by that? Cursor?
>
> Yes, I'm using this to provide "pages", but if I jump to the last pages it
> gets very slow.

The best way to do pages is not to use offset or cursors but to use an
index. This only works if you can enumerate all the sort orders the
application might be using and can have an index on each of them.

To do this the query would look something like:

SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50

Then you take note of the last value used on a given page and if the user
selects "next" you pass that as the starting point for the next page.

This query takes the same amount of time no matter how many records are in the
table and no matter what page of the result set the user is on. It should
actually be instantaneous even if the user is on the hundredth page of
millions of records, because it uses an index both for finding the right
point to start and for the ordering.

It also has the advantage that it works even if the list of items changes as
the user navigates. If you use OFFSET and someone inserts a record in the
table then the "next" page will overlap the current page. Worse, if someone
deletes a record then "next" will skip a record.

The disadvantages of this are a) it's hard (but not impossible) to go
backwards, and b) it's impossible to give the user a list of pages and let
them skip around willy-nilly.
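For instance, "previous" can be approximated by flipping both the comparison and the sort, then reversing the fetched rows in the application - a sketch:

SELECT * FROM tab WHERE col < ? ORDER BY col DESC LIMIT 50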


(If this is for a web page then I specifically don't recommend cursors. It
would mean you'd need some complex session management system that
guarantees the user will always come back to the same postgres session, with
some garbage collection if the user disappears. And it means the URL is only
good for a limited amount of time. If they bookmark it, it'll break if they
come back the next day.)

--
greg

Re: [SQL] OFFSET impact on Performance???

From
Richard Huxton
Date:
Ron Mayer wrote:
> Richard Huxton wrote:
>
>>
>> If you've got a web-application then you'll probably want to insert
>> the results into a cache table for later use.
>>
>
> If I have quite a bit of activity like this (people selecting 10000 out
> of a few million rows and paging through them in a web browser), would
> it be good to have a single table with a userid column shared by all
> users, or a separate table for each user that can be truncated/dropped?
>
> I started out with one table; but with people doing tens of thousands
> of inserts and deletes per session, I had a pretty hard time figuring
> out a reasonable vacuum strategy.

As often as you can, and make sure your config allocates enough
free-space-map for them. Unless, of course, you end up I/O saturated.

> Eventually I started doing a whole bunch of create table tmp_XXXX
> tables where XXXX is a userid; and a script to drop these tables - but
> that's quite ugly in a different way.
>
> With 8.0 I guess I'll try the single table again - perhaps what I
> want is to always have an I/O-throttled vacuum running...  hmm.

Well, there have been some tweaks, but I don't know if they'll help in
this case.

--
   Richard Huxton
   Archonet Ltd

Re: [SQL] OFFSET impact on Performance???

From
Richard Huxton
Date:
Greg Stark wrote:
> "Andrei Bintintan" <klodoma@ar-sd.net> writes:
>
>
>>>If you're using this to provide "pages" of results, could you use a cursor?
>>
>>What do you mean by that? Cursor?
>>
>>Yes, I'm using this to provide "pages", but if I jump to the last pages it
>>gets very slow.
>
>
> The best way to do pages is not to use offset or cursors but to use an
> index. This only works if you can enumerate all the sort orders the
> application might be using and can have an index on each of them.
>
> To do this the query would look something like:
>
> SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50
>
> Then you take note of the last value used on a given page and if the user
> selects "next" you pass that as the starting point for the next page.

Greg's is the most efficient, but you need to make sure you have a
suitable key available in the output of your select.

Also, since you are repeating the query you could get different results
as people insert/delete rows. This might or might not be what you want.

A similar solution is to partition by date/alphabet or similar, then
page those results. That can reduce your resultset to a manageable size.
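For example (table and column names invented for illustration), paging within one letter of an alphabetical index:

SELECT * FROM customers
WHERE name >= 'M' AND name < 'N'
ORDER BY name LIMIT 50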
--
   Richard Huxton
   Archonet Ltd

Re: [SQL] OFFSET impact on Performance???

From
Ragnar Hafstað
Date:
On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:

> The best way to do pages is not to use offset or cursors but to use an
> index. This only works if you can enumerate all the sort orders the
> application might be using and can have an index on each of them.
>
> To do this the query would look something like:
>
> SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50
>
> Then you take note of the last value used on a given page and if the user
> selects "next" you pass that as the starting point for the next page.

This will only work unchanged if the index is unique. Imagine, for
example, that you have more than 50 rows with the same value of col.

One way to fix this is to use ORDER BY col, oid

gnari



Re: [SQL] OFFSET impact on Performance???

From
Ragnar Hafstað
Date:
On Thu, 2005-01-20 at 19:12 +0000, Ragnar Hafstað wrote:
> On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:
>
> > The best way to do pages is not to use offset or cursors but to use an
> > index. This only works if you can enumerate all the sort orders the
> > application might be using and can have an index on each of them.
> >
> > To do this the query would look something like:
> >
> > SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50
> >
> > Then you take note of the last value used on a given page and if the user
> > selects "next" you pass that as the starting point for the next page.
>
> This will only work unchanged if the index is unique. Imagine, for
> example, that you have more than 50 rows with the same value of col.
>
> One way to fix this is to use ORDER BY col, oid

and a slightly more complex WHERE clause as well, of course
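for example, in the style of Merlin's prepared statement earlier in the
thread:

SELECT * FROM tab
WHERE col >= ? AND (col > ? OR oid > ?)
ORDER BY col, oid LIMIT 50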

gnari



Re: [SQL] OFFSET impact on Performance???

From
"Andrei Bintintan"
Date:
Now I have read all the posts and I have some answers.

Yes, I have a web application.
I HAVE to know exactly how many pages there are, and I have to allow the
user to jump to a specific page (this is where I used LIMIT and OFFSET). We
have this feature and I cannot take it out.


>> > SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50
Now this solution looks very fast, but I cannot implement it, because I
cannot jump from page 1 to page xxxx, only to page 2, because with this
approach I only know where page 1 ended. And we have some really complicated
WHEREs, and about 10 tables are involved in the SQL query.

About CURSORs, I have to read more about them, because this is the first
time I have heard about them.
I don't know if temporary tables are a solution; really, I don't think so -
there are a lot of users working at the same time on the same page.

So... still DIGGING for solutions.

Andy.

----- Original Message -----
From: "Ragnar Hafstað" <gnari@simnet.is>
To: <pgsql-performance@postgresql.org>
Cc: "Andrei Bintintan" <klodoma@ar-sd.net>; <pgsql-sql@postgresql.org>
Sent: Thursday, January 20, 2005 9:23 PM
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???


> On Thu, 2005-01-20 at 19:12 +0000, Ragnar Hafstað wrote:
>> On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:
>>
>> > The best way to do pages is not to use offset or cursors but to use
>> > an
>> > index. This only works if you can enumerate all the sort orders the
>> > application might be using and can have an index on each of them.
>> >
>> > To do this the query would look something like:
>> >
>> > SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50
>> >
>> > Then you take note of the last value used on a given page and if the
>> > user
>> > selects "next" you pass that as the starting point for the next page.
>>
>> This will only work unchanged if the index is unique. Imagine, for
>> example, that you have more than 50 rows with the same value of col.
>>
>> One way to fix this is to use ORDER BY col, oid
>
> and a slightly more complex WHERE clause as well, of course
>
> gnari
>
>
>


Re: [SQL] OFFSET impact on Performance???

From
Greg Stark
Date:
Alex Turner <armtuk@gmail.com> writes:

> I am also very interested in this very question. Is there any way to
> declare a persistent cursor that remains open between pg sessions?
> This would be better than a temp table because you would not have to
> do the initial select and insert into a fresh table and incur those IO
> costs, which are often very heavy, and are the reason why one would want
> to use a cursor.

TANSTAAFL. How would such a persistent cursor be implemented if not by
building a temporary table somewhere behind the scenes?

There could be some advantage if the data were stored in a temporary table
marked as not having to be WAL logged. Instead it could be automatically
cleared on every database start.

--
greg

Re: [SQL] OFFSET impact on Performance???

From
"Andrei Bintintan"
Date:
The problem still stays open.

The thing is that I have about 20-30 clients that are using that SQL query
where the offset and limit are involved. So I cannot create a temp table,
because that means I'd have to make a temp table for each session...
which is a very bad idea. Cursors have much the same problem. In my
application the WHERE conditions can be very different for each user (session).

The only solution that I see at the moment is to work on the query, or to
write a more complex WHERE clause to limit the result output. So, no
replacement for OFFSET/LIMIT.

Best regards,
Andy.


----- Original Message -----
From: "Greg Stark" <gsstark@mit.edu>
To: <alex@neteconomist.com>
Cc: "Richard Huxton" <dev@archonet.com>; "Andrei Bintintan"
<klodoma@ar-sd.net>; <pgsql-sql@postgresql.org>;
<pgsql-performance@postgresql.org>
Sent: Tuesday, January 25, 2005 8:28 PM
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???


>
> Alex Turner <armtuk@gmail.com> writes:
>
>> I am also very interested in this very question. Is there any way to
>> declare a persistent cursor that remains open between pg sessions?
>> This would be better than a temp table because you would not have to
>> do the initial select and insert into a fresh table and incur those IO
>> costs, which are often very heavy, and are the reason why one would want
>> to use a cursor.
>
> TANSTAAFL. How would such a persistent cursor be implemented if not by
> building a temporary table somewhere behind the scenes?
>
> There could be some advantage if the data were stored in a temporary table
> marked as not having to be WAL logged. Instead it could be automatically
> cleared on every database start.
>
> --
> greg
>
>


Re: [SQL] OFFSET impact on Performance???

From
Alex Turner
Date:
As I read the docs, a temp table doesn't solve our problem, as it does
not persist between sessions.  With a web page there is no guarantee
that you will receive the same connection between requests, so a temp
table doesn't solve the problem.  It looks like you either have to
create a real table (which is undesirable because it has to be
physically synced, and TTFB will be very poor) or create an application
tier between the web tier and the database tier to allow data to
persist between requests, tied to a unique session id.

Looks like the solution to this problem is not the RDBMS, IMHO.

Alex Turner
NetEconomist


On Wed, 26 Jan 2005 12:11:49 +0200, Andrei Bintintan <klodoma@ar-sd.net> wrote:
> The problem still stays open.
>
> The thing is that I have about 20-30 clients that are using that SQL query
> where the offset and limit are involved. So I cannot create a temp table,
> because that means I'd have to make a temp table for each session...
> which is a very bad idea. Cursors have much the same problem. In my
> application the WHERE conditions can be very different for each user (session).
>
> The only solution that I see at the moment is to work on the query, or to
> write a more complex WHERE clause to limit the result output. So, no
> replacement for OFFSET/LIMIT.
>
> Best regards,
> Andy.
>
>
> ----- Original Message -----
> From: "Greg Stark" <gsstark@mit.edu>
> To: <alex@neteconomist.com>
> Cc: "Richard Huxton" <dev@archonet.com>; "Andrei Bintintan"
> <klodoma@ar-sd.net>; <pgsql-sql@postgresql.org>;
> <pgsql-performance@postgresql.org>
> Sent: Tuesday, January 25, 2005 8:28 PM
> Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???
>
>
> >
> > Alex Turner <armtuk@gmail.com> writes:
> >
> >> I am also very interested in this very question. Is there any way to
> >> declare a persistent cursor that remains open between pg sessions?
> >> This would be better than a temp table because you would not have to
> >> do the initial select and insert into a fresh table and incur those IO
> >> costs, which are often very heavy, and are the reason why one would want
> >> to use a cursor.
> >
> > TANSTAAFL. How would such a persistent cursor be implemented if not by
> > building a temporary table somewhere behind the scenes?
> >
> > There could be some advantage if the data were stored in a temporary table
> > marked as not having to be WAL logged. Instead it could be automatically
> > cleared on every database start.
> >
> > --
> > greg
> >
> >
>
>

Re: [SQL] OFFSET impact on Performance???

From
Richard Huxton
Date:
Alex Turner wrote:
> As I read the docs, a temp table doesn't solve our problem, as it does
> not persist between sessions.  With a web page there is no guarantee
> that you will receive the same connection between requests, so a temp
> table doesn't solve the problem.  It looks like you either have to
> create a real table (which is undesirable because it has to be
> physically synced, and TTFB will be very poor) or create an application
> tier between the web tier and the database tier to allow data to
> persist between requests, tied to a unique session id.
>
> Looks like the solution to this problem is not the RDBMS, IMHO.

It's less the RDBMS than the web application. You're trying to mix a
stateful setup (the application) with a stateless presentation layer
(the web). If you're using PHP (which doesn't offer a "real" middle
layer) you might want to look at memcached.

--
   Richard Huxton
   Archonet Ltd

Re: OFFSET impact on Performance???

From
David Brown
Date:
Although larger offsets have some effect, your real problem is the sort
(of 42693 rows).

Try:

SELECT r.id_order
FROM report r
WHERE r.id_order IN
   (SELECT id
   FROM orders
   WHERE id_status = 6
   ORDER BY 1
   LIMIT 10 OFFSET 1000)
ORDER BY 1

The subquery doesn't *have* to sort because the table is already ordered
on the primary key.
You can still add a join to orders outside the subselect without
significant cost.
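A sketch of what that might look like (untested):

SELECT o.*
FROM report r
INNER JOIN orders o ON o.id = r.id_order
WHERE r.id_order IN
   (SELECT id
    FROM orders
    WHERE id_status = 6
    ORDER BY 1
    LIMIT 10 OFFSET 1000)
ORDER BY o.id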

Incidentally, I don't know how you got the first plan - it should
include a sort as well.

Andrei Bintintan wrote:

 > explain analyze
 > SELECT o.id
 > FROM report r
 > INNER JOIN orders o ON o.id=r.id_order AND o.id_status=6
 > ORDER BY 1 LIMIT 10 OFFSET 10
 >
 > Limit  (cost=44.37..88.75 rows=10 width=4) (actual time=0.160..0.275 rows=10 loops=1)
 >   ->  Merge Join  (cost=0.00..182150.17 rows=41049 width=4) (actual time=0.041..0.260 rows=20 loops=1)
 >         Merge Cond: ("outer".id_order = "inner".id)
 >         ->  Index Scan using report_id_order_idx on report r  (cost=0.00..157550.90 rows=42862 width=4) (actual time=0.018..0.075 rows=20 loops=1)
 >         ->  Index Scan using orders_pkey on orders o  (cost=0.00..24127.04 rows=42501 width=4) (actual time=0.013..0.078 rows=20 loops=1)
 >               Filter: (id_status = 6)
 > Total runtime: 0.373 ms
 >
 > explain analyze
 > SELECT o.id
 > FROM report r
 > INNER JOIN orders o ON o.id=r.id_order AND o.id_status=6
 > ORDER BY 1 LIMIT 10 OFFSET 1000000
 > Limit  (cost=31216.85..31216.85 rows=1 width=4) (actual time=1168.152..1168.152 rows=0 loops=1)
 >   ->  Sort  (cost=31114.23..31216.85 rows=41049 width=4) (actual time=1121.769..1152.246 rows=42693 loops=1)
 >         Sort Key: o.id
 >         ->  Hash Join  (cost=2329.99..27684.03 rows=41049 width=4) (actual time=441.879..925.498 rows=42693 loops=1)
 >               Hash Cond: ("outer".id_order = "inner".id)
 >               ->  Seq Scan on report r  (cost=0.00..23860.62 rows=42862 width=4) (actual time=38.634..366.035 rows=42864 loops=1)
 >               ->  Hash  (cost=2077.74..2077.74 rows=42501 width=4) (actual time=140.200..140.200 rows=0 loops=1)
 >                     ->  Seq Scan on orders o  (cost=0.00..2077.74 rows=42501 width=4) (actual time=0.059..96.890 rows=42693 loops=1)
 >                           Filter: (id_status = 6)
 > Total runtime: 1170.586 ms

Re: [SQL] OFFSET impact on Performance???

From
PFC
Date:
> As I read the docs, a temp table doesn't solve our problem, as it does
> not persist between sessions.  With a web page there is no guarantee
> that you will receive the same connection between requests, so a temp
> table doesn't solve the problem.  It looks like you either have to
> create a real table (which is undesirable because it has to be
> physically synced, and TTFB will be very poor) or create an application
> tier between the web tier and the database tier to allow data to
> persist between requests, tied to a unique session id.
>
> Looks like the solution to this problem is not the RDBMS, IMHO.
>
> Alex Turner
> NetEconomist

    Did you miss the proposal to store arrays of the found rows' ids in a
"cache" table? Is 4 bytes per result row still too large?

    If it's still too large, you can implement the same cache in the
filesystem!
    If you want to fetch 100,000 rows containing just an integer, in my case
(psycopg) it's a lot faster to use an array aggregate. Time to get the
data into the application (including the query):

select id from temp
    => 849 ms
select int_array_aggregate(id) as ids from temp
    => 300 ms

    So you can always fetch the whole query results (in the form of one
integer per row) and cache them in the filesystem. It won't work if you have
10 million rows, though!
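
    A sketch of the array-cache idea (names invented for illustration;
int_array_aggregate is in contrib/intagg):

CREATE TABLE query_cache (
    session_id  text PRIMARY KEY,
    ids         int[]
);

INSERT INTO query_cache
    SELECT 'abc123', int_array_aggregate(id) FROM temp;

-- rows 51-60 of the cached result, via array slicing:
SELECT ids[51:60] FROM query_cache WHERE session_id = 'abc123';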