From: Kees van Dieren
Subject: Re: SQL select query becomes slow when using limit (with no offset)
Msg-id: 13bb64e70908042201v2fe8d80fxf18bfdb3ba16ca81@mail.gmail.com
In response to: Re: SQL select query becomes slow when using limit (with no offset) (Greg Stark <gsstark@mit.edu>)
List: pgsql-performance

Hi Folks,

Thanks for your response.

I have added the following index (suggested in another post):

CREATE INDEX events_events_cleared_eventtype
  ON events_events
  USING btree
  (eventtype_id, cleared)
  WHERE cleared = false;

I also tried a variant with the columns in reversed order, like the sketch below.
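For reference, the reversed variant looked roughly like this (reconstructed here; the index name is hypothetical, not copied from my actual DDL):

CREATE INDEX events_events_cleared_eventtype_rev
  ON events_events
  USING btree
  (cleared, eventtype_id)  -- columns swapped relative to the index above
  WHERE cleared = false;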

I noticed no change in response time with either index.

An index on the cleared column is already there (the indices are in the SQL file attached to my initial post). eventtype_id has a foreign key constraint; I had assumed that adds an index automatically, but I am not sure it does.
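(As far as I know, PostgreSQL only creates indexes automatically for PRIMARY KEY and UNIQUE constraints, not for the referencing column of a foreign key, so this is worth double-checking.) A quick way to list the indexes that actually exist on the table, via the pg_indexes catalog view:

SELECT indexname, indexdef
  FROM pg_indexes
 WHERE tablename = 'events_events';  -- shows every index defined on the table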

The EXPLAIN ANALYZE results for both queries:
explain analyze select events_events.id FROM events_events
left join events_event_types on events_events.eventType_id=events_event_types.id
where events_event_types.severity=70
and not events_events.cleared
order by events_events.dateTime DESC LIMIT 100
>>>
"Limit  (cost=0.00..125.03 rows=100 width=16) (actual time=0.046..3897.094 rows=77 loops=1)"
"  ->  Nested Loop  (cost=0.00..120361.40 rows=96269 width=16) (actual time=0.042..3896.881 rows=77 loops=1)"
"        ->  Index Scan Backward using events_events_datetime_ind on events_events  (cost=0.00..18335.76 rows=361008 width=24) (actual time=0.025..720.345 rows=360637 loops=1)"
"              Filter: (NOT cleared)"
"        ->  Index Scan using events_event_types_pkey on events_event_types  (cost=0.00..0.27 rows=1 width=8) (actual time=0.003..0.003 rows=0 loops=360637)"
"              Index Cond: (events_event_types.id = events_events.eventtype_id)"
"              Filter: (events_event_types.severity = 70)"
"Total runtime: 3897.268 ms"

explain analyze select events_events.id FROM events_events
left join events_event_types on events_events.eventType_id=events_event_types.id
where events_event_types.severity=70
and not events_events.cleared
order by events_events.dateTime DESC
>>>
"Sort  (cost=20255.18..20495.85 rows=96269 width=16) (actual time=1084.842..1084.951 rows=77 loops=1)"
"  Sort Key: events_events.datetime"
"  Sort Method:  quicksort  Memory: 20kB"
"  ->  Hash Join  (cost=2.09..12286.62 rows=96269 width=16) (actual time=1080.789..1084.696 rows=77 loops=1)"
"        Hash Cond: (events_events.eventtype_id = events_event_types.id)"
"        ->  Seq Scan on events_events  (cost=0.00..9968.06 rows=361008 width=24) (actual time=0.010..542.946 rows=360637 loops=1)"
"              Filter: (NOT cleared)"
"        ->  Hash  (cost=1.89..1.89 rows=16 width=8) (actual time=0.077..0.077 rows=16 loops=1)"
"              ->  Seq Scan on events_event_types  (cost=0.00..1.89 rows=16 width=8) (actual time=0.010..0.046 rows=16 loops=1)"
"                    Filter: (severity = 70)"
"Total runtime: 1085.145 ms"

What strikes me when reading the two plans side by side: with the LIMIT, the planner walks the dateTime index backwards expecting to find 100 qualifying rows quickly, but only 77 rows qualify in the entire table, so it ends up visiting all ~360k index entries before it can stop. Any suggestions?

Thanks in advance!

Best regards,

Kees van Dieren



2009/7/31 Greg Stark <gsstark@mit.edu>
On Fri, Jul 31, 2009 at 1:11 PM, Kees van Dieren <keesvandieren@gmail.com> wrote:
> It takes 155ms to run this query (returning 2 rows)
>
> Query plan: without limit:
> "Sort  (cost=20169.62..20409.50 rows=95952 width=16)"

Could you send the results of EXPLAIN ANALYZE for both queries?
Evidently the planner is expecting a lot more rows than the 2 rows
you're expecting but it's not clear where it's gone wrong.


--
greg
http://mit.edu/~gsstark/resume.pdf



--
Squins | IT, Honestly
Oranjestraat 23
2983 HL Ridderkerk
The Netherlands
Phone: +31 (0)180 414520
Mobile: +31 (0)6 30413841
www.squins.com
Chamber of commerce Rotterdam: 22048547
