Re: Message queue table - strange performance drop with changing limit size. - Mailing list pgsql-performance

From Greg Williamson
Subject Re: Message queue table - strange performance drop with changing limit size.
Date
Msg-id 99179.26367.qm@web46102.mail.sp1.yahoo.com
In response to Message queue table - strange performance drop with changing limit size.  (Jesper Krogh <jesper@krogh.cc>)
Responses Re: Message queue table - strange performance drop with changing limit size.
List pgsql-performance
Jesper --

I apologize for top-posting -- I'm stuck with a challenged mail reader.

This doesn't directly address your question, but I can't help noticing that the row estimates are _wildly_ off
from the actual numbers in each and every query. How often / recently have you run ANALYZE on this table?
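
For what it's worth, a minimal sketch of what I'd run first (the schema and table names are taken from your \d output below; pg_stat_user_tables and its last_analyze / last_autoanalyze columns are the standard statistics view, assuming a release recent enough to have them):

-- When were statistics last gathered for this table?
SELECT relname, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
 WHERE schemaname = 'workqueue' AND relname = 'job';

-- Refresh the statistics, then re-run the EXPLAIN ANALYZE queries.
ANALYZE workqueue.job;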

Are the timing results consistent over several runs? It is possible that caching effects are entering into the
timing results.
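
One crude way to check for that (pg_statio_user_tables is the standard I/O statistics view; note its counters are cumulative, so compare the values before and after a run of the slow query):

-- Block reads from disk vs. buffer hits for the table and its indexes.
SELECT heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
  FROM pg_statio_user_tables
 WHERE schemaname = 'workqueue' AND relname = 'job';

If the *_read counters barely move on later runs while the timings improve, the earlier numbers were dominated by cold-cache I/O.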

Greg Williamson



----- Original Message ----
From: Jesper Krogh <jesper@krogh.cc>
To: pgsql-performance@postgresql.org
Sent: Fri, January 1, 2010 3:48:43 AM
Subject: [PERFORM] Message queue table - strange performance drop with changing limit size.

Hi.

I have a "message queue" table that contains on the order of 1-10m
"messages". It is implemented using TheSchwartz:
http://search.cpan.org/~bradfitz/TheSchwartz-1.07/lib/TheSchwartz.pm

So when a "worker" picks the next job, it goes into the "job" table and
selects the top X highest-priority messages with a "funcid" that it can
work on. The table looks like this:
db=# \d workqueue.job
                                  Table "workqueue.job"
    Column     |   Type   |                           Modifiers

---------------+----------+---------------------------------------------------------------
jobid         | integer  | not null default
nextval('workqueue.job_jobid_seq'::regclass)
funcid        | integer  | not null
arg           | bytea    |
uniqkey       | text     |
insert_time   | integer  |
run_after     | integer  | not null
grabbed_until | integer  | not null
priority      | smallint |
coalesce      | text     |
Indexes:
    "job_funcid_key" UNIQUE, btree (funcid, uniqkey)
    "funcid_coalesce_priority" btree (funcid, "coalesce", priority)
    "funcid_prority_idx2" btree (funcid, priority)
    "job_jobid_idx" btree (jobid)

efam=# explain analyze select jobid from workqueue.job where job.funcid in (3) order by priority asc limit 1000;
                                                                   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..2008.53 rows=1000 width=6) (actual time=0.077..765.169 rows=1000 loops=1)
   ->  Index Scan using funcid_prority_idx2 on job  (cost=0.00..7959150.95 rows=3962674 width=6) (actual time=0.074..763.664 rows=1000 loops=1)
         Index Cond: (funcid = 3)
 Total runtime: 766.104 ms
(4 rows)

efam=# explain analyze select jobid from workqueue.job where job.funcid in (3) order by priority asc limit 50;
                                                                  QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..100.43 rows=50 width=6) (actual time=0.037..505.765 rows=50 loops=1)
   ->  Index Scan using funcid_prority_idx2 on job  (cost=0.00..7959150.95 rows=3962674 width=6) (actual time=0.035..505.690 rows=50 loops=1)
         Index Cond: (funcid = 3)
 Total runtime: 505.959 ms
(4 rows)

efam=# explain analyze select jobid from workqueue.job where job.funcid in (3) order by priority asc limit 10;
                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..20.09 rows=10 width=6) (actual time=0.056..0.653 rows=10 loops=1)
   ->  Index Scan using funcid_prority_idx2 on job  (cost=0.00..7959152.95 rows=3962674 width=6) (actual time=0.054..0.640 rows=10 loops=1)
         Index Cond: (funcid = 3)
 Total runtime: 0.687 ms
(4 rows)

So what I see is that "top 10" takes < 1 ms, "top 50" takes over 500 times
more, and "top 1000" only 1.5 times more than "top 50".

What can the reason for the huge drop between limit 10 and limit 50 be?

--
Jesper
