Stephan Szabo <sszabo@megazone.bigpanda.com> writes:
> On Fri, 10 Dec 2004, Tomas Skäre wrote:
>
> > I have a table that looks like this:
> >
> >     Table "public.cjm_object"
> >   Column   |       Type        | Modifiers
> > -----------+-------------------+-----------
> >  timestamp | bigint            | not null
> >  jobid     | bigint            | not null
> >  objectid  | bigint            | not null
> >  class     | integer           | not null
> >  field     | character varying | not null
>
> In 7.4.x and earlier, you need to cast the value you're comparing to into
> a bigint in order to make sure the indexes are used (in your timestamp
> case it appears to work because the value doesn't fit in a plain integer).
> 8.0 should handle this better.
Thanks, casting worked well for that query.
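For reference, the fixed comparison now looks something like this (4711 is
just a placeholder value, not a real jobid):

  select * from cjm_object where jobid = 4711::bigint;

Without the ::bigint cast, 7.4 wouldn't consider the index on the bigint
column, just as Stephan described.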
Now, could someone please help me make this query faster? With 283465
rows, it takes far too long, I think. This is on a 2 GHz Celeron running
Linux 2.6, with shared_buffers=1000 and sort_mem=1024.
select c.* from cjm_object c
inner join
    (select max(timestamp) as timestamp, objectid, field
     from cjm_object
     group by objectid, field) t
using (timestamp, objectid, field)
where 1=1 and data is not null
order by objectid, field;
                                                                      QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
 Merge Join  (cost=145511.85..150759.75 rows=1 width=54) (actual time=17036.147..20968.811 rows=208246 loops=1)
   Merge Cond: (("outer".objectid = "inner".objectid) AND ("outer"."?column7?" = "inner"."?column4?") AND ("outer"."timestamp" = "inner"."timestamp"))
   ->  Sort  (cost=47007.75..47611.06 rows=241324 width=54) (actual time=5113.099..5586.094 rows=236710 loops=1)
         Sort Key: c.objectid, (c.field)::text, c."timestamp"
         ->  Seq Scan on cjm_object c  (cost=0.00..5862.65 rows=241324 width=54) (actual time=0.129..1788.125 rows=236710 loops=1)
               Filter: (data IS NOT NULL)
   ->  Sort  (cost=98504.09..99212.75 rows=283465 width=48) (actual time=11922.081..12427.683 rows=255001 loops=1)
         Sort Key: t.objectid, (t.field)::text, t."timestamp"
         ->  Subquery Scan t  (cost=45534.39..51912.35 rows=283465 width=48) (actual time=5484.943..9289.061 rows=255001 loops=1)
               ->  GroupAggregate  (cost=45534.39..49077.70 rows=283465 width=25) (actual time=5484.925..8178.531 rows=255001 loops=1)
                     ->  Sort  (cost=45534.39..46243.05 rows=283465 width=25) (actual time=5484.285..6324.067 rows=283465 loops=1)
                           Sort Key: objectid, field
                           ->  Seq Scan on cjm_object  (cost=0.00..5862.65 rows=283465 width=25) (actual time=0.124..852.749 rows=283465 loops=1)
 Total runtime: 21161.144 ms
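(A side note to myself: sort_mem=1024 means only 1 MB per sort, and the
plan above contains two sorts of roughly 250k rows, so I assume they spill
to disk. I could try raising it for a single session, e.g.

  set sort_mem = 16384;  -- 16 MB; just an experiment, not a tuned value

but I'd still like to make the query itself cheaper.)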
Quick explanation of the query:
Each row in the table is one field of an object. For example:

timestamp  objectid  field  data
1          1         name   test
1          1         type   something
1          2         name   test2
1          2         type   whatever
Timestamp is when the entry was inserted into the database. When a single
field of an object is updated, a new row with the new value is added; when
a field is deleted, a new row is added with data set to NULL. So the above
content could now be:
timestamp  objectid  field  data
1          1         name   test
1          1         type   something
1          2         name   test2
1          2         type   whatever
2          1         name   newname
2          1         type   <NULL>
Now, the query picks out the row with the highest timestamp for each
(objectid, field) pair, selects all columns of those rows, filters out
rows where data is NULL, and orders the result by objectid and field.
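As a side thought, maybe the same thing could be written with PostgreSQL's
DISTINCT ON, keeping only the newest row per (objectid, field) and
filtering out the NULLs afterwards. A rough, untested sketch ("latest" is
just an alias I made up):

  select *
  from (select distinct on (objectid, field) *
        from cjm_object
        order by objectid, field, "timestamp" desc) latest
  where data is not null
  order by objectid, field;

I don't know whether the planner would do any better with this, though.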
Is there any way to make this query faster? I've tried rewriting it,
putting the subquery into an EXISTS condition, but that doesn't make it
faster. I've also tried creating different indexes, but they don't seem to
be used by this query.
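For reference, the EXISTS rewrite I tried looked roughly like this
(reconstructed, so the details may differ slightly):

  select c.*
  from cjm_object c
  where c.data is not null
    and not exists
      (select 1 from cjm_object n
       where n.objectid = c.objectid
         and n.field = c.field
         and n."timestamp" > c."timestamp")
  order by c.objectid, c.field;

and one of the indexes I created was along these lines (the name is just
one I picked):

  create index cjm_object_obj_field_ts on cjm_object (objectid, field, "timestamp");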
Greetings,
Tomas