On Sat, Jan 22, 2005 at 10:18:00PM -0500, Tom Lane wrote:
> "Jim C. Nasby" <decibel@decibel.org> writes:
> > (SELECT b.bucket_id AS rrs_bucket_id, s.*
> > FROM rrs.bucket b
> > JOIN page_log.log s
> > ON (
> > b.prev_end_time < log_time
> > AND b.end_time >= log_time )
> > WHERE b.rrs_id = '1'
> > AND b.end_time <= '2005-01-21 20:23:00+00'
> > AND b.end_time > '1970-01-01 00:00:00+00'
> > ) a
>
> > Basically, it seems that it doesn't understand that each row in log will
> > match up with at most one row in bucket. There is a unique index on
> > bucket(rrs_id, end_time), so it should be able to tell this.
>
> Why should it be able to tell that?
Indexes:
"rrs_bucket__rrs_id__end_time" unique, btree (rrs_id, end_time)
Err, crap, I guess that wouldn't work, since prev_end_time isn't included
in that index...
In English: each bucket defines a specific time period, and no two
buckets can overlap (though there's no constraint defined to actually
prevent that). So in reality each row in page_log.log will in fact
match only one row in bucket (at least for each value of rrs_id).
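For what it's worth, the overlap rule could be enforced with a BEFORE
trigger that rejects any bucket overlapping an existing one for the same
rrs_id. This is only a sketch (the function and trigger names are made
up, and it ignores concurrent inserts):

    CREATE OR REPLACE FUNCTION rrs.check_bucket_overlap() RETURNS trigger AS $$
    BEGIN
        -- Two buckets overlap when each one starts before the other ends.
        IF EXISTS (SELECT 1
                     FROM rrs.bucket b
                    WHERE b.rrs_id = NEW.rrs_id
                      AND b.bucket_id <> NEW.bucket_id
                      AND b.prev_end_time < NEW.end_time
                      AND NEW.prev_end_time < b.end_time) THEN
            RAISE EXCEPTION 'bucket overlaps an existing bucket for rrs_id %', NEW.rrs_id;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER bucket_no_overlap
        BEFORE INSERT OR UPDATE ON rrs.bucket
        FOR EACH ROW EXECUTE PROCEDURE rrs.check_bucket_overlap();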
Given that, would the optimizer make a better choice if it knew as much
(since it means a much smaller result set)? Is there any way to tell the
optimizer that this is the case?
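One rewrite that would at least make the at-most-one-bucket relationship
explicit (just a sketch; note that unlike the inner join it also keeps
log rows that don't fall in any bucket, with a NULL rrs_bucket_id) is
pulling the bucket lookup into a correlated subquery with LIMIT 1:

    SELECT (SELECT b.bucket_id
              FROM rrs.bucket b
             WHERE b.rrs_id = '1'
               AND b.prev_end_time < s.log_time
               AND b.end_time >= s.log_time
               AND b.end_time <= '2005-01-21 20:23:00+00'
               AND b.end_time >  '1970-01-01 00:00:00+00'
             ORDER BY b.end_time
             LIMIT 1) AS rrs_bucket_id,
           s.*
      FROM page_log.log s

That guarantees one bucket_id (or NULL) per log row, but it forces an
index probe per row instead of letting the planner pick a different join
strategy.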
Maybe what I ultimately need is a "timestamp with interval" datatype
that specifies an interval fixed in time.
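Short of a real datatype, the closest stand-in I can picture (names made
up; this is only a sketch) is a composite type that carries both
endpoints, so the fixed interval travels around as a single value:

    CREATE TYPE time_span AS (
        span_start timestamptz,
        span_end   timestamptz
    );

That keeps the two endpoints together, but of course the planner still
wouldn't know that two spans can't overlap.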
--
Jim C. Nasby, Database Consultant decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828
Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"