UNION ALL was an excellent idea! It didn't cut much time off, but at least no
resources are devoted to eliminating the nonexistent duplicate rows.
I've had days to think about this. It seems the ORDER BY in the first query is
the culprit. When I run that query by itself, I can see that it accounts for
the bulk of the UNION query's time.
select
    a.username as "User",
    a.trans_date as "Date",
    tl.longtype as "Type",
    a.trans_data as "Query Data",
    a.trans_charge as "Charged",
    a.user_reference_id as "Reference ID"
from a_trans_log a
    join addtypelong tl on a.trans_type = tl.shorttype
where a.trans_date >= '12/31/01'::TIMESTAMP
order by a.trans_date desc, a.trans_data
limit 20;
Removing the ORDER BY a.trans_data cut the query down to the "almost instant"
level... EXPLAIN shows me that it uses the indexes! I guess I need to drop
that part of the ORDER BY, or create an index for it to use... Bah.
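If I do go the index route, something like this is what I'd try (just a sketch
of a plain composite btree; I'm not sure the planner can use it for a
mixed-direction sort like trans_date DESC, trans_data ASC, since a backward
scan would reverse both columns):

```sql
-- Composite index covering both ORDER BY columns.
-- Caveat: both columns are stored ascending, so a backward scan
-- yields (desc, desc), not the (desc, asc) ordering my query asks for.
create index a_trans_log_date_data_idx
    on a_trans_log (trans_date, trans_data);
```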
Alas...
Unless someone knows different, I don't believe I can use a LIMIT clause in
each of the subqueries. The app needs to page through the UNIONized table: as
it pages (LIMIT 20,20; LIMIT 20,40; LIMIT 20,60; etc.) it needs to look at the
whole sorted UNIONized result. (Am I making ANY sense?)
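The shape I'm paging through is roughly this (a sketch; "other_trans_log" is a
stand-in for the second half of my UNION, which isn't shown here):

```sql
-- Page 2 of the combined result. The ORDER BY / LIMIT / OFFSET at
-- the end applies to the whole UNION ALL, so the entire result has
-- to be assembled and sorted before 20 rows are peeled off.
select a.username, a.trans_date, a.trans_data
  from a_trans_log a
 where a.trans_date >= '12/31/01'::TIMESTAMP
union all
select o.username, o.trans_date, o.trans_data
  from other_trans_log o
 where o.trans_date >= '12/31/01'::TIMESTAMP
order by 2 desc
limit 20 offset 20;
```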
Even after dropping the sort on column 4, it still takes 6 seconds to assemble
the data, sort it, and limit it... Could I create a cross-table index
specifically for this query? I doubt it's a) possible, or b) even if possible,
a good idea.
CG
--- Stephan Szabo <sszabo@megazone23.bigpanda.com> wrote:
>
> Are there going to possibly be equal rows in the two parts that you need
> to merge into one row? If not, try union all which should get rid of a
> sort and unique I think.