At 00:45 18.06.2003, nikolaus@dilger.cc said:
--------------------[snip]--------------------
>Thanks for providing the additional information that
>the table has 2.3 million rows.
>
>You see, during the first execution you spend most of the
>time scanning the index id_mdata_dictid_string. And
>since that one is quite large, it takes 1500 msec to
>read the index from disk into memory.
>
>For the second execution you read the large index from
>memory. Therefore it takes only 10 msec.
>
>Once you change the data you need to read from disk
>again and the query takes a long time.
--------------------[snip]--------------------
I came to the same conclusion - I installed a cron script that performs a
select against that index every 3 minutes, to keep it in memory. After that,
even the most complex queries against this huge table go like whoosssshhh ;-)
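
For reference, a minimal sketch of what such a cache-warming crontab entry
could look like (assuming MySQL; the table name mdata, the column dictid,
and the credentials are hypothetical - only the index name is from the
original):

  # every 3 minutes, walk the index so its pages stay resident in memory
  */3 * * * * mysql -u warmup -p'secret' mydb -e "SELECT COUNT(*) FROM mdata USE INDEX (id_mdata_dictid_string) WHERE dictid >= 0" >/dev/null 2>&1

The query itself is throwaway - all it does is force a scan over the index
so the pages get re-read before they are evicted from the cache.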
It would be interesting to know what one could do to _not_ have to take
this basically clumsy approach...
--
>O Ernest E. Vogelsinger
(\) ICQ #13394035
^ http://www.vogelsinger.at/