Edmund Dengler wrote:
> Greetings!
>
> We have a table with more than 250 million rows. I am trying to delete the
> first 100,000 rows (based on a bigint primary key), and I had to cancel
> after 4 hours of the system not actually finishing the delete. I wrote a
> script to delete individual rows 10,000 at a time using transactions, and
> am finding each individual delete takes on the order of 0.1 seconds to 2-3
> seconds. There are 4 indexes on the table, one of which is very "hashlike"
> (i.e., sequential rows are scattered throughout the index).
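For reference, a batched delete over a bigint primary key, as described above, might look something like this (table and column names are stand-ins, not from the original post):

```sql
-- Delete one contiguous slice of the key space per statement,
-- keeping each transaction short. "big_table" and "id" are
-- hypothetical names for the 250M-row table and its bigint PK.
DELETE FROM big_table
WHERE id >= 0
  AND id < 10000;
```

Each such statement only walks a narrow range of the primary-key index, but every deleted row still has to be removed from all four indexes (and checked against any foreign keys referencing the table), which is where the time can go.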
I don't suppose it's off checking foreign keys in a lot of other tables, is it?
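One quick way to check is to ask the system catalogs which foreign-key constraints reference the table (the table name here is a stand-in):

```sql
-- List foreign-key constraints in other tables that point at
-- "big_table" (hypothetical name). contype = 'f' marks FK
-- constraints; confrelid is the referenced table.
SELECT conname,
       conrelid::regclass AS referencing_table
FROM   pg_constraint
WHERE  contype = 'f'
  AND  confrelid = 'big_table'::regclass;
```

If any of those referencing columns lack an index, each deleted row can trigger a scan of the referencing table, which would easily account for deletes taking seconds apiece.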
--
Richard Huxton
Archonet Ltd