Andrew McMillan wrote:
>On Thu, 2003-02-06 at 04:28, Wim wrote:
>
>
>>Hello All,
>>
>> I have a database with the inventory of my backbone routers. The 30
>>routers are stored in a table.
>>Every 5 minutes, I insert the interface counters into a table for all
>>the routers.
>>When my table (counters_table) has fewer than, let's say, 100000
>>records, the inserts for all the routers are done within 50 seconds.
>>I noticed that when the table contains more than 500000 records, the
>>inserts take about 230 seconds...
>>
>>My DB runs on a Pentium III with 512 MB of RAM and a single 1.13 GHz CPU.
>>
>>I used EXPLAIN to test my select queries, but everything seems fine
>>(the indexes are used correctly...)
>>
>>
>>How can I speed up the inserts (and improve the overall performance)?
>>
>>
>
>Are you telling us the full story? Is there any parallel process
>running against these tables which does updates or deletes? What
>constraints do you have on the table? Triggers? Indexes?
>
>If you have (e.g.) a constraint which causes a lookup against a field in
>a similarly growing table that is not indexed, you are likely to see
>this sort of behaviour.
>
>If you have processes that are updating/deleting within the table in
>parallel then you probably want to vacuum the table (much) more often.
>
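To answer the constraint/trigger question, I'll first check the tables
with psql's \d command, which lists a table's columns, indexes and
triggers:

\d routers_counters
\d routers_table
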
I think I'll try the vacuuming first.
Every 5 minutes my script does (the blabla placeholders stand for the
real values):
BEGIN;
SELECT routers FROM routers_table WHERE blabla;
UPDATE routers_table SET timestamp=blabla;
INSERT INTO routers_counters VALUES (blablabla);
COMMIT;
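
About the unindexed-lookup scenario: if routers_counters has a foreign
key back to routers_table and the referencing column is not indexed,
the UPDATE on routers_table can make PostgreSQL scan the whole (growing)
counters table to check for referencing rows, which would match the
slowdown I'm seeing. A sketch of the fix, assuming the foreign-key
column is called router_id (that name is my guess, not the real schema):

-- hypothetical column name; adjust to the real FK column
CREATE INDEX routers_counters_router_id_idx
    ON routers_counters (router_id);
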
How often should I vacuum the tables? After every run of the script, or
2 or 3 times a day?
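My guess is I'll start with a VACUUM ANALYZE after each run and back off
if it takes too long. Since the script UPDATEs routers_table every 5
minutes, that table accumulates dead row versions the fastest, while the
insert-only routers_counters mostly needs the ANALYZE so the planner
statistics stay current:

VACUUM ANALYZE routers_table;
VACUUM ANALYZE routers_counters;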
>
>50 seconds seems ridiculously long for an insert in such a table - I
>have tables with millions of rows and see nothing like that sort of
>slowdown.
>
>Cheers,
> Andrew.
>
>