Costin Manda wrote:
> On Wed, 06 Apr 2005 15:54:29 +0100
> Richard Huxton <dev@archonet.com> wrote:
>
>
>>> I mean from 5 to 5 minutes
>>>DROP TABLE
>>>CREATE TABLE
>>>INSERT 70000 rows in table
>>
>>I thought you were trying an insert, then an update if it failed? You
>>shouldn't have any duplicates if the table was already empty. Or have I
>>misunderstood?
>
>
>
> Ok, let's start over :)
>
> The script does the following:
> 1. read the count of rows in two tables from the mssql database
> 2. read the row counts of the 'mirror' tables in postgres; these are
> tables that are updated rarely and have a maximum of 100000 records
> combined
> 3. if the counts differ, delete everything from the mirror tables and
> reinsert everything
> 4. THEN do the inserts that fall back to updates on error
>
> I thought the problem lay with step 4, but now I see that step 3 was
> the culprit and that, indeed, I did not do a drop table / create table,
> but a delete from followed by inserts. I think that recreating these two
> tables should solve the problem, shouldn't it?
Hmm - try TRUNCATE rather than DELETE. Also, you might drop the indexes,
re-insert the data, then recreate the indexes - that can be faster for
bulk loading.
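
For example, a minimal sketch of that refresh, assuming the mirror table
is called mirror_table with a single non-unique index mirror_table_name_idx
(both names are placeholders for whatever your schema actually uses):

  -- empty the table without leaving dead rows behind (unlike DELETE)
  TRUNCATE mirror_table;
  -- drop the index so the bulk load doesn't have to maintain it row by row
  DROP INDEX mirror_table_name_idx;
  -- reload the rows, e.g. with COPY or your batch of INSERTs
  COPY mirror_table FROM stdin;
  -- rebuild the index once, over the finished data
  CREATE INDEX mirror_table_name_idx ON mirror_table (name);

If the reload is a long series of individual INSERTs, wrapping them in a
single BEGIN ... COMMIT also avoids paying the commit overhead once per row.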
--
Richard Huxton
Archonet Ltd