Stephen Livesey wrote:
>
> I am very new to PostgreSQL and have installed v7.0.3 on a Red Hat Linux
> server (v6.2). I am accessing the database using JDBC from a Windows 2000 PC.
>
> I have created a small table as follows:
> CREATE TABLE expafh (
> postcode CHAR(8) NOT NULL,
> postcode_record_no INT,
> street_name CHAR(30),
> town CHAR(31),
> PRIMARY KEY(postcode) )
>
> I am now writing 1.7 million records to this table.
>
> The first 100,000 records took 15 mins.
> The next 100,000 records took 30 mins.
> The last 100,000 records took 4 hours.
>
> In total, it took 43 hours to write 1.7 million records.
>
> Is this sort of degradation normal using a PostgreSQL database?
AFAICT, no.
> I have never experienced this sort of degradation with any other database
> and I have done exactly the same test (using the same hardware) on the
> following databases:
> DB2 v7 in total took 10hours 6mins
> Oracle 8i in total took 3hours 20mins
> Interbase v6 in total took 1hr 41min
> MySQL v3.23 in total took 54mins
>
> Any help or advice would be appreciated.
Did you "vacuum analyse" your DB ? This seems to be essential to PG
performance, for various reasons.
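A minimal sketch from psql (table name taken from your CREATE TABLE above):

    -- Reclaim dead rows and refresh planner statistics for one table
    VACUUM ANALYZE expafh;

    -- Or for the whole database
    VACUUM ANALYZE;

Running it every few hundred thousand rows during the load may keep the
insert times from climbing, though I can't promise it explains all of the
slowdown you saw.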
Do you have a unique index on your primary key?
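With the PRIMARY KEY clause above, PostgreSQL should already have created
one implicitly; you can check with \d expafh in psql. If it is somehow
missing, a sketch of adding one by hand (the index name here is my own
choice):

    -- A unique index backing lookups on the postcode primary key
    CREATE UNIQUE INDEX expafh_postcode_idx ON expafh (postcode);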
HTH,
Emmanuel Charpentier