> > I am very new to PostgreSQL and have installed v7.03 on a Red Hat Linux
> > Server (v6.2). I am accessing the files using JDBC from a Windows 2000 PC.
> >
> > I have created a small file as follows:
> > CREATE TABLE expafh (
> > postcode CHAR(8) NOT NULL,
> > postcode_record_no INT,
> > street_name CHAR(30),
> > town CHAR(31),
> > PRIMARY KEY(postcode) )
> >
> > I am now writing 1.7million records to this file.
> >
> > The first 100,000 records took 15mins.
> > The next 100,000 records took 30mins
> > The last 100,000 records took 4hours.
> >
> > In total, it took 43 hours to write 1.7million records.
> >
> > Is this sort of degradation normal using a PostgreSQL database?
>
> AFAICT, no.
Funny you should say that, but I seem to be experiencing something
similar when doing pgdump/restore. When restoring a table with around
40M records, everything starts nice and fast. However, by the time
it's half done, inserts take forever. It took about 16 hours to
restore all 40M records (3 GB or so, plus indices). Obviously, since it
was a "restore", there was no vacuuming during the operation, but it
still seemed to get unusually slow toward the end... judging by the
rate at which the file sizes grew, anyway, which I grant isn't the
most reliable way to measure this...
But I guess this is nothing compared to your problem: I was inserting
40M records and it took 16 hours; you are inserting 1.7M records and
it's taking 43 hours.
Are you using autocommit? Do you have fsync enabled? If autocommit is
on, every insert becomes its own transaction and has to be flushed to
disk separately; you should probably turn it off and commit the data
every few thousand inserts instead.
Are you running out of memory? How much shared memory do you have
configured, and how much of it is Postgres allowed to use? I know this
is fairly obvious stuff, but as such it is quite easy to overlook...
> > I have never experienced this sort of degradation with any other
> > database, and I have done exactly the same test (using the same
> > hardware) on the following databases:
> > DB2 v7 in total took 10hours 6mins
> > Oracle 8i in total took 3hours 20mins
> > Interbase v6 in total took 1hr 41min
> > MySQL v3.23 in total took 54mins
> >
> > Any help or advice would be appreciated.
The only thing that comes to mind is that if you're doing a bulk
insert, you should probably drop all indices that aren't required for
uniqueness or primary keys, and re-create them once you have inserted
all your data...
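Concretely, something along these lines (the index name and file path
here are made up for illustration; COPY also tends to be much faster
than row-by-row INSERTs for a load of this size):

```sql
-- Drop secondary indices before the bulk load...
DROP INDEX expafh_street_idx;
-- ...load the data (COPY avoids the per-statement overhead)...
COPY expafh FROM '/tmp/expafh.dat';
-- ...then rebuild the indices in a single pass at the end.
CREATE INDEX expafh_street_idx ON expafh (street_name);
```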
Regards.
Gordan