Re: low performance - Mailing list pgsql-bugs

From Andreas Wernitznig
Subject Re: low performance
Date
Msg-id 20010821222412.2684c069.andreas@insilico.com
In response to low performance  (Andreas Wernitznig <andreas@insilico.com>)
Responses Re: Re: low performance  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-bugs
I am aware of the performance drawbacks of indices and triggers. In fact I have a trigger and an index on the most heavily populated table.
It is not possible in my case to remove the primary keys during the insert, because the database structure and the foreign keys validate my data during import.
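
What I could still try, along the lines of the advice below, is to drop only the non-primary-key index before the import and recreate it afterwards, while leaving the primary and foreign keys in place. A minimal sketch (the table and index names are made up, not my real schema):

    -- drop only the secondary index; PK and FK constraints stay active
    DROP INDEX measurement_value_idx;

    -- ... run the import here (INSERTs or COPY) ...

    -- recreate the index once all rows are loaded, then refresh statistics
    CREATE INDEX measurement_value_idx ON measurement (value);
    VACUUM ANALYZE measurement;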

The problem is that sometimes the performance is good, and sometimes the database is awfully slow.
When it is slow, postgres eats up all the CPU time and inserting the data takes at least 150 times longer.
I don't know why, or what to do about it.
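
For the bulk of the load I could also switch from single INSERT statements to COPY, as suggested below. A rough sketch, with a made-up table name and file path:

    -- server-side bulk load; the file must be readable by the backend
    COPY measurement FROM '/tmp/measurement.dat';

    -- or, run it from psql on the client side:
    -- \copy measurement from 'measurement.dat'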

Andreas

On Mon, 20 Aug 2001 19:39:31 -0400
Jonas Lindholm <jlindholm@rcn.com> wrote:

> Do you have any index on the tables ? Any triggers ?
>
> If you want to insert 1 million rows you should drop the index, insert the data and then recreate the index.
> You should also try the COPY command to insert the data.
>
> You should also avoid having anyone else connect to the database while you insert a lot of rows, and 1 million rows is a lot of rows for any database.
>
> I've been able to insert 17 million records into one table in ~3 hours on a Compaq SMP 750 MHz with 512MB
> by dropping the index, using several COPY commands at the same time to load different parts of the data, and then creating the index again.
> At the time of the inserts, no processes other than the COPYs were connected to the database.
>
> /Jonas Lindholm
>
>
> Andreas Wernitznig wrote:
>
> > I am running the precompiled binary of PostgreSQL 7.1.2 on a Red Hat 7.1 system (a dual Celeron machine with 256MB, kernels 2.4.4 and 2.4.5).
> > (The installation of the new 7.1.3 doesn't seem to solve the problem)
> >
> > I am connecting to the DB with a Perl program (using Perl 5.6.0 with DBD-Pg-1.01 and DBI-1.19).
> > The program inserts several million rows into a db with about 30 tables. The processing takes (if everything works fine) about 10 hours to complete. Usually my Perl script and the database share the available CPU time 50:50.
> > But sometimes the database is very slow, eating up most (>98%) of the available CPU time.
> > (Of course I know VACUUM and VACUUM ANALYZE; this is not the problem.)
> >
> > The only thing that seems to help then is killing the Perl script, stopping postgresql, running "ipcclean", and starting again from the beginning. If it works from the beginning, the database is usually very fast until all the data are processed.
> >
> > But if someone else connects (using psql), sometimes the database gets very slow until it is using all the CPU time.
> >
> > There are no error messages at postgres startup.
> > I have already increased the number of buffers to 2048 (doesn't help).
> >
> > I cannot reproduce these problems: sometimes the db is fast, sometimes very slow. The Perl script doesn't seem to be the problem, because I wrote all the SQL commands to a file and processed them later ("psql dbname postgres < SQL-File").
> > Same thing: sometimes slow, sometimes fast.
> >
> > Andreas
>
