Re: INSERT performance. - Mailing list pgsql-admin

From Eduardo Caillava
Subject Re: INSERT performance.
Msg-id 000801c1a9a0$441e1c10$db00a8c0@219
In response to INSERT performance.  ("Marc Mitchell" <marcm@eisolution.com>)
List pgsql-admin
We have PostgreSQL 7.1.3 running on a NetVista PIII 800 with 512 MB RAM and
two 80 GB 10krpm IDE Seagate disks, tuned at boot by a script symlinked into
/etc/rc.d/rc2.d that runs "hdparm -X66 -u1 -d1 -m16 -c3".

We cannot use "copy" because of "strange characters".

So... each week, from a Windows box running VFoxPro, we do around 900,000
row INSERTs (in 8K-row batches) in 25-30 minutes max, and create a complex
index, three numeric fields plus one varchar (rep, ord, afi, nomb), in about
20 or 30 seconds.
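
In case it is useful, the usual shape of that load on the Postgres side is
one transaction around each batch, with the index built afterwards. A rough
sketch; the table name and column types are my guesses, not from the post:

    BEGIN;
    INSERT INTO afiliados (rep, ord, afi, nomb) VALUES (1, 1, 100, 'PEREZ');
    INSERT INTO afiliados (rep, ord, afi, nomb) VALUES (1, 2, 101, 'GOMEZ');
    -- ... repeat for the rest of the 8K-row batch ...
    COMMIT;

    -- The "complex index": three numerics plus one varchar, built once
    -- after the data is in, which is much faster than maintaining it
    -- during the inserts:
    CREATE INDEX afiliados_idx ON afiliados (rep, ord, afi, nomb);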

Can you use it?

Hope it helps you.

Eduardo Caillava
Tucumán
Argentina

----- Original Message -----
From: "Ferdinand Smit" <ferdinand@telegraafnet.nl>
To: <pgsql-admin@postgresql.org>
Sent: Wednesday, January 30, 2002 6:42 AM
Subject: Re: [ADMIN] INSERT performance.


> Hi,
>
> > > > > > We have a fresh database and have begun to observe performance
> > > > > > degradation for INSERTs as a table went from empty to
> > > > > > 100,000-ish rows.  Initial INSERTs were sub-second, while after
> > > > > > 30k rows they took 1-3 seconds.
> > > > >
> > > > > we just hit this problem when moving an old database to a new one
> > > > > (new schema).  we had to insert approx. 1.5 million rows, and from
> > > > > an initial 0.02 s/insert it climbed, after several thousand
> > > > > inserts, to 20 seconds per insert.  what we did was remove the
> > > > > foreign keys from the table we were inserting into.  it helped: we
> > > > > managed to put 300k records in ca. 2-3 hours.
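
(In current PostgreSQL syntax that trick looks roughly like the sketch
below; on 7.x, foreign keys were implemented as triggers, so you dropped
those instead. Table and constraint names here are invented:)

    -- Drop the FK so each INSERT skips the per-row reference check:
    ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;

    -- ... run the bulk INSERTs ...

    -- Re-add it afterwards; this re-validates all rows in one pass:
    ALTER TABLE orders
      ADD CONSTRAINT orders_customer_fk
      FOREIGN KEY (customer_id) REFERENCES customers (id);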
> > > >
> > > > If possible, use the COPY command.  We did 90K rows in about 40 sec
> > > > using this puppy on a Solaris U5 (it took over 130 sec for MySQL on
> > > > the same box, in case the performance geeks in the crowd are
> > > > interested).
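
(For reference, the COPY invocation itself is a one-liner; path and table
name below are placeholders:)

    -- Server-side bulk load from a tab-separated text file:
    COPY mytable FROM '/tmp/mytable.txt';

    -- Or, from psql, reading the file on the client machine:
    --   \copy mytable FROM 'mytable.txt'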
> > >
> > > We were transferring a MySQL database to a new Linux server (PIII-800
> > > dual).  I don't know how many rows, but the dump was 8 GB (not
> > > zipped).  It took us 4 hours to import, and 5 hours to create the
> > > indexes.
> >
> > How were you inserting the data?  Were you doing multiple inserts per
> > transaction?  Copy?  That sounds really slow to me.
>
> The database mainly contains integers, which represent the behavior of
> internet users on our site, so it's very compact data.
> We used multi-row inserts with MySQL, but I didn't find that option in
> Postgres.
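
(MySQL's multi-row VALUES syntax did not reach PostgreSQL until 8.2; on the
7.x servers discussed here, the closest equivalent is batching single-row
INSERTs in one transaction, so the whole batch pays for a single commit.
The table used below is hypothetical:)

    -- PostgreSQL 8.2+ (and MySQL) multi-row form:
    INSERT INTO hits (user_id, page_id) VALUES (1, 10), (1, 11), (2, 10);

    -- 7.x-era equivalent: one transaction around many single-row inserts:
    BEGIN;
    INSERT INTO hits (user_id, page_id) VALUES (1, 10);
    INSERT INTO hits (user_id, page_id) VALUES (1, 11);
    INSERT INTO hits (user_id, page_id) VALUES (2, 10);
    COMMIT;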
>
> > > As a test we also created a Postgres database on another server
> > > (same type).  The COPY command didn't work because of 'strange
> > > characters', so we used normal inserts.  It took us 12 hours to
> > > import, and 10 hours to create the indexes.
> >
> > Have you tried escaping the data before you inserted it?  That
> > should solve things.
>
> No, how do you do that?
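
(For the archive: COPY's default text format is tab-separated with one row
per line, so tabs, newlines, and backslashes inside a field must be
backslash-escaped before loading. The table and path below are
placeholders:)

    -- In the input file for:  COPY users FROM '/tmp/users.txt';
    -- escape special characters inside fields like this:
    --   literal backslash  ->  \\
    --   embedded tab       ->  \t
    --   embedded newline   ->  \n
    --   NULL value         ->  \N   (the default null marker)
    COPY users FROM '/tmp/users.txt';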
>
> > http://www.postgresql.org/idocs/index.php?sql-copy.html
> >
> > > Although I like Postgres more, MySQL is still faster with huge
> > > (simple) data.
> >
> > I've only found that to be the case in a few instances, actually...
> > and typically with small data sets of less than 1M rows.  Vacuum
> > analyze and turn fsync off.  :~)  -sc
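
(Concretely, using the same hypothetical table as above; the ANALYZE part
is plain SQL, while fsync is a server setting rather than something you
toggle per statement:)

    -- Refresh planner statistics after a bulk load:
    VACUUM ANALYZE hits;

    -- fsync lives in postgresql.conf (7.x also accepted postmaster -o -F);
    -- turning it off speeds up loading but risks data loss on a crash:
    --   fsync = false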
>
> Of course, fsync was off and I increased the memory usage.  Vacuum is not
> usable during a total import ;-)
>
> Ferdinand
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: Have you searched our list archives?
>
> http://archives.postgresql.org
>

