Thread: degradation in performance

degradation in performance

From: Alain Reymond
Good afternoon,

I created a database with Postgres 7.3.4 under Linux RedHat 7.3 on a 
Dell PowerEdge server. 

One of the tables is
resultats(numbil, numpara, mesure, deviation)
with an index on numbil.

Each select on numbil returns up to 60 rows (that is, 60 rows for 
one numbil with 60 different numpara), for example:
(200000,1,500,3.5)
(200000,2,852,4.2)
(200000,12,325,2.8)
(200001,1,750,1.5)
(200001,2,325,-1.5)
(200001,8,328,1.2)
etc.

This table now contains more than 6,500,000 rows and grows by 
6,000 rows a day. That is approximately 1,250,000 rows a year, so I 
have 5 years of data online.
Now, an insertion of 6,000 rows takes very long, up to one hour...
I tried to insert 100,000 yesterday evening and it was not done 
after 8 hours.

Do you have any idea how I can improve speed, apart from splitting 
the table every 2 or 3 years, which defeats the whole aim of a database!

I thank you for your suggestions.

Regards.

Alain Reymond
CEIA
Bd Saint-Michel 119
1040 Bruxelles
Tel: +32 2 736 04 58
Fax: +32 2 736 58 02
alain.reymond@ceia.com
PGP key at http://pgpkeys.mit.edu:11371




Re: degradation in performance

From: Martin Knipper
On 21.09.2004 13:27, Alain Reymond wrote:

> 
> Do you have any idea how I can improve speed, apart from splitting 
> the table every 2 or 3 years, which defeats the whole aim of a database!
> 

Drop the index before you insert the data and recreate it afterwards.
Use the "copy from ..." command instead of "insert into".

Greetings,

Martin


-- 
Martin Knipper
www  : http://www.mk-os.de
Mail : knipper@mk-os.de



Re: degradation in performance

From: Stephan Szabo
On Tue, 21 Sep 2004, Alain Reymond wrote:

> I created a database with Postgres 7.3.4 under Linux RedHat 7.3 on a
> Dell PowerEdge server.

You should probably upgrade to the end of the 7.3 branch at the least
(7.3.7).

> One of the tables is
> resultats(numbil, numpara, mesure, deviation)
> with an index on numbil.
>
> Each select on numbil returns up to 60 rows (that is, 60 rows for
> one numbil with 60 different numpara), for example:
> (200000,1,500,3.5)
> (200000,2,852,4.2)
> (200000,12,325,2.8)
> (200001,1,750,1.5)
> (200001,2,325,-1.5)
> (200001,8,328,1.2)
> etc.
>
> This table now contains more than 6,500,000 rows and grows by
> 6,000 rows a day. That is approximately 1,250,000 rows a year, so I
> have 5 years of data online.
> Now, an insertion of 6,000 rows takes very long, up to one hour...
> I tried to insert 100,000 yesterday evening and it was not done
> after 8 hours.

Some questions... Are you doing the inserts each in their own transaction
or are you putting them in a single transaction or batching some number
per transaction? Have you considered using copy for importing large blocks
of data? Is this table basically only taking inserts (no delete or
update)? Does this table have foreign key references to another table or
have any triggers?
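
A quick way to check the last two points from psql, assuming the table
name from the original post:

    \d resultats
    -- lists the columns, indexes, and any triggers on the table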

You might see if reindexing the table or running vacuum full verbose(*)
helps.

(*) - I don't remember how 7.3 handled cluster, but if vacuum full verbose
says there's lots of removable row entries, clustering the table might be
faster.
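
For instance (a sketch using the table from the original post; note
that both commands take an exclusive lock on the table, so run them
during a quiet period):

    REINDEX TABLE resultats;
    VACUUM FULL VERBOSE resultats;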


Re: degradation in performance

From: Pierre-Frédéric Caillaud
6000 inserts, each in its own transaction, will take very long. Group 
your inserts in one transaction and it'll be faster (maybe 1-2 minutes). 
Have your program generate a tab-delimited text file and load it with 
COPY; you should be down to a few seconds.
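
A sketch of both suggestions, reusing the sample rows from the original
post (the file path is assumed):

    -- one transaction instead of 6000 single-row transactions
    BEGIN;
    INSERT INTO resultats VALUES (200000, 1, 500, 3.5);
    INSERT INTO resultats VALUES (200000, 2, 852, 4.2);
    -- ... remaining rows ...
    COMMIT;

    -- or, fastest: bulk-load a tab-delimited file on the server
    COPY resultats FROM '/tmp/resultats.txt';
    -- from psql on a client machine, use \copy instead:
    -- \copy resultats from '/tmp/resultats.txt'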


On Tue, 21 Sep 2004 13:27:43 +0200, Alain Reymond <alain.reymond@ceia.com>  
wrote:

> Good afternoon,
>
> I created a database with Postgres 7.3.4 under Linux RedHat 7.3 on a
> Dell PowerEdge server.
>
> One of the tables is
> resultats(numbil, numpara, mesure, deviation)
> with an index on numbil.
>
> Each select on numbil returns up to 60 rows (that is, 60 rows for
> one numbil with 60 different numpara), for example:
> (200000,1,500,3.5)
> (200000,2,852,4.2)
> (200000,12,325,2.8)
> (200001,1,750,1.5)
> (200001,2,325,-1.5)
> (200001,8,328,1.2)
> etc.
>
> This table now contains more than 6,500,000 rows and grows by
> 6,000 rows a day. That is approximately 1,250,000 rows a year, so I
> have 5 years of data online.
> Now, an insertion of 6,000 rows takes very long, up to one hour...
> I tried to insert 100,000 yesterday evening and it was not done
> after 8 hours.
>
> Do you have any idea how I can improve speed, apart from splitting
> the table every 2 or 3 years, which defeats the whole aim of a database!
>
> I thank you for your suggestions.
>
> Regards.