inserting into brand new database faster than old database - Mailing list pgsql-performance

From: Missner, T. R.
Subject: inserting into brand new database faster than old database
Date:
Msg-id: F67BE6C5DEBB8340A749DE01B476FEC103E4D3BC@idc1exc0002.corp.global.level3.com
List: pgsql-performance
Hello,

I have been a happy postgresql developer for a few years now.  Recently
I have discovered a very strange phenomenon in regards to inserting
rows.

My app inserts millions of records a day, averaging about 30 rows a
second. I use autovacuum to make sure my stats and indexes are up to
date. Rarely are rows ever deleted.  Each day a brand new set of tables
is created, and eventually the old tables are dropped. The app calls
functions which, based on some simple logic, perform the correct
inserts.
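To make the workload concrete, here is a minimal sketch of the daily
table rotation and insert routing described above. The table names,
the "%Y%m%d" suffix scheme, and the routing rule are all hypothetical
(the post only says a new set of tables is created each day and that
functions route each row to the correct insert), and sqlite3 stands in
for PostgreSQL so the sketch is self-contained.

```python
import sqlite3
from datetime import date

# sqlite3 stands in for PostgreSQL here; table names and routing logic
# are illustrative assumptions, not the poster's actual schema.
conn = sqlite3.connect(":memory:")

def create_daily_tables(day: date) -> None:
    """Each day a brand new set of tables is created."""
    suffix = day.strftime("%Y%m%d")
    for kind in ("events", "errors"):
        conn.execute(
            f"CREATE TABLE IF NOT EXISTS {kind}_{suffix} "
            "(id INTEGER PRIMARY KEY, payload TEXT)"
        )

def insert_row(day: date, payload: str) -> None:
    """Simple routing logic: rows flagged ERR go to the errors table."""
    suffix = day.strftime("%Y%m%d")
    kind = "errors" if payload.startswith("ERR") else "events"
    conn.execute(f"INSERT INTO {kind}_{suffix} (payload) VALUES (?)", (payload,))

def drop_daily_tables(day: date) -> None:
    """Eventually the old tables are dropped."""
    suffix = day.strftime("%Y%m%d")
    for kind in ("events", "errors"):
        conn.execute(f"DROP TABLE IF EXISTS {kind}_{suffix}")

today = date(2004, 7, 13)
create_daily_tables(today)
insert_row(today, "ok: row 1")
insert_row(today, "ERR: row 2")
counts = {
    kind: conn.execute(
        f"SELECT COUNT(*) FROM {kind}_{today.strftime('%Y%m%d')}"
    ).fetchone()[0]
    for kind in ("events", "errors")
}
print(counts)  # {'events': 1, 'errors': 1}
```

Note that under this pattern the data tables start empty each day, but
the repeated CREATE/DROP activity itself accumulates history in the
database's catalogs over time.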


The problem I am seeing is that after a particular database gets kind of
old, say a couple of months, performance begins to degrade.  Even after
creating brand new tables, my insert speed is slow in comparison (by a
factor of 5 or more) with a brand new schema that has the exact same
tables.  I am running on an IBM 360 dual-processor Linux server with a
100 GB RAID array spanning 5 SCSI disks.  The machine has 1 GB of RAM,
of which 500 MB is dedicated to PostgreSQL.

Just to be clear, the question I have is why would a brand new db schema
allow inserts faster than an older schema with brand new tables?  Since
the tables are empty to start, vacuuming should not be an issue at all.
Each schema is identical in every way except the db name and creation
date.

Any ideas are appreciated.

Thanks,

T.R. Missner
