Thanks to all for the responses. Based on the recommendations, I am going to try a batched commit approach, along
with data purging policies so that data storage does not grow beyond certain thresholds.
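For concreteness, a minimal sketch of what I have in mind, in Python with psycopg2 (the table, column, DSN, and event source below are made up for illustration):

    import psycopg2

    BATCH_SIZE = 1000  # rows per transaction; tune to taste

    def incoming_records():
        # Stand-in for the real high-frequency event source (hypothetical).
        for i in range(10000):
            yield ("event %d" % i,)

    conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
    cur = conn.cursor()

    batch = []
    for row in incoming_records():
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            # One COMMIT (and thus one fsync) per 1000 rows, not per row.
            cur.executemany("INSERT INTO events (payload) VALUES (%s)", batch)
            conn.commit()
            batch = []
    if batch:  # flush the final partial batch
        cur.executemany("INSERT INTO events (payload) VALUES (%s)", batch)
        conn.commit()
    conn.close()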
- J
-----Original Message-----
From: Craig Ringer [mailto:craig@postnewspapers.com.au]
Sent: Wednesday, November 04, 2009 5:12 PM
To: Merlin Moncure
Cc: Jay Manni; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] High Frequency Inserts to Postgres Database vs Writing to a File
Merlin Moncure wrote:
> Postgres can handle multiple thousands of inserts/sec, but your hardware
> most likely can't handle multiple thousands of transactions/sec if fsync is on.
commit_delay or async commit should help a lot there.
http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
http://www.postgresql.org/docs/8.3/static/runtime-config-wal.html
Please do *not* turn fsync off unless you want to lose your data.
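A session can opt in to async commit from the client side; a minimal sketch with psycopg2 (the DSN is hypothetical):

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
    cur = conn.cursor()
    # Async commit: COMMIT returns before the WAL reaches disk. A crash can
    # lose the last few hundred ms of commits, but unlike fsync=off it
    # cannot corrupt the database.
    cur.execute("SET synchronous_commit TO OFF")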
> If you are bulk inserting 1000+ records/sec all day long, make sure
> you have provisioned enough storage for this (that's 86M records/day),
plus any index storage, room for dead tuples if you ever issue UPDATEs, etc.
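(Arithmetic: 1000 rows/sec * 86,400 sec/day = 86.4M rows/day; at an assumed ~100 bytes of heap per row, that's roughly 8-9 GB/day before indexes.)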
--
Craig Ringer