New to PostgreSQL, performance considerations - Mailing list pgsql-performance

From: Daniel van Ham Colchete
Subject: New to PostgreSQL, performance considerations
Date:
Msg-id: 8a0c7af10612101141n2c7727c3r2f92345753960808@mail.gmail.com
Responses: Re: New to PostgreSQL, performance considerations (Shane Ambler <pgsql@007Marketing.com>)
List: pgsql-performance
Hi y'all,

Although I've worked with databases for more than 7 years now, I'm
pretty new to PostgreSQL.

I have an application using SQLite3 as an embedded SQL solution
because it's simple and it can handle the load that *most* of my
clients have.

Because of that '*most*' part, because of the client/server model, and
because of the license, I'm thinking about starting to use PostgreSQL.

My app uses only three tables: one has low read and really high write
rates, a second has high read and low write rates, and the third one
is high on both.

I need a db that can handle something like 500 operations/sec
continuously. It's something like 250 writes/sec and 250 reads/sec. My
tables use indexes.
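
Just to put a rough number on that, I figure something like a stock
pgbench run could work as a first sanity check (the database name
below is just a placeholder, and the built-in TPC-B-like mix isn't my
exact workload; a custom script passed with -f would get closer):

    $ pgbench -i -s 100 mydb       # create and load the sample tables
    $ pgbench -c 10 -t 10000 mydb  # 10 clients, 10000 transactions each;
                                   # reports a tps figure at the end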

Each table would have to handle 5 million rows/day. So I'm thinking
about creating different tables (clusters?) for different days to make
queries return faster. Am I right, or is there no problem in having
150 million rows (one month) in a table?
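
Something like the table-inheritance sketch below is what I have in
mind, if I understand how PostgreSQL does this (table and column names
are made up for the example):

    -- hypothetical parent table; one child table per day
    CREATE TABLE mail_traffic (
        ts        timestamptz NOT NULL,
        sender    text,
        recipient text,
        subject   text
    );

    CREATE TABLE mail_traffic_2006_12_10 (
        CHECK (ts >= '2006-12-10' AND ts < '2006-12-11')
    ) INHERITS (mail_traffic);

    CREATE INDEX mail_traffic_2006_12_10_ts_idx
        ON mail_traffic_2006_12_10 (ts);

    -- with constraint_exclusion on, a query against the parent that
    -- filters on ts should only scan the matching day's table
    SET constraint_exclusion = on;
    SELECT count(*) FROM mail_traffic
     WHERE ts >= '2006-12-10' AND ts < '2006-12-11';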

All my data is e-mail traffic: user's quarantine, inbound traffic,
outbound traffic, sender, recipients, subjects, attachments, etc...

What do you people say, is it possible with PostgreSQL? What kind of
hardware would I need to handle that kind of traffic?

On a first test, on a badly tuned AMD Athlon XP 1800+ (ergh!) I could
do 1400 writes/sec locally after I disabled fsync. We have UPSs; in
the last year we had only 1 power failure.
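
For reference, the relevant bit of the config in that test was roughly
this (just a sketch; I know fsync = off means a crash at the wrong
moment can corrupt the data, UPS or not):

    # postgresql.conf (sketch of the relevant setting only)
    fsync = off   # fast, but an OS crash or power loss mid-write can
                  # corrupt the database; the UPS doesn't cover those
    #fsync = on   # the safe default; a battery-backed write cache on
                  # the controller is the usual way to keep commit
                  # latency down with fsync enabled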

Thank you all for your tips.

Best regards,
Daniel Colchete
