Re: improving write performance for logging application - Mailing list pgsql-performance

From: Ian Westmacott
Subject: Re: improving write performance for logging application
Date:
Msg-id: 1136382865.24450.7.camel@spectre.intellivid.com
In response to: improving write performance for logging application (Steve Eckmann <eckmann@computer.org>)
Responses: Re: improving write performance for logging application (Steve Eckmann <eckmann@computer.org>)
           Re: improving write performance for logging (Ron <rjpeace@earthlink.net>)
List: pgsql-performance
We have a similar application that's doing upwards of 2B inserts
per day.  We have spent a lot of time optimizing this, and found the
following to be most beneficial:

1)  use COPY (BINARY if possible)
2)  don't use triggers or foreign keys
3)  put WAL and tables on different spindles (channels if possible)
4)  put as much as you can in each COPY, and put as many COPYs as
    you can in a single transaction.
5)  watch out for XID wraparound
6)  tune checkpoint* and bgwriter* parameters for your I/O system
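
For (6), here is a minimal postgresql.conf sketch of the sort of knobs
involved.  The values are illustrative starting points only, not
recommendations -- they have to be sized against your own WAL and table
spindles, and some bgwriter parameter names changed between 8.x releases:

    # illustrative starting points only
    wal_buffers = 64              # 8kB pages of WAL buffered before write
    checkpoint_segments = 64      # spread checkpoints further apart
    checkpoint_timeout = 900      # seconds between forced checkpoints
    bgwriter_delay = 200          # ms between background-writer rounds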

On Tue, 2006-01-03 at 16:44 -0700, Steve Eckmann wrote:
> I have questions about how to improve the write performance of PostgreSQL for logging data from a real-time
> simulation. We found that MySQL 4.1.3 could log about 1480 objects/second using MyISAM tables or about 1225
> objects/second using InnoDB tables, but PostgreSQL 8.0.3 could log only about 540 objects/second. (test system:
> quad-Itanium2, 8GB memory, SCSI RAID, GigE connection from simulation server, nothing running except system
> processes and database system under test)
>
> We also found that we could improve MySQL performance significantly using MySQL's "INSERT" command extension
> allowing multiple value-list tuples in a single command; the rate for MyISAM tables improved to about 2600
> objects/second. PostgreSQL doesn't support that language extension. Using the COPY command instead of INSERT
> might help, but since rows are being generated on the fly, I don't see how to use COPY without running a
> separate process that reads rows from the application and uses COPY to write to the database. The application
> currently has two processes: the simulation and a data collector that reads events from the sim (queued in
> shared memory) and writes them as rows to the database, buffering as needed to avoid lost data during periods
> of high activity. To use COPY I think we would have to split our data collector into two processes
> communicating via a pipe.
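
One note on the pipe question: libpq can drive COPY directly from the client
with PQputCopyData() and PQputCopyEnd(), so the existing data collector could
stream its buffered rows itself, with no second process or pipe.  A rough
sketch follows; the table and column names here are invented for illustration:

    /* Rough sketch: push one buffered batch of rows through COPY from the
     * same process.  "sim_events" and its columns are placeholders. */
    #include <stdio.h>
    #include <libpq-fe.h>

    int flush_batch(PGconn *conn, const char *rows, int len)
    {
        PGresult *res = PQexec(conn,
            "COPY sim_events (t, obj_id, x, y) FROM STDIN");
        if (PQresultStatus(res) != PGRES_COPY_IN) {
            fprintf(stderr, "COPY start failed: %s", PQerrorMessage(conn));
            PQclear(res);
            return -1;
        }
        PQclear(res);

        /* rows: newline-terminated lines of tab-separated column values */
        if (PQputCopyData(conn, rows, len) != 1 ||
            PQputCopyEnd(conn, NULL) != 1) {
            fprintf(stderr, "COPY send failed: %s", PQerrorMessage(conn));
            return -1;
        }

        res = PQgetResult(conn);   /* final status of the COPY command */
        int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
        if (ok != 0)
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return ok;
    }

Several such batches can then be wrapped in a single BEGIN/COMMIT, per (4) in
the list above.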
>
> Query performance is not an issue: we found that when suitable indexes are added PostgreSQL is fast enough on
> the kinds of queries our users make. The crux is writing rows to the database fast enough to keep up with the
> simulation.
>
> Are there general guidelines for tuning the PostgreSQL server for this kind of application? The suggestions
> I've found include disabling fsync (done), increasing the value of wal_buffers, and moving the WAL to a
> different disk, but these aren't likely to produce the 3x improvement that we need. On the client side I've
> found only two suggestions: disable autocommit and use COPY instead of INSERT. I think I've effectively
> disabled autocommit by batching up to several hundred INSERT commands in each PQexec() call, and it isn't
> clear that COPY is worth the effort in our application.
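
For what it's worth, multiple statements sent in one PQexec() string already
run in a single transaction unless you put explicit BEGIN/COMMIT in the
string, so that batching does behave as one transaction per call.  Made
explicit, a batch of that shape might look like the sketch below; the table
name and row layout are invented, and real code would need to escape or
parameterize the values:

    /* Rough sketch: several hundred INSERTs sent in one PQexec() round
     * trip, wrapped in an explicit transaction.  "sim_events" and the
     * struct layout are placeholders. */
    #include <stdio.h>
    #include <libpq-fe.h>

    struct event { int id; double x; double y; };

    int insert_batch(PGconn *conn, const struct event *ev, int n)
    {
        static char buf[1 << 20];
        size_t off = 0;
        int i;

        off += snprintf(buf + off, sizeof(buf) - off, "BEGIN;\n");
        for (i = 0; i < n && off < sizeof(buf) - 256; i++)
            off += snprintf(buf + off, sizeof(buf) - off,
                            "INSERT INTO sim_events VALUES (%d, %g, %g);\n",
                            ev[i].id, ev[i].x, ev[i].y);
        off += snprintf(buf + off, sizeof(buf) - off, "COMMIT;\n");

        PGresult *res = PQexec(conn, buf);
        int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
        if (ok != 0)
            fprintf(stderr, "batch failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return ok;
    }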
>
> Thanks.
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: Don't 'kill -9' the postmaster
--
Ian Westmacott <ianw@intellivid.com>
Intellivid Corp.

