Re: improving write performance for logging

From: Ron
Subject: Re: improving write performance for logging
Date:
Msg-id: 7.0.1.0.2.20060104092139.01c37da0@earthlink.net
In response to: Re: improving write performance for logging application  (Ian Westmacott)
Responses: Re: improving write performance for logging  (Ian Westmacott)
List: pgsql-performance

2B is a lot of inserts.  If you had to guess,
what do you think is the maximum number of inserts you could do in a day?

How large is each record being inserted?

How much can you put in a COPY and how many COPYs can you put into a transaction?

What values are you using for bgwriter* and checkpoint*?
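
For example, something along these lines -- 8.0-era parameter names, and the values are placeholders to react to, not recommendations:

    # illustrative postgresql.conf fragment (tune against your own I/O system)
    wal_buffers = 64            # WAL buffers in shared memory
    checkpoint_segments = 64    # WAL segments between checkpoints
    checkpoint_timeout = 900    # seconds between forced checkpoints
    checkpoint_warning = 300    # warn if checkpoints come closer than this
    bgwriter_delay = 200        # ms between background writer rounds
    bgwriter_percent = 5        # % of dirty buffers written per round
    bgwriter_maxpages = 200     # cap on pages written per round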

What HW are you running on, and what kind of performance do you typically get?

Inquiring minds definitely want to know ;-)
Ron


At 08:54 AM 1/4/2006, Ian Westmacott wrote:
>We have a similar application that's doing upwards of 2B inserts
>per day.  We have spent a lot of time optimizing this, and found the
>following to be most beneficial:
>
>1)  use COPY (BINARY if possible)
>2)  don't use triggers or foreign keys
>3)  put WAL and tables on different spindles (channels if possible)
>4)  put as much as you can in each COPY, and put as many COPYs as
>     you can in a single transaction.
>5)  watch out for XID wraparound
>6)  tune checkpoint* and bgwriter* parameters for your I/O system
>
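
As a rough illustration of items 1 and 4, here is a minimal libpq sketch -- the log_events table, its column layout, and the batch sizes are made up for the example -- that streams rows into COPY with PQputCopyData and wraps several COPY commands in one transaction:

    /* Sketch only: many rows per COPY, several COPYs per transaction.
     * Assumes a hypothetical table log_events(batch int, id int, payload text). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    static void die(PGconn *conn, const char *msg)
    {
        fprintf(stderr, "%s: %s", msg, PQerrorMessage(conn));
        PQfinish(conn);
        exit(1);
    }

    int main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=logdb");
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
            die(conn, "connect");

        res = PQexec(conn, "BEGIN");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            die(conn, "BEGIN");
        PQclear(res);

        for (int batch = 0; batch < 10; batch++)        /* several COPYs ... */
        {
            res = PQexec(conn, "COPY log_events FROM STDIN");
            if (PQresultStatus(res) != PGRES_COPY_IN)
                die(conn, "COPY start");
            PQclear(res);

            for (int row = 0; row < 10000; row++)       /* ... many rows each */
            {
                char line[128];
                int  n = snprintf(line, sizeof(line), "%d\t%d\tpayload-%d\n",
                                  batch, row, row);

                if (PQputCopyData(conn, line, n) != 1)
                    die(conn, "COPY data");
            }

            if (PQputCopyEnd(conn, NULL) != 1)
                die(conn, "COPY end");

            res = PQgetResult(conn);                    /* final COPY status */
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                die(conn, "COPY result");
            PQclear(res);
        }

        res = PQexec(conn, "COMMIT");                   /* ... one transaction */
        PQclear(res);
        PQfinish(conn);
        return 0;
    }

Binary COPY (item 1) goes through the same calls; only the COPY statement gains BINARY and the buffers handed to PQputCopyData switch to the binary file format.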
>On Tue, 2006-01-03 at 16:44 -0700, Steve Eckmann wrote:
> > I have questions about how to improve the write performance of
> > PostgreSQL for logging data from a real-time simulation. We found
> > that MySQL 4.1.3 could log about 1480 objects/second using MyISAM
> > tables or about 1225 objects/second using InnoDB tables, but
> > PostgreSQL 8.0.3 could log only about 540 objects/second. (test
> > system: quad-Itanium2, 8GB memory, SCSI RAID, GigE connection from
> > simulation server, nothing running except system processes and
> > database system under test)
> >
> > We also found that we could improve MySQL performance significantly
> > using MySQL's "INSERT" command extension allowing multiple value-list
> > tuples in a single command; the rate for MyISAM tables improved to
> > about 2600 objects/second. PostgreSQL doesn't support that language
> > extension. Using the COPY command instead of INSERT might help, but
> > since rows are being generated on the fly, I don't see how to use
> > COPY without running a separate process that reads rows from the
> > application and uses COPY to write to the database. The application
> > currently has two processes: the simulation and a data collector that
> > reads events from the sim (queued in shared memory) and writes them
> > as rows to the database, buffering as needed to avoid lost data
> > during periods of high activity. To use COPY I think we would have to
> > split our data collector into two processes communicating via a pipe.
> >
> > Query performance is not an issue: we found that when suitable
> > indexes are added PostgreSQL is fast enough on the kinds of queries
> > our users make. The crux is writing rows to the database fast enough
> > to keep up with the simulation.
> >
> > Are there general guidelines for tuning the PostgreSQL server for
> > this kind of application? The suggestions I've found include
> > disabling fsync (done), increasing the value of wal_buffers, and
> > moving the WAL to a different disk, but these aren't likely to
> > produce the 3x improvement that we need. On the client side I've
> > found only two suggestions: disable autocommit and use COPY instead
> > of INSERT. I think I've effectively disabled autocommit by batching
> > up to several hundred INSERT commands in each PQexec() call, and it
> > isn't clear that COPY is worth the effort in our application.
> >
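
For reference, the batching described above looks roughly like this in libpq (hypothetical table and row count). The concatenated statements travel as one simple-query message, and the server runs them as a single implicit transaction unless BEGIN/COMMIT are included explicitly, which is why this does effectively disable autocommit:

    /* Sketch only: several hundred INSERTs concatenated into one PQexec() call.
     * Assumes a hypothetical table log_events(batch int, id int, payload text). */
    #include <stdio.h>
    #include <libpq-fe.h>

    #define BATCH_ROWS 500

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=logdb");
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connect: %s", PQerrorMessage(conn));
            return 1;
        }

        char   sql[BATCH_ROWS * 64];
        size_t len = 0;

        for (int row = 0; row < BATCH_ROWS; row++)      /* build one big batch */
            len += snprintf(sql + len, sizeof(sql) - len,
                            "INSERT INTO log_events VALUES (0, %d, 'payload');",
                            row);

        PGresult *res = PQexec(conn, sql);              /* one round trip, one txn */
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "batch failed: %s", PQerrorMessage(conn));
        PQclear(res);

        PQfinish(conn);
        return 0;
    }

The main remaining cost here is parsing and planning each INSERT individually, which is exactly what COPY avoids.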
> > Thanks.
> >
> >
>--
>Ian Westmacott <>
>Intellivid Corp.
>
>




