Re: improving write performance for logging application - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: improving write performance for logging application
Date:
Msg-id: 27720.1136332812@sss.pgh.pa.us
In response to: improving write performance for logging application (Steve Eckmann <eckmann@computer.org>)
Responses: Re: improving write performance for logging application (dlang <dlang@invendra.net>)
           Re: improving write performance for logging application (Steve Eckmann <eckmann@computer.org>)
List: pgsql-performance
Steve Eckmann <eckmann@computer.org> writes:
> We also found that we could improve MySQL performance significantly
> using MySQL's "INSERT" command extension allowing multiple value-list
> tuples in a single command; the rate for MyISAM tables improved to
> about 2600 objects/second. PostgreSQL doesn't support that language
> extension. Using the COPY command instead of INSERT might help, but
> since rows are being generated on the fly, I don't see how to use COPY
> without running a separate process that reads rows from the
> application and uses COPY to write to the database.

Can you conveniently alter your application to batch INSERT commands
into transactions?  I.e.

    BEGIN;
    INSERT ...;
    -- ... maybe 100 or so inserts ...
    COMMIT;
    BEGIN;
    -- ... lather, rinse, repeat ...

This cuts down the transactional overhead quite a bit: each COMMIT has
to flush the WAL to disk, so batching pays that cost once per hundred
rows instead of once per row.  A downside is that you lose multiple rows
if any INSERT fails, but then the same would be true of multiple VALUES
lists per INSERT.
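
A rough sketch of that batching driven from C with libpq (connection
string, table, and batch size are assumptions, not prescriptions):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Helper (hypothetical): run one SQL command and report any error. */
    static int exec_sql(PGconn *conn, const char *sql)
    {
        PGresult *res = PQexec(conn, sql);
        int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
        if (!ok)
            fprintf(stderr, "%s: %s", sql, PQerrorMessage(conn));
        PQclear(res);
        return ok;
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=logdb");   /* assumed */
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        /* ~100 INSERTs per transaction: one commit (and one WAL flush)
         * per batch instead of one per row. */
        for (int batch = 0; batch < 10; batch++)
        {
            exec_sql(conn, "BEGIN");
            for (int i = 0; i < 100; i++)
            {
                char sql[256];
                snprintf(sql, sizeof(sql),
                         "INSERT INTO log_table (ts, msg) "
                         "VALUES (now(), 'message %d')", batch * 100 + i);
                exec_sql(conn, sql);
            }
            exec_sql(conn, "COMMIT");
        }

        PQfinish(conn);
        return 0;
    }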

            regards, tom lane
