Re: improving write performance for logging application - Mailing list pgsql-performance

From: Steve Eckmann
Subject: Re: improving write performance for logging application
Date:
Msg-id: 43BC65FC.3000903@computer.org
In response to: Re: improving write performance for logging application (Kelly Burkhart <kelly@kkcsm.net>)
List: pgsql-performance
Kelly Burkhart wrote:
On 1/4/06, Steve Eckmann <eckmann@computer.org> wrote:
Thanks, Steinar. I don't think we would really run with fsync off, but I need to document the performance tradeoffs. You're right that my explanation was confusing, probably because I'm confused about how to use COPY! I could batch multiple INSERTs into COPY statements, but I don't see how to do that without adding another process to read from STDIN, since the application that is currently the database client constructs rows on the fly. I would need to get those rows into some process's STDIN stream or into a server-side file before COPY could be used, right?

Steve,

You can use COPY without resorting to another process. See the libpq documentation for "Functions Associated with the COPY Command". We do something like this:

char *mbuf;

// allocate space and fill mbuf with appropriately formatted data somehow

PQexec( conn, "begin" );
PQexec( conn, "copy mytable from stdin" );
PQputCopyData( conn, mbuf, strlen(mbuf) );
PQputCopyEnd( conn, NULL );
PQexec( conn, "commit" );

-K
Thanks for the concrete example, Kelly. I had read the relevant libpq doc but didn't put the pieces together.
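
For my own notes, here is a minimal sketch of how I read that section of the libpq documentation after your example, assuming an already-open PGconn *conn and a hypothetical table name log_table; the row formatting and error handling are abbreviated:

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

// Send one batch of pre-formatted rows through COPY ... FROM STDIN.
// "rows" holds tab-separated columns, one row per newline, in the
// default text COPY format. Returns 0 on success, -1 on failure.
// (log_table is a placeholder for the real logging table.)
static int copy_batch( PGconn *conn, const char *rows )
{
    PGresult *res = PQexec( conn, "COPY log_table FROM STDIN" );
    if ( PQresultStatus( res ) != PGRES_COPY_IN ) {
        fprintf( stderr, "COPY failed: %s", PQerrorMessage( conn ) );
        PQclear( res );
        return -1;
    }
    PQclear( res );

    if ( PQputCopyData( conn, rows, strlen( rows ) ) != 1 ||
         PQputCopyEnd( conn, NULL ) != 1 ) {
        fprintf( stderr, "sending COPY data failed: %s", PQerrorMessage( conn ) );
        return -1;
    }

    // PQputCopyEnd only queues the end-of-data marker; the server's
    // verdict on the COPY comes back through PQgetResult.
    res = PQgetResult( conn );
    if ( PQresultStatus( res ) != PGRES_COMMAND_OK ) {
        fprintf( stderr, "COPY did not complete: %s", PQerrorMessage( conn ) );
        PQclear( res );
        return -1;
    }
    PQclear( res );
    return 0;
}

Each batch could be wrapped in BEGIN/COMMIT the way your example shows, or several batches could share one transaction to cut down on commit overhead.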

Regards, Steve
