Re: Performance considerations for very heavy INSERT traffic - Mailing list pgsql-performance

From Vivek Khera
Subject Re: Performance considerations for very heavy INSERT traffic
Msg-id 0C638D58-7AA9-4668-BE41-A9CC05DA3DA8@khera.org
In response to Re: Performance considerations for very heavy INSERT traffic  (Brandon Black <blblack@gmail.com>)
List pgsql-performance

On Sep 12, 2005, at 6:02 PM, Brandon Black wrote:

        - using COPY instead of INSERT ?
                (should be easy to do from the aggregators)

Possibly, although it would break the current design of returning the database transaction status for a single client packet back to the client on transaction success/failure. The aggregator could batch several clients' data into a series of delayed multi-row COPY statements.
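A rough sketch of what that batching could look like, assuming a hypothetical `stats` table and row shape (client_id, metric, value) -- COPY reads tab-separated text from its input, so the aggregator only has to render the accumulated rows into that format:

```python
import io

def rows_to_copy_buffer(rows):
    """Render client rows as tab-separated text suitable for COPY FROM STDIN.

    Hypothetical row shape: (client_id, metric, value)."""
    buf = io.StringIO()
    for client_id, metric, value in rows:
        buf.write(f"{client_id}\t{metric}\t{value}\n")
    buf.seek(0)
    return buf

# With psycopg2 the aggregator could then feed the buffer to the server
# (connection and table are assumed):
#   cur.copy_expert("COPY stats (client_id, metric, value) FROM STDIN", buf)
```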


Buffer through the file system on your aggregator. Once you "commit" to the local disk file, report back to your client that you got the data. Then insert into the actual Postgres DB in large batches of inserts inside a single Postgres transaction.
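A minimal sketch of that flow, with hypothetical file and table names and a stand-in `execute` callable where a real DB cursor would go -- the key points are fsyncing the spool before acking the client, and replaying the whole spool inside one transaction:

```python
import os

def accept_packet(path, line):
    """Durably append one client packet to the local spool file, then ack."""
    with open(path, "a") as f:
        f.write(line + "\n")
        f.flush()
        os.fsync(f.fileno())  # data is on disk before we ack the client
    return "OK"

def flush_to_postgres(path, execute):
    """Replay the spool as one big batch inside a single transaction.

    `execute` stands in for a real DB call (e.g. a psycopg2 cursor.execute)."""
    with open(path) as f:
        lines = [ln.rstrip("\n") for ln in f]
    execute("BEGIN")
    for ln in lines:
        execute("INSERT INTO stats VALUES (%s)" % ln)  # placeholder table
    execute("COMMIT")
    os.remove(path)  # spool has been drained
```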

We have our web server log certain tracking requests to a local file. With file locks and append mode, it is extremely quick and has little contention delay. Then every so often we lock the file, rename it, release the lock, and process it at our leisure, doing the inserts to Pg in one big transaction.
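The lock/rename/release part could be sketched like this (file names are hypothetical; `fcntl.flock` is POSIX-only):

```python
import fcntl
import os

def log_request(path, line):
    """Append one tracking line; an exclusive flock keeps concurrent
    writers from interleaving."""
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        f.write(line + "\n")
        f.flush()
        fcntl.flock(f, fcntl.LOCK_UN)

def rotate(path):
    """Lock the live file, rename it aside, release the lock.

    The rotated file can then be replayed into Pg at leisure, in one
    big transaction, and deleted afterwards."""
    rotated = path + ".processing"
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        os.rename(path, rotated)
        fcntl.flock(f, fcntl.LOCK_UN)
    return rotated
```

Renaming while holding the lock means writers that open the old path next either create a fresh file or block until the rotation is done, so no tracking lines are lost mid-swap.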

Vivek Khera, Ph.D.

+1-301-869-4449 x806


