Depends - we don't know enough about your needs. Some questions:
Is this a constant stream of data, or are you just capturing a burst?
Are you feeding it through one connection or several in parallel?
Did you tune the memory settings in postgresql.conf, or are they still at
the very conservative defaults? (Example settings below.)
How soon does the data need to be available for query? (Obviously there will
be up to a 1200-record delay just due to the transaction batching.)
What generates the timestamp? I.e., is it an insert into foo values (now(),
packetname, data), or is the app providing the timestamp? (There's a rough
sketch of the first approach below.)
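
On the postgresql.conf point: out of the box the memory settings are sized
for a very small machine, and on dedicated hardware you'd normally raise at
least the following. The numbers here are only illustrative - size them to
your RAM rather than copying them blindly:

    # postgresql.conf (illustrative values, not a recommendation)
    shared_buffers = 4096         # 8kB pages; the default 64 is tiny
    sort_mem = 8192               # kB available to each sort
    wal_buffers = 16              # default is 8
    checkpoint_segments = 8       # fewer checkpoint stalls under heavy insert
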
More info about the app will help.
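
On the timestamp/bytea questions, here's a rough sketch of the kind of loop
I'm picturing, assuming the libpq conversion function you mean is
PQescapeBytea(). The table and column names (packets, ts, name, data) are
made up for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    static void exec_or_die(PGconn *conn, const char *sql)
    {
        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQfinish(conn);
            exit(1);
        }
        PQclear(res);
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
            return 1;
        }

        unsigned char packet[512];
        memset(packet, 0xAB, sizeof packet);   /* stand-in for one packet */

        exec_or_die(conn, "BEGIN");
        for (int i = 0; i < 1200; i++)
        {
            size_t esclen;
            unsigned char *esc = PQescapeBytea(packet, sizeof packet, &esclen);
            char *sql = malloc(esclen + 128);

            /* now() is evaluated on the server and is fixed at
             * transaction start, so all 1200 rows in this batch get
             * the SAME timestamp. */
            sprintf(sql,
                    "INSERT INTO packets (ts, name, data) "
                    "VALUES (now(), 'pkt', '%s')", esc);
            exec_or_die(conn, sql);

            free(sql);
            free(esc);   /* PQescapeBytea's result is malloc'd */
        }
        exec_or_die(conn, "COMMIT");

        PQfinish(conn);
        return 0;
    }

One thing to notice there: because now() is frozen at transaction start,
every row in a 1200-record batch carries the same timestamp. If each packet
needs its own time, timeofday() or an app-supplied timestamp is the way to
go - which is why I'm asking what generates it.
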
Cheers,
Steve
On Saturday 10 May 2003 8:25 am, Adam Siegel wrote:
> I have realtime data flowing at a rate of 500, 512 byte packets per second.
> I want to log the info in a database table with two other columns, one for
> a timestamp and one for a name of the packet. The max rate I can achieve
> is 350 inserts per second on a Sun Blade 2000. The inserts are grouped in
> a transaction and I commit every 1200 records. I am storing the binary
> data in a bytea. I am using the libpq conversion function. Not sure if
> that is slowing me down. But I think it is the insert not the conversion.
>
> Any thoughts on how to achieve this goal?