Re: realtime data inserts - Mailing list pgsql-general

From Ericson Smith
Subject Re: realtime data inserts
Date
Msg-id 1053117234.15138.23.camel@localhost.localdomain
In response to Re: realtime data inserts  ("alex b." <mailinglists1@gmx.de>)
Responses Re: realtime data inserts  (Ron Johnson <ron.l.johnson@cox.net>)
List pgsql-general
You probably want a process that continuously appends the incoming data
to a text file. Every "n" minutes, have the logger rotate the text file,
then load that closed batch into the database.

Over here, we are able to dump around 5,000 records per second into one
of our tables using that methodology.
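The rotate-and-load loop could be sketched in Python roughly like this (the file names and the rotation trigger are illustrative, not what we actually run):

```python
import os
import time

DATA_FILE = "incoming.log"  # hypothetical path for the file being appended to


def append_record(line):
    """The logger process appends each incoming record to the current file."""
    with open(DATA_FILE, "a") as f:
        f.write(line + "\n")


def rotate():
    """Every "n" minutes, rename the current file so the loader can process
    the closed batch while the logger starts a fresh file.

    os.rename() is atomic on POSIX, so no records are lost: the logger
    simply re-creates DATA_FILE on its next append. Returns the name of
    the closed batch file, or None if nothing was written."""
    if not os.path.exists(DATA_FILE):
        return None
    batch = "batch-%d.log" % int(time.time())
    os.rename(DATA_FILE, batch)
    return batch
```

The loader would then feed each closed batch file to COPY (for example via `psql -c "\copy mytable from batch-XXXX.log"`), so the insert path never blocks the stream.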

- Ericson Smith
eric@did-it.com

On Fri, 2003-05-16 at 16:27, alex b. wrote:
> Doug McNaught wrote:
> > Ron Johnson <ron.l.johnson@cox.net> writes:
> >
> >
> >>On Sat, 2003-05-10 at 21:46, Tom Lane wrote:
> >>
> >>>Ron Johnson <ron.l.johnson@cox.net> writes:
> >>>
> >>>>On Sat, 2003-05-10 at 11:00, Tom Lane wrote:
> >>>>
> >>>>>Have you thought about using COPY?
> >>>
> >>>>Generate a temporary file, and then system("COPY /tmp/foobar ...") ?
> >>>
> >>>No, copy from stdin.  No need for a temp file.
> >>
> >>But wouldn't that only work if the input stream is acceptable to
> >>COPY ?
> >
> >
> > Yes, but you could always pipe it through a script or C program to
> > make it so...
>
> let's say I have a continuous datastream of about 1 kB/s coming in for
> many hours, and I'd like to store this data in my db using COPY table
> FROM stdin.
>
> At what point should I COMMIT, or close the stream, and start a new
> COPY FROM?
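To make an arbitrary stream "acceptable to COPY", the pipe-it-through-a-script step mentioned above only has to emit PostgreSQL's COPY text format: tab-separated fields, backslash escapes for tab/newline/backslash, and \N for NULL. A minimal Python sketch (the function names are mine):

```python
def copy_escape(value):
    """Escape a single field for COPY's default text format.
    None becomes \\N; backslash, tab, newline and carriage return
    are backslash-escaped. The backslash must be escaped first."""
    if value is None:
        return r"\N"
    return (value.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n")
                 .replace("\r", "\\r"))


def to_copy_line(fields):
    """Join escaped fields with tabs into one COPY input row."""
    return "\t".join(copy_escape(f) for f in fields) + "\n"
```

As for when to COMMIT: each COPY is a single statement, so a natural choice is to end the COPY at every rotation boundary (every "n" minutes, or every N rows), let that batch commit, and immediately start a new COPY FROM stdin for the next one.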

