On Fri, 11 Mar 2005, Tom Lane wrote:
[ snip ]
> COPY would be my recommendation. For a no-programming-effort solution
> you could just pipe the output of pg_dump --data-only -t mytable
> into psql. Not sure if it's worth developing a custom application to
> replace that.
I'm a programming-effort kind of guy, so I'll try COPY.
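For reference, Tom's no-programming-effort version is roughly the following sketch (table and database names are placeholders, adjust to your setup); pg_dump --data-only emits COPY statements, so the reload is already fast:

```shell
# Hypothetical names: buffer_db is the ramdisk database, master_db the
# disk-based one, session_data the table to move.
# Dump just the data for one table and replay it into the master db.
pg_dump --data-only -t session_data buffer_db | psql master_db
```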
>
>> My web app does lots of inserts that aren't read until a session is
>> complete. The plan is to put the heavy insert session onto a ramdisk based
>> pg-db and transfer the relevant data to the master pg-db upon session
>> completion. Currently running 7.4.6.
>
> Unless you have a large proportion of sessions that are abandoned and
> hence never need be transferred to the main database at all, this seems
> like a dead waste of effort :-(. The work to put the data into the main
> database isn't lessened at all; you've just added extra work to manage
> the buffer database.
The insert-heavy sessions average 175 page hits generating XML and 1000
inserts/updates, which make up 90% of the insert/update load; of those, only
200 inserts need to be transferred to the master db. The other sessions are
read/cache bound. I'm hoping to get a speed-up by moving the temporary
work off the master db and using 1 transaction instead of 175 against the
disk-based master db.
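A sketch of what I have in mind (names are placeholders; since 7.4's psql has no --single-transaction flag, the BEGIN/COMMIT are written out so the whole load is 1 transaction):

```shell
# Dump the session's rows from the ramdisk buffer database to a file...
psql buffer_db -c "\copy session_data TO '/tmp/session.copy'"

# ...then load them into the master db inside one explicit transaction.
psql master_db <<'EOF'
BEGIN;
\copy session_data FROM '/tmp/session.copy'
COMMIT;
EOF
```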
Thanks,
Jelle