Re: Questions about 2 databases. - Mailing list pgsql-performance

From Richard_D_Levine@raytheon.com
Subject Re: Questions about 2 databases.
Date
Msg-id OFEB461F61.135A85B8-ON05256FC1.00720265@ftw.us.ray.com
In response to Questions about 2 databases.  (jelle <jellej@pacbell.net>)
List pgsql-performance
> this seems
> like a dead waste of effort :-(.  The work to put the data into the main
> database isn't lessened at all; you've just added extra work to manage
> the buffer database.

True from the server's viewpoint, but not from the client's: the client
session sees only the throughput of the buffer database, which will be
blazingly fast.  I'm assuming the buffer database's tables are empty or very
small.  Constraints will be a problem if there are PKs or FKs that need to be
satisfied on the server but are not adequately testable in the buffer.  That
might not be an issue if the full table fits on the RAM disk, but you still
have to worry about two clients inserting the same PK.
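One common workaround for the duplicate-PK problem (not from this thread; the
table mytable and its columns below are hypothetical) is to have the main
database hand out the key values, so two clients can never buffer the same PK:

    -- In the main database: the sequence that owns the key space.
    CREATE SEQUENCE mytable_id_seq;

    -- Each client asks the main database for its key first...
    SELECT nextval('mytable_id_seq');   -- say it returns 42

    -- ...then inserts into the buffer database using that value, so
    -- the buffered rows satisfy the PK on the server when transferred.
    INSERT INTO mytable (id, payload) VALUES (42, 'session data');

That costs one round trip to the main server per key, but it keeps the cheap
buffer inserts conflict-free.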

Rick



                         
From: Tom Lane <tgl@sss.pgh.pa.us>
Sent by: pgsql-performance-owner@postgresql.org
To: jellej@pacbell.net
cc: pgsql-performance@postgresql.org
Date: 03/11/2005 03:33 PM
Subject: Re: [PERFORM] Questions about 2 databases.

jelle <jellej@pacbell.net> writes:
> 1) on a single 7.4.6 postgres instance does each database have its own WAL
>     file or is that shared? Is it the same on 8.0.x?

Shared.

> 2) what's the high performance way of moving 200 rows between similar
>     tables on different databases? Does it matter if the databases are
>     on the same or separate postgres instances?

COPY would be my recommendation.  For a no-programming-effort solution
you could just pipe the output of pg_dump --data-only -t mytable
into psql.  Not sure if it's worth developing a custom application to
replace that.
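As a minimal sketch (the database names buffer_db and main_db and the table
mytable are placeholders, not from the thread), either of these pipelines
moves the rows:

    # Dump one table's rows from the buffer database and replay
    # them into the main database.
    pg_dump --data-only -t mytable buffer_db | psql main_db

    # Roughly the same thing, driving COPY directly:
    psql -d buffer_db -c "COPY mytable TO STDOUT" \
        | psql -d main_db -c "COPY mytable FROM STDIN"

Either way the rows travel as a single COPY stream, which is hard to beat for
a couple hundred rows.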

> My web app does lots of inserts that aren't read until a session is
> complete. The plan is to put the heavy insert session onto a ramdisk based
> pg-db and transfer the relevant data to the master pg-db upon session
> completion. Currently running 7.4.6.

Unless you have a large proportion of sessions that are abandoned and
hence never need be transferred to the main database at all, this seems
like a dead waste of effort :-(.  The work to put the data into the main
database isn't lessened at all; you've just added extra work to manage
the buffer database.

                                     regards, tom lane




