Re: Running Postgres Daemons with same data files - Mailing list pgsql-admin

From Glenn Wiorek
Subject Re: Running Postgres Daemons with same data files
Date
Msg-id 017f01c3c33d$c4a9f580$143264c8@jmlafferty.com
Whole thread Raw
In response to Re: Running Postgres Daemons with same data files  (Bhartendu Maheshwari <bhartendum@jataayusoft.com>)
List pgsql-admin
He might also want to take a look at Sun's grid offering - the development kit
and most of the components are free, and it also supports Linux.  I'd bet that
two Postgres servers kept in sync via replication, with the grid set up to do
the switchover should one fail, would be about as low-cost an HA solution as
he could get.


----- Original Message -----
From: "Patrick Spinler" <pspinler@yahoo.com>
To: "Postgres Mailing List" <pgsql-admin@postgresql.org>
Cc: "Bhartendu Maheshwari" <bhartendum@jataayusoft.com>
Sent: Saturday, December 13, 2003 11:19 PM
Subject: Re: [ADMIN] Running Postgres Daemons with same data files


>
> Just to add a tidbit on top of this, there are commercial RDBMSs that
> do, or at least say they do, clustered database servers.  (cough cough
> oracle, cough cough ingres)
>
> The products are expensive as all get out, a bastard to set up, and
> require truly kickass SAN storage and clustered hardware to be
> meaningful.  Not, in general, something to be set up on a lark.
>
> You can get an awful lot of what you seem to want with replication.
> Have you considered doing so?  For instance, I expect there are
> replication products for Postgres that would allow a disconnected client
> to push transactions up to a master server, then resynchronise itself
> with the master.
>
> In terms of HA, in addition to replication solutions, you can also do a
> fair-to-middling job of failover by having a primary and hot-backup
> database server on a heartbeat/STONITH setup, using a shared, redundant
> SAN as a data store.  The hot backup would mount the SAN drives on
> primary failure, take over the IP, and start the postmaster.
>
> Of course, just the SAN setup for such a beast is going to run to a
> minimum of 5 figures, and any transaction that might be in progress will
> be borked.  Oh, and this complicates the mobile operations requirement
> quite a bit too. :-(
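The takeover sequence described above can be sketched as a dry-run script. This is an illustration only: the device path, data directory, service IP, and interface name are placeholder assumptions, not values from this thread, and a real deployment would have heartbeat invoke resource scripts like this only after fencing (STONITH-ing) the failed primary.

```python
import subprocess

DRY_RUN = True  # set to False only on a real, fenced standby node


def run(cmd):
    """Execute one takeover step, or just report it in dry-run mode."""
    if DRY_RUN:
        return "+ " + " ".join(cmd)
    subprocess.run(cmd, check=True)
    return " ".join(cmd)


def take_over(san_dev="/dev/san/pgdata", pgdata="/var/lib/pgsql/data",
              vip="192.168.1.100/24", iface="eth0"):
    """Steps the hot backup performs once the primary is confirmed dead
    (and fenced via STONITH, so it can no longer write to the SAN).
    All paths and addresses here are hypothetical."""
    steps = [
        ["mount", san_dev, pgdata],                # mount the SAN drives
        ["ip", "addr", "add", vip, "dev", iface],  # take over the IP
        ["pg_ctl", "-D", pgdata, "start"],         # start the postmaster
    ]
    return [run(step) for step in steps]


print("\n".join(take_over()))
```

With DRY_RUN set, the script only prints the commands it would issue. On startup the postmaster replays the WAL and aborts whatever was in flight on the failed primary, which is exactly the "borked" in-progress transaction noted above.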
>
> If you feel really really really masochistic, uh, I mean ambitious, you
> could start looking at what it would take to port and run an RDBMS server
> in a truly clustered, high-availability environment.  It will not be
> simple.  You have issues of distributed cache coherency, distributed
> locking, distributed filesystems/datastores, load balancing on a
> per-transaction basis (wouldn't want to send the second part of the same
> transaction to a different server ...), recovery from server node
> failure, adding server nodes to a running instance, and lots of other
> things I haven't thought of.
>
> This is complicated by the fact that there really isn't any mature Linux
> cluster solution that gives decent support for everything you've asked
> for yet.  The closest project I'm aware of is SSIC-Linux.  Look it up on
> SourceForge if you're curious.
>
> Good luck,
> -- Pat
>
> p.s.  Yes, I was brought up and spoiled on VMS clusters.  Still, nothing
> beats good ol' RDB for a clustered database solution.
>
> Bhartendu Maheshwari wrote:
> > Dear Hal, Frank, Oli and all,
> >
> > I understand what you are all trying to say, and I know this is not a
> > good way of designing it, but we are planning to use the database to
> > keep mobile transactions, and at the same time we need to provide an
> > HA solution. The one solution I derived from the discussion uses one
> > server and multiple clients, but the issue with this is that if the
> > system on which the database server is running goes down, then HA and
> > load balancing are of no use, since without the data the other server
> > can't do anything.
> >
> > What I have in mind is the following implementation:-
> >
> > Step 1 :- Mount the data files from the NAS server.
> > Step 2 :- Start postgres with the mounted data.
> > Step 3 :- Lock the data files from one server.
> > Step 4 :- Do the database operations.
> > Step 5 :- Commit to the database files.
> > Step 6 :- Unlock the database files.
> > Step 7 :- Now the other server can do the same.
> >
> > Or if anybody has another solution for this, please suggest it. How
> > can I commit the data into the database files and flush the cache with
> > the latest data file contents? Is there any command to refresh both?
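The reason there is no command to "refresh both" can be seen with a toy model (an illustration only, not PostgreSQL internals): each postmaster keeps a private buffer cache over the shared data files, and nothing in the mount/lock/commit/unlock sequence above invalidates the other server's cache.

```python
# Toy model: two "servers" share the same data files (the NAS), but each
# keeps a private page cache, as each postmaster's shared buffers would.

class Server:
    def __init__(self, disk):
        self.disk = disk    # the shared "NAS" data files
        self.cache = {}     # this server's private buffer cache

    def read(self, page):
        if page not in self.cache:         # cache miss: fetch from disk
            self.cache[page] = self.disk[page]
        return self.cache[page]            # cache hit: possibly stale

    def write(self, page, value):
        self.cache[page] = value
        self.disk[page] = value            # "commit" to the shared files


disk = {"row1": "old"}
a, b = Server(disk), Server(disk)

b.read("row1")            # server B caches "old"
a.write("row1", "new")    # server A commits "new" to the shared files
print(b.read("row1"))     # B still answers "old" from its own cache
```

Server B keeps serving "old" even though the shared files say "new". A real second postmaster would behave the same way at best, and corrupt the data files at worst, which is why the replies in this thread keep steering toward replication instead of shared data files.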
> >
> > Thank you, Hal, for this:
> >
> > *************************************************************************
> > If you really, really do need an HA solution, then I'd hunt around for
> > someone to add to your team who has extensive experience in this kind of
> > thing, since it's all too easy otherwise to unwittingly leave in lots of
> > single points of failure.
> >
> > *************************************************************************
> >
> > If you really know someone who could help me in this regard, I would
> > like his help with this issue, and I want to derive a common technique
> > that everyone can use.
> >
> > regards
> > bhartendu
> >
>
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 7: don't forget to increase your free space map settings
>
>


