Re: Running Postgres Daemons with same data files - Mailing list pgsql-admin

From Bhartendu Maheshwari
Subject Re: Running Postgres Daemons with same data files
Date
Msg-id 1071049881.2741.12.camel@bharat
In response to Re: Running Postgres Daemons with same data files  ("Uwe C. Schroeder" <uwe@oss4u.com>)
List pgsql-admin
Dear UC,

You are right about the HA solution, but at the same time we are also
implementing a load balancing solution, so we can't give each node its
own separate processing entity and database. We are trying to provide
both HA and load balancing: there are two different processing machines
sharing a common database, so that both always have the latest,
synchronized data files.

You are right that if the NAS is down then everything goes down, but the
probability of the NAS being down is very low, and this way we are able
to provide service in 99% of cases - and if you handle 99% of cases then
you are providing a good service, aren't you?

About the cache-to-file write: if the database writes everything out to
the files after each transaction, then both machines share one
synchronized set of data files; whoever wants them can acquire the lock,
use them, and then unlock them. MySQL has the "flush tables" command to
force the database to write all cached contents to the files. Is there
anything similar in postgres? This will certainly degrade the performance
of my system, but overall it is still much faster since I have two
processing units.
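
For reference, here is roughly what I do in MySQL today and the nearest
thing I could find in postgres - "mydb" is just a placeholder, and I am
not sure CHECKPOINT is a true equivalent, since it takes no lock:

    # MySQL (what I do today), inside ONE client session:
    #   FLUSH TABLES WITH READ LOCK;  -- flush caches, hold a read lock
    #   ... the other machine reads the data files here ...
    #   UNLOCK TABLES;                -- the lock dies with the session
    # postgres: the nearest command I have found is CHECKPOINT, which
    # forces dirty buffers out to the data files -- but it locks
    # nothing and does not make a second postmaster safe
    psql mydb -c "CHECKPOINT;"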

Anyway, if somebody has some other solution for this, please help me.
One option I do have is to run one common postmaster on a single PC and
have the two nodes connect to that server to get the data. If there is
any other, please let me know.
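
For the common-postmaster option, the wiring I picture is roughly this
("db-server", "appdb" and the addresses are only placeholders):

    # postgresql.conf on the database PC (7.x): accept TCP connections
    #   tcpip_socket = true
    # pg_hba.conf on the database PC: let both processing nodes in
    #   host  appdb  all  192.168.0.0  255.255.255.0  md5
    # then from each of the two processing nodes:
    psql -h db-server appdb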

regards
bhartendu

On Wed, 2003-12-10 at 10:38, Uwe C. Schroeder wrote:
>
> On Tuesday 09 December 2003 08:21 pm, Bhartendu Maheshwari wrote:
> > Dear Hal, Frank, Oli and all,
> >
> > I understand what you are all trying to say; I know this is not a good way
> > of designing it, but we are planning to use the database for keeping
> > mobile transactions, and at the same time we need to provide an HA
> > solution. The one solution I derived from the discussion uses one
> > server and multiple clients, but the issue with this is that if the system
> > running the database server goes down, there is no HA or load balancing
> > at all, since without the data the other node can't do anything.
>
> Is the NAS server redundant? If not, it's not HA anyway.
> If there is a problem with the NAS or the network itself (say someone
> accidentally cuts a bunch of network wires) - what do you do?
> I don't see a big difference between the one server, the other server, or the
> network going down. Unless ALL components in your network are redundant and
> have failover capabilities (for example one NAS automatically replacing the
> other one if it fails) you don't have high availability.
>
> What exactly do you mean by "mobile transactions"?
> The easiest way that probably comes close to what you intend to do is to have
> a second server stand by and take over (i.e. mount the NAS storage, start
> a postmaster, and take over the IP address of the original machine)
> the moment the primary server fails. That will still disrupt all queries
> currently in progress, but at least things can be used immediately after the
> failure.
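>
> In sketch form the takeover could look something like this (the paths,
> device name and service address are placeholders, and in real life you
> also need fencing so the old primary can never write to the storage again):
>
>     #!/bin/sh
>     # mount the shared storage the failed primary was using
>     mount nas:/pgdata /mnt/pgdata
>     # start a postmaster on that data directory
>     pg_ctl -D /mnt/pgdata start
>     # grab the primary's service IP as an interface alias
>     ifconfig eth0:0 192.168.0.10 netmask 255.255.255.0 up
>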
> Still, the NAS storage is a huge point of failure. What you failed to realize
> in the list below is that with a network path to remote storage, a lot of
> caches and buffers are involved. I bet you won't be able to tell exactly when
> which piece of data has been physically written to the disk. Even if you close
> the files, some information could still hang around in some buffer until the
> storage array feels it's time to actually write that stuff.
>
> What you are trying to achieve is the classic "replication" approach. Replicate
> the database to a second server and have that one take over if the first one
> fails. Look into the replication projects on gborg - that's more likely to
> give you a workable solution.
>
>
>
> >
> > What I have in mind is the following implementation:-
> >
> > Step 1 :- Mount the data files from the NAS server.
> > Step 2 :- Start postgres with the mounted data.
> > Step 3 :- Lock the data files from one server.
> > Step 4 :- Do the database operations.
> > Step 5 :- Commit to the database files.
> > Step 6 :- Unlock the database files.
> > Step 7 :- Now the other one can do the same.
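> >
> > In shell terms the sequence would be something like this (purely
> > illustrative -- the mkdir lock is just a convention I made up, and I
> > understand from the replies that two postmasters on one data
> > directory cannot actually work safely):
> >
> >     mount nas:/pgdata /mnt/pgdata                 # step 1
> >     pg_ctl -D /mnt/pgdata start                   # step 2
> >     until mkdir /mnt/pgdata/.lock 2>/dev/null     # step 3: crude lock
> >     do sleep 1; done
> >     psql mydb -c "SELECT 1;"                      # steps 4-5: real work
> >     rmdir /mnt/pgdata/.lock                       # step 6
> >     # step 7: the other node repeats the same sequence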
> >
> > Or if anybody has another solution for this, please suggest it. How can I
> > commit the data into the database files and flush the cache with the
> > latest data file contents? Is there any command to refresh both?
> >
> > Thank you, Hal, for this:
> > *************************************************************************
> > If you really, really do need an HA solution, then I'd hunt around for
> > someone to add to your team who has extensive experience in this kind of
> > thing, since it's all too easy otherwise to unwittingly leave in lots of
> > single points of failure.
> > *************************************************************************
> >
> > If you really do have someone who can help me in this regard, I need
> > their help with this issue and would like to derive a common technique
> > that everyone can use.
> >
> > regards
> > bhartendu
> >
> > On Tue, 2003-12-09 at 18:20, Halford Dace wrote:
> > > Hello Bhartendu,
> > >
> > > It happens that I was just talking to Sam on irc, and he's gone to lunch,
> > > so I'll have a shot at this.
> > >
> > > This should never work for any respectable DBMS.  The DBMS is what
> > > manages access to the data files.  The DBMS does the locking and
> > > concurrency control, and state information about transactions in progress
> > > is held within the DBMS.  Since PostgreSQL uses far more sophisticated
> > > transaction mechanisms than table-level locking, it's not as simple as
> > > locking files.  You're pretty much guaranteeing yourself serious data
> > > corruption problems if you try this, since two DBMS instances will try to
> > > maintain independent transaction state information, and end up mangling
> > > each other's data.  Seriously.
> > >
> > > Further, since you're relying on a single storage point, you're not
> > > actually implementing HA at all.  You're also going to have nasty issues
> > > with write synchronisation with NAS.  It's strongly recommended that DBMS
> > > servers run databases only on physically local storage, otherwise there
> > > are too many layers of data shuffling between the DBMS server and the
> > > physical disk.  Data will get lost and corrupted sooner or later.
> > >
> > > I'd suggest that you take a serious look at what your actual availability
> > > requirements are.  What are the potential costs of downtime?  What will
> > > you do if the NAS switch fails, for instance, in the case you're trying
> > > to construct?  It happens.  And most organisations don't carry spare ones
> > > lying around, because they're expensive things to have sitting idle.
> > >
> > > General rules with almost any proper RDBMS you care to name:  Use local
> > > storage, not NAS.  You get a lot more bang for the buck in the
> > > availability stakes by using good-quality, well maintained hardware and
> > > software than by trying to do exotic things with replication (more about
> > > this below).  You can consider using disk mirroring (RAID 1) or RAID 5 in
> > > order to reduce the probability of having to do time-consuming restores.
> > >
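> > > (For instance, with Linux software RAID a simple mirror is one command
> > > -- the device names here are made up:
> > >
> > >     mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
> > >
> > > Hardware RAID controllers achieve the same with less CPU overhead.)
> > >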
> > > Why do you need sophisticated HA?  IMVHO the only people who _really_
> > > need it are people like nuclear power stations, air traffic control (if
> > > only!), hospitals and the like.  It's nice for global businesses too,
> > > which have to provide global business services 24/7.  How were you
> > > planning to do the failover switching?
> > >
> > > In terms of replication, this can be done (with difficulty still) but
> > > always (always!) between two database servers, each of which keeps a
> > > local copy of the data, with something like erserv sitting between them
> > > synchronising transactions.  You might want to look at that.
> > >
> > > But seriously -- most applications don't need HA solutions.  PostgreSQL
> > > running on decent, well-maintained hardware and software is perfectly
> > > capable of achieving 99%+ uptime, which is more than most applications
> > > need.  (And I don't say that idly; we're running it on antique, creaky
> > > SGI Challenges and achieving that kind of uptime.  If we were to put it
> > > on really good new boxes we'd exceed that easily.)
> > >
> > > If you really, really do need an HA solution, then I'd hunt around for
> > > someone to add to your team who has extensive experience in this kind of
> > > thing, since it's all too easy otherwise to unwittingly leave in lots of
> > > single points of failure.  (Have you considered multiple independent
> > > UPSes?  Communications lines?  NAS switches, like I said (and you
> > > shouldn't be using NAS for PG data!), application servers (whatever your
> > > application may be), etc.?)
> > >
> > > Good luck!
> > >
> > > Hal
> > >
> > > On Tue, 9 Dec 2003, Bhartendu Maheshwari wrote:
> > > > Dear Sam,
> > > >
> > > > Thank you for the quick response.
> > > >
> > > > Can you tell me why it's not possible? It is possible with mysql, so
> > > > why not with postgres? Actually I am working on a High Availability
> > > > framework, and this is our need; we can't make a separate database
> > > > server. I want to read/write and then close the file, very simple
> > > > isn't it? So how can I achieve this in postgres? Please help me.
> > > >
> > > > regards
> > > > bhartendu
>
> --
>     UC
>
> --
> Open Source Solutions 4U, LLC    2570 Fleetwood Drive
> Phone:  +1 650 872 2425        San Bruno, CA 94066
> Cell:   +1 650 302 2405        United States
> Fax:    +1 650 872 2417
>


