Re: Running Postgres Daemons with same data files - Mailing list pgsql-admin

From Bhartendu Maheshwari
Subject Re: Running Postgres Daemons with same data files
Date
Msg-id 1071119035.4503.2.camel@bharat
In response to Re: Running Postgres Daemons with same data files  (John Gibson <gib@edgate.com>)
Responses Re: Running Postgres Daemons with same data files
Re: Running Postgres Daemons with same data files
List pgsql-admin
Dear All,

I got all your points; thanks for such a great discussion. The last
thing I need to know is how to close the data files and flush the cache
to the data files. How can I do this in PostgreSQL?
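[For reference: the closest PostgreSQL analogue to MySQL's "flush tables" is the CHECKPOINT command, which forces all dirty shared buffers to be written out to the data files. Note that this still does not make it safe for two postmasters to open the same data directory; it only controls when buffered writes reach disk.]

```sql
-- Force an immediate checkpoint: all dirty shared buffers are flushed
-- to the data files and a checkpoint record is written to the WAL.
-- Requires superuser privileges.
CHECKPOINT;
```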

I will also try RAID and the other approaches you all suggested.

regards
bhartendu

On Thu, 2003-12-11 at 00:46, John Gibson wrote:
> Bhartendu,
>
> In my humble opinion, you would be well served if you listened to all
> the nice people on this list.
>
> Use a local disk subsystem with RAID-type storage, and use replication
> to have a second "standby" system available if the first one fails.
>
> The path you seem anxious to tread will get very muddy and slippery.
>
> Good Luck.
>
> ...john
>
> Bhartendu Maheshwari wrote:
>
> >Dear UC,
> >
> >You are right about the HA solution, but at the same time we are also
> >implementing a load-balancing solution, so we cannot have a separate
> >processing entity and database for each node. We are trying to provide
> >both HA and load balancing: two different processing machines sharing
> >a common database, so that both get the latest, synchronized data
> >files.
> >
> >You are right that if the NAS is down then everything goes down, but
> >the probability of the NAS being down is very low, and this way we can
> >provide service in 99% of cases; if you handle 99% of cases, you are
> >providing good service, aren't you?
> >
> >About the cache-to-file write: if the database writes everything to
> >the files after each transaction, then both machines share one
> >synchronized set of data files; whichever wants the data can acquire
> >the lock, use the files, and then unlock them. MySQL has the command
> >"flush tables" to force the database to write all cached contents to
> >the files. Is there anything similar in Postgres? This will certainly
> >degrade the performance of my system, but the system is still much
> >faster overall, since I have two processing units.
> >
> >Anyway, if somebody has some other solution for this, please help me.
> >One idea I have is to run one common postmaster on one PC and have the
> >two nodes connect to that server to get the data. If there is any
> >other way, please let me know.
> >
> >regards
> >bhartendu
> >
> >
> >
>
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
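
[A sketch of the "one common postmaster" idea mentioned above: run a single server on one machine and point both processing nodes at it over TCP/IP. The host name "dbhost", the database/user names, and the node addresses below are hypothetical, and the exact parameter names vary by PostgreSQL version:]

```
# postgresql.conf on the database machine (hypothetical host "dbhost"):
listen_addresses = '*'    # accept TCP/IP connections
                          # (on 7.3.x/7.4.x use: tcpip_socket = true)

# pg_hba.conf: allow the two processing nodes (hypothetical addresses):
host  mydb  myuser  192.168.0.11/32  md5
host  mydb  myuser  192.168.0.12/32  md5
```

Each node then connects with something like `psql -h dbhost -U myuser mydb`. Since a single postmaster owns the data directory, all locking and cache coherency is handled by the server itself, avoiding the shared-data-files problem entirely.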



