Don't do that! Two postmasters cannot safely share the same data directory:
each postmaster assumes exclusive access to the files and caches pages in
its own shared buffers, so the instances never see each other's changes and
will eventually corrupt the cluster. Instead, consider locating the database
itself on one server, and then having the other servers run the client
application and connect to it over the network.
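As a rough sketch (the hostname, addresses, and database name below are
placeholders for your own setup): on 7.2 you would allow remote clients in
pg_hba.conf on the database machine, start the postmaster with TCP/IP
listening enabled, and point the clients at it over the network:

    # on the database server (call it dbserver), allow the client machines
    # in pg_hba.conf, e.g. for a 192.168.0.x LAN:
    #   host  all  192.168.0.0  255.255.255.0  trust
    # then start the postmaster with TCP/IP enabled (-i):
    shell> postmaster -i -D $PGDATA

    # on each application machine, connect over the network instead of
    # mounting the data files:
    shell> psql -h dbserver your_database

That way there is exactly one postmaster touching the files, all caching
happens in one place, and every client sees a consistent view of the data.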
On Tue, 9 Dec 2003, Bhartendu Maheshwari wrote:
> Dear All,
>
> I am working on Linux 8.0 and running PostgreSQL 7.2. I am trying to
> access the same data files from two postgres daemons. That is, there
> are two PCs running postgres and one NAS server where the data files
> are kept. To run postgres on each machine, I first mount the NAS file
> system and then start it like:
>
> shell> postmaster -D $PATH_TO_DATA_FILE
>
> The daemons run fine, but there are problems with synchronization of
> the data files: when I insert some tuples into a table, they are not
> written to the files immediately but kept only in the cache, and when
> I look from the other machine it displays the old tuples.
>
> I want the database to update the data files after every transaction
> or query, and to always read from the data files for SELECT
> operations. How can I do this? My main aim is to avoid caching
> entirely and use postgres like a file system (file operations for
> every operation). I know this will degrade performance, but it is a
> requirement. Or, if there is any other way I can achieve this, please
> tell me.
>
> regards
> bhartendu
--
Sam Barnett-Cormack
Software Developer | Student of Physics & Maths
UK Mirror Service (http://www.mirror.ac.uk) | Lancaster University