Re: Running Postgres Daemons with same data files - Mailing list pgsql-admin

From Frank Finner
Subject Re: Running Postgres Daemons with same data files
Date
Msg-id 20031209134705.1edcad79.postgresql@finner.de
In response to Re: Running Postgres Daemons with same data files  (Bhartendu Maheshwari <bhartendum@jataayusoft.com>)
List pgsql-admin
On Tue, 9 Dec 2003 18:03:31 +0530
pgsql-admin-owner@postgresql.org wrote:

> Dear Sam,
>
> Thank you for the quick response.
>
> Can I you tell me why its not possible, it is possible with mysql then
> why not with postgres. Actually I am working on High Avaibility
> framework, and its our need, we can't make a separate database server.
> I want to read/write and then close the file, very simple isn't? So
> how can I achieve in postgres? Please help me.
>
> regards
> bhartendu
>

It seems MySQL will let you try just about anything, even things that are bound to crash a database.

Seriously: what do you think will happen to the database files if you
try to insert/update the same files at the same time from two
different engines? Exactly: this would not just crash the database, it
would shatter it into thousands of pieces, beyond recovery.

Even if you do only SELECTs from one engine, this cannot be synchronized
in any way. And by the way: what reason do you have for another engine
that just does SELECTs? Why not use a client connecting to the one and only
engine, which can do everything (INSERTs, UPDATEs, SELECTs) at (nearly)
the same time? This is not XBase, where you have to look after locking
yourself - let the engine do the dirty work like file locking and
determining who's next to do an operation. Just connect to it with
several clients and work with these.
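
To make the point concrete, here is a minimal sketch in Python with psycopg2 (the DSN, the credentials and the little "log" table are placeholders I made up, not anything from your setup) of several clients doing INSERTs and SELECTs against the one and only engine at the same time, while the server takes care of the locking:

import threading
import psycopg2  # any libpq-based client would do; psycopg2 is just an example

# Placeholder connection string - adjust to your own server.
DSN = "host=localhost dbname=mydb user=myuser password=secret"

def worker(name):
    # Each client gets its own connection to the single engine.
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    # Concurrent INSERTs and SELECTs are safe here: the server
    # coordinates locking and visibility for all connected clients.
    cur.execute("INSERT INTO log (who) VALUES (%s)", (name,))
    conn.commit()
    cur.execute("SELECT count(*) FROM log")
    print(name, "sees", cur.fetchone()[0], "rows")
    cur.close()
    conn.close()

# Make sure the demo table exists (placeholder schema).
setup = psycopg2.connect(DSN)
with setup, setup.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS log (who text)")
setup.close()

threads = [threading.Thread(target=worker, args=("client-%d" % i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Every client opens its own connection; the one engine serializes the writes and keeps the readers consistent, which is exactly what two engines on the same data files could never do.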

Regards, Frank.
