As I see it, there are two reasons: independence and security.
It is more secure because you are treating the "middle tier" as a proxy
server for your data. If you can control which machines may open a
connection to your database, it is naturally more secure than letting
anyone try to open a port on the server.
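As an illustration of restricting connections to the middle tier, PostgreSQL's pg_hba.conf can list only the middle-tier host; the address and authentication method below are placeholders, not a recommendation for any particular setup:

```
# pg_hba.conf (illustrative): only the middle-tier machine may connect.
# TYPE  DATABASE  USER  ADDRESS           METHOD
host    all       all   192.168.1.10/32   md5
# With no broader "host" line, other machines cannot connect at all.
```

Clients then never hold database credentials; only the middle tier does.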
As for independence, it makes it less painful to change a component of
your setup. Say for some reason PostgreSQL makes everyone pay a 10K license
fee to use their product and you decide to change database servers. Instead
of recoding your client piece to talk to a new database, you only need to
recode your middle tier to talk to the new database server. That way, you
update only a handful of installs as opposed to hundreds or thousands of
client installs.
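A minimal sketch of that independence, in Python. The class and method names here are hypothetical, and the backends are stubs rather than real drivers; the point is only that client code depends on the middle-tier interface, so swapping database vendors touches one class, not every client:

```python
class DataStore:
    """Middle-tier interface: clients code against this, never a DB driver."""
    def get_customer(self, customer_id):
        raise NotImplementedError

class PostgresStore(DataStore):
    # In a real deployment this would call a PostgreSQL driver.
    def get_customer(self, customer_id):
        return {"id": customer_id, "source": "postgresql"}

class OtherSqlStore(DataStore):
    # Changing vendors means adding this class; clients are untouched.
    def get_customer(self, customer_id):
        return {"id": customer_id, "source": "other-sql"}

def client_view(store, customer_id):
    # Client code knows only the DataStore interface.
    return "customer %d" % store.get_customer(customer_id)["id"]
```

Either backend can be handed to `client_view` without changing the client at all.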
For the most part, I compare it to object-oriented programming:
fundamentally it is a better way to program, but you aren't forced to do it
that way either.
Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
----- Original Message -----
From: "keke abe" <keke@mac.com>
To: <pgsql-interfaces@postgresql.org>
Sent: Thursday, November 02, 2000 2:03 PM
Subject: Re: [INTERFACES] Connecting remotely - multi tier
> Adam Lang wrote:
>
> > Ok... so if I am writing a distributed application in Windows that will
> > use a PostgreSQL backend, I should have the client interface another
> > "server" application, which will in turn access/retrieve information
> > from the database?
>
> I'd like to know if this kind of layering is mandatory or not. Is it
> really unacceptable to expose the PostgreSQL backend to the rest of the
> world? Is there anything that I should be aware of if I let the clients
> talk to the backend directly?
>
> regards,
> abe