Re: [long] Best practice? Web application: single PostgreSQL - Mailing list pgsql-general
From: Chris Travers
Subject: Re: [long] Best practice? Web application: single PostgreSQL
Date:
Msg-id: 00d301c3db28$3e6ea800$7f00053d@winxp
In response to: Best practice? Web application: single PostgreSQL user vs. multiple users ("Keith G. Murphy" <keithmur@mindspring.com>)
List: pgsql-general
Hi Keith, et al.

> I think we might be talking about two different things - unless you are
> equating persistent connections to connection pooling. Perhaps that is
> correct, but let me explain a little more what I was talking about. It
> might be a little off-topic, but I want to be clear in the archives.

Ok, but connection pooling necessitates "account pooling," which is the first approach.

> My particular environment is Linux, Apache, mod_perl and PostgreSQL. The
> user and group that the web server runs as has no permissions to anything.
> When a client's app needs a DB, we create an account in PG for them and
> grant permissions to the web user for the objects it needs access to.

This sounds very much like the "one account per role" compromise suggested earlier.

> a previous connection to *that* client's database (via a global variable).
> If not (or if that connection is "busy"), a new one is created. The
> maximum number of connections would be equal to the maximum number of
> Apache listeners. That's assuming you don't use other application logic to
> control it. For those who aren't familiar with mod_perl, the benefit is
> that not only does your script run as an object in the Apache server, but
> the database connection objects will persist. When you don't do the
> persistent connections, the difference is that your application object in
> Apache will have to open the database connection every time. For local
> connections to PostgreSQL, I really haven't seen that much of a
> difference, but in my "test" case, I'm pretty sure I wasn't implementing
> the persistent-ness of the script correctly and the application was not
> being hit with any significant load.

Ok, I see what you are talking about here. Does this mean, then, that the mod_perl scripts must be able to log into the database as any user, even without the user supplying credentials? I.e., are the credentials for the database itself provided by the user or by configuring the app?
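The quoted setup (an unprivileged web-server account plus per-client object grants) might look roughly like the following in PostgreSQL. All role, database, and table names here are hypothetical, invented for illustration:

```sql
-- Hypothetical sketch of the per-client setup quoted above.
CREATE USER webuser;                        -- account Apache/mod_perl connects as;
                                            -- no permissions on anything by default

CREATE DATABASE client_db;                  -- created when a client's app needs a DB

-- Inside client_db, grant the web user access only to the objects
-- that this client's application actually needs:
GRANT SELECT, INSERT, UPDATE ON orders  TO webuser;
GRANT SELECT                ON catalog  TO webuser;
-- Anything not explicitly granted stays inaccessible from the web server.
```

With this arrangement, a compromised web application can only reach the objects granted to the web user, not the rest of the cluster.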
It sounds to me like you then have to check the username and password against a table in your database, requiring a pre-existing connection. Of course, if all users at a client have the same rights, then your approach is similar to mine. If not, then the db provides little security if the app breaks.

> I thought connection pooling was more generic - any connection from the
> web server/application business logic could be reused for any db
> connection. Please correct me if I'm wrong here.

You are right, but I generally think that the account pooling approach is mostly important if you are also pooling connections. At least with one session per user, you can do better enforcement. For example, here is how my largest application handles it.

HERMES (http://hermesweb.sourceforge.net) calls a PHP script (using its own PAM model) to authenticate the user based on username and password, though I should change this to make it more general. Currently two modes are supported on PostgreSQL: DB Native and Shared Account.

DB Native is the preferred access method. In this mode, the username and password supplied to the application correspond to the database account, and authentication is handled by logging into the database with those credentials. Permissions are then enforced by the database-level permissions. The only drawback to this approach is that HERMES uses its own permission catalogs, which allow administrators to assign consistent permissions to related groups of tables. In this mode, those permissions need to be compiled, or activated, into database permissions before they take effect, but users may have whatever permissions they might have (including create table and temp permissions).

In shared account mode, things are handled differently (requiring only 2 accounts). The user provides a username and password.
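A rough sketch of what "compiling" catalog permissions into native database permissions could look like. The `hermes_permissions` table and its columns are my own invention for this sketch, not the actual HERMES catalog layout:

```sql
-- Hypothetical illustration of activating application-catalog permissions
-- as native GRANTs; names and catalog structure are invented.
CREATE OR REPLACE FUNCTION activate_permissions() RETURNS void AS $$
DECLARE
    p RECORD;
BEGIN
    FOR p IN
        SELECT username, table_name, privilege   -- e.g. 'SELECT', 'INSERT'
        FROM hermes_permissions
    LOOP
        EXECUTE 'GRANT ' || p.privilege
             || ' ON '   || quote_ident(p.table_name)
             || ' TO '   || quote_ident(p.username);
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

After such a compile step, enforcement is entirely the database's job: the application catalogs only describe the intended permissions, and the GRANTs make them effective.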
The username and password are passed to the login function, which logs in with a shared account and runs the authenticate(username, password) stored procedure (which runs as security definer). This function creates a temporary table of authenticated users for the current connection (allowing a su-like function, though that is not currently supported). The shared user does NOT have permission to write to this table.

Permissions are then enforced via triggers, which check current_user against a list of db users exempted from trigger-based permissions enforcement. The authenticated username is then used to check insert, update, or delete permissions directly against the HERMES permission catalogs.

Select permissions are enforced by moving the tables (when installing the shared hosting layer) into a shadow schema and revoking the shared user's permission to select from them. Views are then created in place of the tables; these contain oids as well as the normal columns, and perform a check similar to the one the insert/update/delete triggers do. Update, insert, and delete functions pass changes back to the underlying table either by oid or by primary key (unsupported at the moment, but being worked on).

The major restrictions here are a performance hit, and the fact that the shared user must not have create table or temp permissions in the current database. However, since a list of db users which bypass the trigger permissions is maintained, automated backup tools can still be used. The other user MUST have temp permissions (and preferably create table perms too).

Of course, this choice of approaches also requires that user creation, permissions activation, etc. are all handled by stored procedures, though most of the logic will probably be moved back into triggers.

Best Wishes,
Chris Travers
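The shared-account machinery described above can be sketched as follows. This is not the actual HERMES code: the password check, the catalog tables (`users_catalog`, `exempt_users`, `hermes_permissions`), and all names are invented to show the shape of the technique:

```sql
-- Hypothetical sketch of shared-account enforcement; not the HERMES source.

-- Runs as security definer so it can read the protected credential store
-- and record who authenticated on this connection.
CREATE FUNCTION authenticate(uname text, pass text) RETURNS boolean AS $$
BEGIN
    -- users_catalog stands in for the real credential store
    IF NOT EXISTS (SELECT 1 FROM users_catalog
                   WHERE username = uname AND password = pass) THEN
        RETURN false;
    END IF;
    -- Temporary table visible only to this connection; the shared
    -- login role is deliberately NOT granted write access to it.
    CREATE TEMP TABLE IF NOT EXISTS authenticated_users (username text);
    INSERT INTO authenticated_users VALUES (uname);
    RETURN true;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

-- Trigger-side check: listed db users (e.g. backup roles) bypass
-- enforcement; everyone else is checked against the permission catalogs.
CREATE FUNCTION check_write_perm() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM exempt_users
               WHERE username = current_user) THEN
        RETURN NEW;    -- exempt from trigger-based enforcement
    END IF;
    IF NOT EXISTS (SELECT 1
                   FROM authenticated_users a
                   JOIN hermes_permissions p ON p.username = a.username
                   WHERE p.table_name = TG_TABLE_NAME
                     AND p.privilege  = TG_OP) THEN   -- 'INSERT'/'UPDATE'/'DELETE'
        RAISE EXCEPTION 'permission denied by application catalogs';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

The select-side enforcement follows the same pattern: the shadow-schema views would join against `authenticated_users` and the catalogs rather than relying on native GRANTs to the shared role.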