Thread: Best practice? Web application: single PostgreSQL user vs. multiple users

Best practice? Web application: single PostgreSQL user vs. multiple users

From
"Keith G. Murphy"
Date:
I'm trying to get a feel for what most people are doing or consider best
practice.

Given a mod_perl application talking to a PostgreSQL database on the
same host, where different users are logging onto the web server using
LDAP for authentication, do most people

1) have the web server connecting to the database using its own user
account (possibly through ident), and controlling access to different
database entities strictly through the application itself

2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.

Obviously, (2) leads to more database connections, and you still have to
have the application do some work in terms of which forms are available
to which users, etc.  But I'm a little worried about whether it's best
security practice.
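
(For concreteness, the two setups might look roughly like this in
pg_hba.conf -- the names are made up and the exact syntax varies a bit by
version:)

    # 1) single application account, identified by the web server's OS user
    local   appdb   webapp   ident   sameuser

    # 2) every person has a matching PostgreSQL account, with the password
    #    checked against LDAP through PAM (e.g. pam_ldap)
    local   appdb   all      pam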



Re: Best practice? Web application: single PostgreSQL

From
"John Sidney-Woollett"
Date:
Keith G. Murphy said:
> 2) have the web server connecting to the database actually using the
> user's account (possibly using LDAP authentication against PostgreSQL),
> and controlling access to different database entities through GRANT, etc.

My experience with java web/app servers indicates that for most setups
using a pool of connections is preferable to using a single connection per
connected user - it scales much better.

What you could consider is one or more pools which map to the "roles" that
your (web) app supports. For example, if a user needs "minimal rights"
access to db resources, then your cgi (request handler) accesses the data
using a connection from the "minimal rights" connection pool. A user
needing "greater rights" would have the cgi access the database from the
"greater rights" pool.

Normally, I place the database functions/tables/objects into different
logical schemas, then I create one or more specific users (for the web/app
server only), each of which equates to a logical role, and I grant specific
rights on the different schema objects to those users.
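Roughly, with made-up names -- untested, just to illustrate the idea:

    -- one schema per functional area, one user per "role"
    CREATE SCHEMA orders;
    CREATE USER app_readonly WITH PASSWORD 'secret1';
    CREATE USER app_editor   WITH PASSWORD 'secret2';

    CREATE TABLE orders.invoices (id serial PRIMARY KEY, amount numeric);

    GRANT USAGE ON SCHEMA orders TO app_readonly, app_editor;
    GRANT SELECT ON orders.invoices TO app_readonly;
    GRANT SELECT, INSERT, UPDATE ON orders.invoices TO app_editor;

    -- the "minimal rights" pool then connects as app_readonly,
    -- the "greater rights" pool as app_editor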

Your mileage may vary.

John Sidney-Woollett

Re: Best practice? Web application: single PostgreSQL

From
"Keith G. Murphy"
Date:
John Sidney-Woollett wrote:

> Keith G. Murphy said:
>
>>2) have the web server connecting to the database actually using the
>>user's account (possibly using LDAP authentication against PostgreSQL),
>>and controlling access to different database entities through GRANT, etc.
>
>
> My experience with java web/app servers indicates that for most setups
> using a pool of connections is preferable to using a single connection per
> connected user - it scales much better.
>
> What you could consider is one or more pools which map to the "roles" that
> your (web) app supports. For example, if a user needs "minimal rights"
> access to db resources, then your cgi (request handler) accesses the data
> using a connection from the "minimal rights" connection pool. A user
> needing "greater rights" would have the cgi access the database from the
> "greater rights" pool.
>
That sounds like an excellent compromise.  How do you typically handle
the mechanics of authentication from web server to PostgreSQL on the
connect, using this scheme?
--
Why waste time learning when ignorance is instantaneous?
         -- Hobbes


Re: Best practice? Web application: single PostgreSQL

From
"John Sidney-Woollett"
Date:
Keith G. Murphy said:
> That sounds like an excellent compromise.  How do you typically handle
> the mechanics of authentication from web server to PostgreSQL on the
> connect, using this scheme?

Sorry but I can't help you out here, I'm too much of a newbie with
Postgres - I was hoping that someone else would answer your part 1! :)

John

Re: Best practice? Web application: single PostgreSQL

From
"Keith G. Murphy"
Date:
John Sidney-Woollett wrote:

> Keith G. Murphy said:
>
>>That sounds like an excellent compromise.  How do you typically handle
>>the mechanics of authentication from web server to PostgreSQL on the
>>connect, using this scheme?
>
>
> Sorry but I can't help you out here, I'm too much of a newbie with
> Postgres - I was hoping that someone else would answer your part 1! :)
>
> John
>
Perhaps I can answer my own question.  I could use ident and a map that
lists the web server username as able to map to the different "role"
usernames.  Unfortunately, that still would allow the web server account
to "fake" role names.

If the "real" PostgreSQL accounts do not coincide to the
browser-authenticated usernames, I don't see a good way to use PAM/LDAP
or another mechanism to require that PostgreSQL itself makes sure that
the given username and password are valid.  Not saying that's a big
problem, but...

Hmmm, mightn't it be kind of nice if there were PAM or krb5 maps in
addition to ident maps?
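
(Something like this, with made-up names and 7.x-era syntax -- and the
weakness shows up right away: nothing stops the wwwrun account from picking
whichever mapped role it likes:)

    # pg_ident.conf
    # MAPNAME    SYSTEM-USERNAME    PG-USERNAME
    webroles     wwwrun             app_readonly
    webroles     wwwrun             app_editor

    # pg_hba.conf
    local   appdb   all   ident   webroles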
--
Why waste time learning when ignorance is instantaneous?
         -- Hobbes


Re: Best practice? Web application: single PostgreSQL

From
"John Sidney-Woollett"
Date:
Keith G. Murphy said:
> Perhaps I can answer my own question.  I could use ident and a map that
> lists the web server username as able to map to the different "role"
> usernames.

Someone else also mentioned, and I personally agree, that it's better to
authenticate in the application layer (using whatever technology takes
your fancy), and then use the webserver's generic/pooled connection to
communicate with the database.

Your user and role mapping info could be stored within the database, or
accessed from an LDAP server, or some such.

> Unfortunately, that still would allow the web server account
> to "fake" role names.

Make the application layer robust and secure and it may not be so much of
a problem.

John

Re: Best practice? Web application: single PostgreSQL

From
"scott.marlowe"
Date:
On Tue, 13 Jan 2004, Keith G. Murphy wrote:

> I'm trying to get a feel for what most people are doing or consider best
> practice.
>
> Given a mod_perl application talking to a PostgreSQL database on the
> same host, where different users are logging onto the web server using
> LDAP for authentication, do most people
>
> 1) have the web server connecting to the database using its own user
> account (possibly through ident), and controlling access to different
> database entities strictly through the application itself
>
> 2) have the web server connecting to the database actually using the
> user's account (possibly using LDAP authentication against PostgreSQL),
> and controlling access to different database entities through GRANT, etc.
>
> Obviously, (2) leads to more database connections, and you still have to
> have the application do some work in terms of which forms are available
> to which users, etc.  But I'm a little worried about whether it's best
> security practice.

I do 1.  different language (PHP) same basic thing though.  All security
is handled by ACLS I build myself in Postgresql and interrogate via my own
application.
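
(Roughly the shape of it -- table and column names here are just
illustrative:)

    CREATE TABLE app_users (
        userid   serial PRIMARY KEY,
        username text UNIQUE NOT NULL      -- the LDAP login name
    );
    CREATE TABLE app_acls (
        userid   int REFERENCES app_users,
        resource text,                     -- a form, report, page, ...
        perm     text                      -- 'read', 'write', ...
    );

    -- the application asks something like this on each request:
    SELECT 1
      FROM app_acls a JOIN app_users u USING (userid)
     WHERE u.username = 'billybob'
       AND a.resource = 'sales_report' AND a.perm = 'read';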


Re: Best practice? Web application: single PostgreSQL

From
Tom Lane
Date:
"Keith G. Murphy" <keithmur@mindspring.com> writes:
> Hmmm, mightn't it be kind of nice if there were PAM or krb5 maps in
> addition to ident maps?

ISTM the whole point of PAM is that you plug in your desired security
policy outside of the application.  You shouldn't be asking for more
security frammishes from Postgres, you should be off coding a PAM module
that does things exactly the way you want.

            regards, tom lane

Re: Best practice? Web application: single PostgreSQL

From
"Keith G. Murphy"
Date:
Tom Lane wrote:

> "Keith G. Murphy" <keithmur@mindspring.com> writes:
>
>>Hmmm, mightn't it be kind of nice if there were PAM or krb5 maps in
>>addition to ident maps?
>
>
> ISTM the whole point of PAM is that you plug in your desired security
> policy outside of the application.  You shouldn't be asking for more
> security frammishes from Postgres, you should be off coding a PAM module
> that does things exactly the way you want.
>

I believe I see what you mean.  Given the original premise, I imagine
you could have the PAM module do something like:

(1) Authenticate via LDAP using the user's username and password

(2) Look up the "role" name (real PostgreSQL username) via LDAP, using
the username

(3) Tell PostgreSQL that the user is authenticated under that role name.

I really hadn't thought much about how the PAM module might work.
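
(Thinking about it a bit more: as far as I can tell, the stock pieces only
get you step (1).  PostgreSQL's pam method hands the client-supplied
username and password to a PAM stack such as the hypothetical one below, but
it decides the database username from the startup packet, so the module
can't substitute a "role" name -- steps (2) and (3) would need something
custom.)

    # /etc/pam.d/postgresql  (assuming pam_ldap is installed)
    auth     required   pam_ldap.so
    account  required   pam_ldap.so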
--
Why waste time learning when ignorance is instantaneous?
         -- Hobbes


Re: Best practice? Web application: single PostgreSQL

From
Bruno Wolff III
Date:
On Tue, Jan 13, 2004 at 11:15:30 -0600,
  "Keith G. Murphy" <keithmur@mindspring.com> wrote:
> Perhaps I can answer my own question.  I could use ident and a map that
> lists the web server username as able to map to the different "role"
> usernames.  Unfortunately, that still would allow the web server account
> to "fake" role names.

If you can't trust the web server account then you probably want to use
a system where cgi-bin programs are run as different users.

If you have untrusted users who can supply their own cgi-bin programs
then using a common uid which all cgi-bin programs run under isn't
secure.

> If the "real" PostgreSQL accounts do not coincide to the
> browser-authenticated usernames, I don't see a good way to use PAM/LDAP
> or another mechanism to require that PostgreSQL itself makes sure that
> the given username and password are valid.  Not saying that's a big
> problem, but...

I don't think using information received from the browser to authenticate
against the postgres server works when you can't be assured that the
cgi-bin program doing the checking is trustworthy.

Re: Best practice? Web application: single PostgreSQL

From
"scott.marlowe"
Date:
On Tue, 13 Jan 2004, Keith G. Murphy wrote:

> John Sidney-Woollett wrote:
>
> > Keith G. Murphy said:
> >
> >>2) have the web server connecting to the database actually using the
> >>user's account (possibly using LDAP authentication against PostgreSQL),
> >>and controlling access to different database entities through GRANT, etc.
> >
> >
> > My experience with java web/app servers indicates that for most setups
> > using a pool of connections is preferable to using a single connection per
> > connected user - it scales much better.
> >
> > What you could consider is one or more pools which map to the "roles" that
> > your (web) app supports. For example, if a user needs "minimal rights"
> > access to db resources, then your cgi (request handler) accesses the data
> > using a connection from the "minimal rights" connection pool. A user
> > needing "greater rights" would have the cgi access the database from the
> > "greater rights" pool.
> >
> That sounds like an excellent compromise.  How do you typically handle
> the mechanics of authentication from web server to PostgreSQL on the
> connect, using this scheme?

I create individual databases for unrelated projects (like say, phonebook
and sales_projections) and then connect to each database as a different
artificial user, often named for the database.  Then I usually wrap that in
an include file I just add at the top of each page that connects and has
the password (on systems using password authentication) or that connects
without a password if I'm on a system using trust.
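
(So, roughly -- names invented, and the GRANTs run inside each project's
database:)

    CREATE DATABASE phonebook;
    CREATE USER phonebook WITH PASSWORD 'secret';
    -- then, connected to the phonebook database:
    GRANT SELECT, INSERT, UPDATE ON entries TO phonebook;

    # pg_hba.conf: either password auth or trust for local connections
    local   phonebook   phonebook   md5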

Then, any access by users is handled by ACLs I just build in a table in
that database.

We authenticate with auth_ldap, so we always know the user's name / groups
etc...


Re: Best practice? Web application: single PostgreSQL

From
"scott.marlowe"
Date:
On Tue, 13 Jan 2004, Keith G. Murphy wrote:

> John Sidney-Woollett wrote:
>
> > Keith G. Murphy said:
> >
> >>2) have the web server connecting to the database actually using the
> >>user's account (possibly using LDAP authentication against PostgreSQL),
> >>and controlling access to different database entities through GRANT, etc.
> >
> >
> > My experience with java web/app servers indicates that for most setups
> > using a pool of connections is preferable to using a single connection per
> > connected user - it scales much better.
> >
> > What you could consider is one or more pools which map to the "roles" that
> > your (web) app supports. For example, if a user needs "minimal rights"
> > access to db resources, then your cgi (request handler) accesses the data
> > using a connection from the "minimal rights" connection pool. A user
> > needing "greater rights" would have the cgi access the database from the
> > "greater rights" pool.
> >
> That sounds like an excellent compromise.  How do you typically handle
> the mechanics of authentication from web server to PostgreSQL on the
> connect, using this scheme?

Just an addition, we do all our groups in LDAP too.  Generally ACLs point
back to groups, not users.  That way if Billy Bob moves from finance to HR
we just change his group memberships, not all the ACLs in all the
databases.


Re: Best practice? Web application: single PostgreSQL

From
netadmin@vcsn.com
Date:
On Tue, 13 Jan 2004, Keith G. Murphy wrote:

> I'm trying to get a feel for what most people are doing or consider best
> practice.
>
> Given a mod_perl application talking to a PostgreSQL database on the
> same host, where different users are logging onto the web server using
> LDAP for authentication, do most people
>
> 1) have the web server connecting to the database using its own user
> account (possibly through ident), and controlling access to different
> database entities strictly through the application itself
>
> 2) have the web server connecting to the database actually using the
> user's account (possibly using LDAP authentication against PostgreSQL),
> and controlling access to different database entities through GRANT, etc.
>
> Obviously, (2) leads to more database connections, and you still have to
> have the application do some work in terms of which forms are available
> to which users, etc.  But I'm a little worried about whether it's best
> security practice.

I do #1- most connections are not persistent though I have done those
before as well.  Security-wise I've been reviewing the pros and cons
of this and so far I really can't make much of an argument for #2.
Just the opposite in fact.  The one thing I will be doing though,
for more security, is to create a special webuser for that client.

---
Keith C. Perry
Director of Networks & Applications
Visions Communications Support Network, Inc.
netadmin@vcsn.com
http://vcsn.com
---

Re: Best practice? Web application: single PostgreSQL

From
Keith Murphy
Date:
scott.marlowe wrote:

> On Tue, 13 Jan 2004, Keith G. Murphy wrote:
>
>
>>I'm trying to get a feel for what most people are doing or consider best
>>practice.
>>
>>Given a mod_perl application talking to a PostgreSQL database on the
>>same host, where different users are logging onto the web server using
>>LDAP for authentication, do most people
>>
>>1) have the web server connecting to the database using its own user
>>account (possibly through ident), and controlling access to different
>>database entities strictly through the application itself
>>
>>2) have the web server connecting to the database actually using the
>>user's account (possibly using LDAP authentication against PostgreSQL),
>>and controlling access to different database entities through GRANT, etc.
>>
>>Obviously, (2) leads to more database connections, and you still have to
>>have the application do some work in terms of which forms are available
>>to which users, etc.  But I'm a little worried about whether it's best
>>security practice.
>
>
> I do 1.  different language (PHP) same basic thing though.  All security
> is handled by ACLS I build myself in Postgresql and interrogate via my own
> application.
>
Thanks to all for the discussion so far.  I have been doing option 1 so
far as well, but was a bit uncomfortable allowing the web server account
to have complete control over the database, and wondered what the
alternatives might be.

At some point, I may try rolling my own PAM module (as Tom Lane
suggested) that uses the user's browser-authenticated username and
password to map to a PostgreSQL username that constitutes a "role"
(assuming that's possible).  The benefit I can see to such a scheme is
that it means anyone who might manage to run a process under the web
server's account would still have to have the appropriate username and
password to do anything to the database.  And the limited number of
"role" PostgreSQL usernames would mean fewer connections than using the
original usernames, as John Sidney-Woollett pointed out.



Re: Best practice? Web application: single PostgreSQL

From
Alex Satrapa
Date:
Keith Murphy wrote:
> At some point, I may try rolling my own PAM module (as Tom Lane
> suggested) that uses the user's browser-authenticated username and
> password to map to a PostgreSQL username that constitutes a "role"
> (assuming that's possible).

One option is to add an extra layer of indirection: the web server
interacts with a "transaction server" through eg: XML-RPC or CORBA.

The list of transactions ("interactions") you can perform is controlled,
each transaction can be logged, and each transaction handler can have
its own access rights to the postgresql database.

The transaction server can be hardened by only allowing access from the
web server. This does mean that anyone breaking into your web server can
potentially alter data by interacting with the transaction server - but
only to the extent allowed by the existing transactions. They can't make
wholesale changes to your database such as "select * from
credit_card_details; truncate invoices; drop table accounts_receivable;"
etc.

Regards
Alex Satrapa


Re: Best practice? Web application: single PostgreSQL

From
Shridhar Daithankar
Date:
On Tuesday 13 January 2004 22:13, Keith G. Murphy wrote:
> John Sidney-Woollett wrote:
> > What you could consider is one or more pools which map to the "roles"
> > that your (web) app supports. For example, if a user needs "minimal
> > rights" access to db resources, then your cgi (request handler) accesses
> > the data using a connection from the "minimal rights" connection pool. A
> > user needing "greater rights" would have the cgi access the database from
> > the "greater rights" pool.
>
> That sounds like an excellent compromise.  How do you typically handle
> the mechanics of authentication from web server to PostgreSQL on the
> connect, using this scheme?

Umm.. I doubt role-specific pooling is required. You can use SET SESSION
AUTHORIZATION to switch the authorization of any connection.
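
(For example -- assuming the pooled connection was opened as a superuser and
that a user and table like these already exist:)

    SET SESSION AUTHORIZATION app_readonly;   -- take on that user's rights
    SELECT * FROM orders.invoices;            -- checked against app_readonly's GRANTs
    RESET SESSION AUTHORIZATION;              -- back to superuser before handing
                                              -- the connection back to the pool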

Of course you need to connect to the database as a superuser, though. That is
the real downside. If your webapp breaks, the database cannot provide any
security.

Shridhar


Re: Best practice? Web application: single PostgreSQL user vs. multiple users

From
"Chris Travers"
Date:
Hi Keith and others,

Personally I am rather humble regarding my ability to write unassailable
programs from a security perspective, so I tend to use individual database
accounts for individual users.  I know that under certain environments this
may not scale well and connection pooling may be required, but generally I
am not comfortable with that approach.

Basically my logic is that the most important security is where the
information is stored.  For this reason I try to "push security back"
towards the database manager as far as possible.  Then everything else can
be given as few permissions as possible (for example, the web application
itself has no access to the database apart from the user).  Usernames and
passwords can be stored separately in order to reduce the cost of compromise
(f. ex. HERMES stores the username in a cookie but the password as a session
variable), etc.  These strategies are simply not possible under the
connection pooling scenario.

For me it comes down to the following question:  What is the cost of
enforcing security yourself?  If there is a security flaw in PostgreSQL,
there is NOTHING that will keep you safe, but why multiply single points of
security failure?

However you have another problem in the scenario you describe-- that is that
the actual authentication occurs via LDAP.  This changes the assumptions and
security environment a bit.  If I had complete control over such an
environment, I would do one of the following things:

1:  Use Kerberos to authenticate and LDAP to store profiles.  LDAP and
PostgreSQL authentication are now handled by Kerberos and you can pass the
authentication token via the web app in mod_perl (not sure how to do it in
PHP though).  For many browsers, this would allow for single signon and
transparent logins.

2:  Have multiple accounts for different roles and store these in the LDAP
user profiles.  This creates a large number of headaches (are the role
fields properly secured, for example).

3:  Finally you could have a user profiles table which contained the allowed
accounts to use.
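
Purely as an illustration, that table might be nothing more than:

    -- which database account each authenticated person may connect as
    CREATE TABLE user_profiles (
        ldap_uid   text PRIMARY KEY,
        db_account text NOT NULL       -- 'app_readonly', 'app_editor', ...
    );

with the app looking up db_account after the LDAP login and connecting as
that user.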

Best Wishes,
Chris Travers


Re: Best practice? Web application: single PostgreSQL

From
"Chris Travers"
Date:
> I do #1- most connections are not persistent though I have done those
> before as well.  Security-wise I've been reviewing the pros and cons
> of this and so far I really can't make much of an argument for #2.
> Just the opposite in fact.  The one thing I will be doing though,
> for more security, is to create a special webuser for that client.
>

My concern with connection pooling is that the application itself must run
with permission to do anything that any user may do in the application.  If
you have a way of using postgresql native accounts, you can actually force
the application to have no rights to the database at all unless the
credentials are supplied by a user.  The application then has NO rights to
anything that the user doesn't, and this may allow for better security.

The real drawback for multiple accounts is that each account needs to be
maintained.  In environments where this is an issue, I usually create
functions to do this so that my information does not get out of sync.
However, I can imagine this being a concern, security wise.
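
(A sketch of the kind of helper I mean -- not production code; it assumes the
application keeps its own app_users table, and the function owner must be
allowed to create users:)

    CREATE TABLE app_users (username text PRIMARY KEY);

    CREATE FUNCTION add_app_user(text, text) RETURNS void AS '
    BEGIN
        -- create the database account and the application row in one step,
        -- so the two can never drift out of sync
        EXECUTE ''CREATE USER ''    || quote_ident($1)
             || '' WITH PASSWORD '' || quote_literal($2);
        INSERT INTO app_users (username) VALUES ($1);
        RETURN;
    END;
    ' LANGUAGE plpgsql SECURITY DEFINER;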

Best Wishes,
Chris Travers


Re: Best practice? Web application: single PostgreSQL

From
"Keith C. Perry"
Date:
Quoting Chris Travers <chris@travelamericas.com>:

> > I do #1- most connections are not persistent though I have done those
> > before as well.  Security-wise I've been reviewing the pros and cons
> > of this and so far I really can't make much of an argument for #2.
> > Just the opposite in fact.  The one thing I will be doing though,
> > for more security, is to create a special webuser for that client.
> >
>
> My concern with connection pooling is that the application itself must run
> with permission to do anything that any user may do in the application.  If
> you have a way of using postgresql native accounts, you can actually force
> the application to have no rights to the database at all unless the
> credentials are supplied by a user.  The application then has NO rights to
> anything that the user doesn't, and this may allow for better security.

I think we might be talking about two different things- unless you are equating
persistent connections with connection pooling.  Perhaps that is correct but let
me explain a little more what I was talking about.  It might be a little
off-topic but I want to be clear in the archives.

My particular environment is Linux, Apache, mod_perl and PostgreSQL.  The user
and group that the web server runs as has no permissions to anything.  When a
client's app needs a DB, we create an account in PG for them and grant
permissions to the web user for the objects it needs access to.

In the mod_perl script, if I do persistent, the first thing that is done is a
check for a previous connection to **that** client's database (via a global
variable).  If not (or if that connection is "busy") a new one is created.  The
maximum number of connections would be equal to the maximum number of Apache
listeners.  That's assuming you don't use other application logic to control
it.  For those who aren't familiar with mod_perl, the benefit is that not
only does your script run as an object in the Apache server but the database
connection objects will persist.  When you don't do the persistent connections,
the difference is that your application object in Apache will have to open the
database connection every time.  For local connections to PostgreSQL, I really
haven't seen that much of a difference but in my "test" case, I'm
pretty sure I wasn't implementing the persistent-ness of the script correctly
and the application was not being hit with any significant load.

I thought connection pooling was more generic- any connection from the web
server/application business logic could be reused for any db connection.  Please
correct me if I'm wrong here.

Depending on the application, the
authorization/authentication would be wrapped in an SSL transport (HTTPS instead
of HTTP).  After that each connection is "tracked" (via the Apache
cookie-tracking and the pgAuth module I wrote) by always checking the validity
of the client's browser cookie.  The authorization pairs for the client's
application are stored in a user table in their database and the cookie gets
linked to that and is valid only for that browser session (i.e. as long as the
browser instance is open and you did not "log out").  No direct information
about the database is revealed to the client interface (web browser) at all.

> The real drawback for multiple accounts is that each account needs to be
> maintained.  In environments where this is an issue, I usually create
> functions to do this so that my information does not get out of sync.
> However, I can imagine this being a concern, security wise.

It's the pros and cons of PG having its own accounts.  Definitely a security
benefit, which generally means a headache somewhere else- in this case double
account maintenance unless, like you said, you have a way to keep things in
sync.  For something secure, I don't mind the headache...  Advil is my friend  :)

> Best Wishes,
> Chris Travers
>


--
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

____________________________________
This email account is being hosted by:
VCSN, Inc : http://vcsn.com

Re: Best practice? Web application: single PostgreSQL

From
Martin Marques
Date:
Quoting "scott.marlowe" <scott.marlowe@ihs.com>:

> > 1) have the web server connecting to the database using its own user
> > account (possibly through ident), and controlling access to different
> > database entities strictly through the application itself
[snip]

> I do 1.  different language (PHP) same basic thing though.  All security
> is handled by ACLS I build myself in Postgresql and interrogate via my own
> application.

In the case of PHP there is some very nice ACL stuff in PEAR, like PEAR::Auth.

I personally do all the auth stuff by myself (table with users, table with
functions, table with permissions, etc.), just like Scott. :-)

--
select 'mmarques' || '@' || 'unl.edu.ar' AS email;
---------------------------------------------------------
Martín Marqués          |   Programmer, DBA
Centro de Telemática    |     Administrator
               Universidad Nacional
                    del Litoral
---------------------------------------------------------

Re: [long] Best practice? Web application: single PostgreSQL

From
"Chris Travers"
Date:
Hi Keith, et. al
> I think we might be talking about two different things- unless you are
> equating persistent connections with connection pooling.  Perhaps that is
> correct but let me explain a little more what I was talking about.  It might
> be a little off-topic but I want to be clear in the archives.
>
Ok, but connection pooling necessitates "account pooling" which is the first
approach.

> My particular environment is Linux, Apache, mod_perl and PostgreSQL.  The
> user and group that the web server runs as has no permissions to anything.
> When a client's app needs a DB, we create an account in PG for them and grant
> permissions to the web user for the objects it needs access to.

This sounds very much like the "one account per role" compromise suggested
earlier.

>  a previous connection to **that** client's database (via a global
> variable).  If not (or if that connection is "busy") a new one is created.
> The maximum number of connections would be equal to the maximum number of
> Apache listeners.  That's assuming you don't use other application logic to
> control it.  For those who aren't familiar with mod_perl, the benefit is that
> not only does your script run as an object in the Apache server but the
> database connection objects will persist.  When you don't do the persistent
> connections, the difference is that your application object in Apache will
> have to open the database connection every time.  For local connections to
> PostgreSQL, I really haven't seen that much of a difference but in my "test"
> case, I'm pretty sure I wasn't implementing the persistent-ness of the script
> correctly and the application was not being hit with any significant load.
>
Ok, I see what you are talking about here.

Does this mean, then, that the mod_perl scripts must have access to be able
to log into the database as any user even without the user supplying
credentials? i.e. are the credentials for the database itself provided by
the user or by configuring the app?  It sounds to me like you then have to
check the username and password against a table in your database, requiring
a pre-existing connection.

Of course if all users at a client have the same rights, then your approach
is similar to mine.  If not, then the db provides little security if the app
breaks.

> I thought connection pooling was more generic- any connection from the web
> server/application business logic could be reused for any db connection.
> Please correct me if I'm wrong here.
>
You are right, but I generally think that the account pooling approach is
mostly important if you are also pooling connections.  At least with one
session per user, you can do better enforcement.  For example, here is how
my largest application handles it.

HERMES (http://hermesweb.sourceforge.net) calls a PHP script (using its own
PAM model) to authenticate the user based on username and password, though I
should change this to make it more general.  Currently two modes are
supported on PostgreSQL:  DB Native and Shared Account.

DB Native is the preferred access method.  In this method, the username and
password supplied to the application correspond to the database account.
The authentication is handled by logging into the database with the username
and password supplied.  Permissions are then enforced by the database-level
permissions.  The only drawback to this approach is the fact that
HERMES uses its own permission catalogs that allow administrators to assign
consistent permissions to related groups of tables.  In this mode, these
permissions need to be compiled, or activated, into database permissions
before they take effect, but users may have whatever permissions they might
have (including create table and temp permissions).

In shared account mode, things are handled differently (requiring only 2
accounts).  The user provides a username and password.  The username and
password are passed to the login function, which logs in with a shared
account and runs the authenticate(username, password) stored procedure
(which runs as security definer).  This function creates a temporary table
of authenticated users from the current connection (allowing a su-like
function, but not currently supported).  The shared user does NOT have
permission to write to this table.

Permissions are then enforced via triggers which check current_user against
a list of db users exempted from trigger-based permissions enforcement.  The
authenticated username is then used to check insert, update, or delete
permissions directly against the HERMES permission catalogs.

Select Permissions are enforced by moving the tables (when installing the
shared hosting layer) into a shadow schema and revoking permission from the
shared user to select from these tables.  Views are then created in place of
the tables which contain oids as well as the normal columns, and perform a
similar check to the ones that the insert/update/delete triggers do.
Update, insert, and delete functions pass back to the previous table either
by oid or by primary key (unsupported at the moment, but being worked on).
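
(Not the actual HERMES code -- just a rough sketch of the shape of this, with
invented names; may_select() stands in for the real check against the
permission catalogs and the authenticated-users temp table:)

    CREATE USER webshared;                       -- the shared account

    -- stand-in for the real permission-catalog check
    CREATE FUNCTION may_select(text) RETURNS boolean AS
        'SELECT true' LANGUAGE sql;

    -- the real table lives in a schema the shared account cannot touch
    CREATE SCHEMA shadow;
    CREATE TABLE shadow.invoices (id serial PRIMARY KEY, amount numeric);
    REVOKE ALL ON SCHEMA shadow FROM webshared;

    -- the view the shared account actually queries; it runs with the view
    -- owner's rights and re-applies the application-level check
    CREATE VIEW invoices AS
        SELECT id, amount FROM shadow.invoices WHERE may_select('invoices');

    GRANT SELECT ON invoices TO webshared;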

The major restrictions here include a performance hit, and the fact that the
shared user must not have create table or temp permissions in the current
database.  However, as a list of db users which bypass the trigger
permissions is maintained, automated backup tools can still be used.  The
other user MUST have temp permissions (and preferably create table perms
too).

Of course, the choice of approaches also requires that user creation,
permissions activation, etc. are all handled by stored procedures, though
most of the logic will probably be moved back into triggers.

Best Wishes,
Chris Travers