Thread: Re: dblink: Add SCRAM pass-through authentication

Re: dblink: Add SCRAM pass-through authentication

From: Jacob Champion
On Wed, Feb 12, 2025 at 11:54 AM Matheus Alcantara
<matheusssilv97@gmail.com> wrote:
> Currently dblink_connstr_check and dblink_security_check only check whether the
> password is present in the connection options when the user is not a superuser.

They also check for GSS delegation. I think SCRAM passthrough should
ideally be considered a second form of credentials delegation, from
the perspective of those functions.
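
Very roughly, and only to illustrate the shape I have in mind (the extra
parameter and its name are placeholders of mine, not taken from your patch,
and I've left the existing GSS-delegation branch out):

static void
dblink_security_check(PGconn *conn, remoteConn *rconn, bool scram_passthrough)
{
    /* Superusers bypass the check entirely. */
    if (superuser())
        return;

    /* Delegated SCRAM keys count as credentials from the end user. */
    if (scram_passthrough)
        return;

    /* Otherwise the end user must have supplied a password. */
    if (!PQconnectionUsedPassword(conn))
    {
        PQfinish(conn);
        if (rconn)
            pfree(rconn);
        ereport(ERROR,
                (errcode(ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED),
                 errmsg("password is required"),
                 errdetail("Non-superuser cannot connect if the server does not request a password.")));
    }
}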

> I added
> this logic because the password is not required for SCRAM, but I agree with you
> that it sounds strange. Maybe these functions could check whether SCRAM is
> being used and then skip the password validation internally if needed?

As long as the end result is to enforce that the credentials must come
from the end user, I think that would be fine in theory.

> I also agree that we should enforce the use of SCRAM on the remote server for
> safety. To do this I think we could set require_auth=scram-sha-256 in the
> connection options if SCRAM pass-through is being used, so that we get a
> connection error if the remote does not use SCRAM. WYT?

We would need to verify that the user mapping can't overwrite that
with its own (less trusted) `require_auth` setting. (I think that
should be true already, but I'm not 100% sure.)

Hardcoding to scram-sha-256 would also prohibit the use of GSS or
standard password auth, now that I think about it. The docs currently
have a note about being able to choose... Should we add the other
permitted authentication types, i.e. `password,md5,scram-sha-256,gss`?
Or should we prohibit the use of other auth types if you've set
use_scram_passthrough? Or maybe there's an easier way to enforce the
use of the SCRAM keys, that better matches the current logic in
dblink_security_check?

> I can create a new patch to fix this in postgres_fdw too once we settle on the
> approach here for dblink.

Sounds good to me.

Thanks,
--Jacob



Re: dblink: Add SCRAM pass-through authentication

From: Matheus Alcantara
On Thu, Feb 13, 2025 at 8:25 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
> > I also agree that we should enforce the use of SCRAM on the remote server for
> > safety. To do this I think we could set require_auth=scram-sha-256 in the
> > connection options if SCRAM pass-through is being used, so that we get a
> > connection error if the remote does not use SCRAM. WYT?
>
> We would need to verify that the user mapping can't overwrite that
> with its own (less trusted) `require_auth` setting. (I think that
> should be true already, but I'm not 100% sure.)
>
Yeah, this is true. The user mapping and the fdw options can overwrite this.
I'll work on a fix for this.

> Hardcoding to scram-sha-256 would also prohibit the use of GSS or
> standard password auth, now that I think about it. The docs currently
> have a note about being able to choose... Should we add the other
> permitted authentication types, i.e. `password,md5,scram-sha-256,gss`?
> Or should we prohibit the use of other auth types if you've set
> use_scram_passthrough? Or maybe there's an easier way to enforce the
> use of the SCRAM keys, that better matches the current logic in
> dblink_security_check?
>
But would it be possible to use the SCRAM pass-through feature with another auth
method? We need the SCRAM keys (which are saved in MyProcPort during
scram_exchange) to perform the pass-through, and I don't think we would have
them with another auth type. That being said, I think we could prohibit the
use of other auth types when use_scram_passthrough is set; what do you think?
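
As a sketch of what "prohibit" could look like at connection time (purely
illustrative; the error code and wording are placeholders, and I'm assuming a
helper like UseScramPassthrough() that checks the server/user-mapping option):

/*
 * If SCRAM pass-through was requested but the local session was not
 * authenticated with SCRAM, fail instead of silently falling back to
 * another auth method.
 */
if (UseScramPassthrough(foreign_server, user_mapping) &&
    !(MyProcPort && MyProcPort->has_scram_keys))
    ereport(ERROR,
            (errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
             errmsg("SCRAM pass-through is enabled but the current session was not authenticated with SCRAM")));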

--
Matheus Alcantara



Re: dblink: Add SCRAM pass-through authentication

From: Jacob Champion
On Wed, Feb 19, 2025 at 12:02 PM Matheus Alcantara
<matheusssilv97@gmail.com> wrote:
> > Hardcoding to scram-sha-256 would also prohibit the use of GSS or
> > standard password auth, now that I think about it. The docs currently
> > have a note about being able to choose... Should we add the other
> > permitted authentication types, i.e. `password,md5,scram-sha-256,gss`?
> > Or should we prohibit the use of other auth types if you've set
> > use_scram_passthrough? Or maybe there's an easier way to enforce the
> > use of the SCRAM keys, that better matches the current logic in
> > dblink_security_check?
> >
> But would it be possible to use the SCRAM pass-through feature with another
> auth method?

No, but if you want the same foreign server to be accessible by users
who log in with different authentication types, forcing a single
require_auth setting will defeat that. I don't have a strong opinion
about how important maintaining that functionality is, but the code
seems to allow it today.

--

Some thoughts on v2-0001:

I like the conceptual simplification of get_connect_string().

> +   /* first time, allocate or get the custom wait event */
> +   if (dblink_we_connect == 0)
> +       dblink_we_connect = WaitEventExtensionNew("DblinkConnectPgServer");

Replacing two existing wait events with one new one is a user-facing
change (see the documented list of events at [1]). Maybe we want that,
but it hasn't been explained. I think that change should be made
independently of a refactoring patch (or else defended in the commit
message).

> +   if (foreign_server)
> +   {
> +       serverid = foreign_server->serverid;
> +       user_mapping = GetUserMapping(userid, serverid);
> +
> +       connstr = get_connect_string(foreign_server, user_mapping);
> +   }

Is there any other valid value for user_mapping that a caller can
choose? If not, I'd like to see the GetUserMapping call pushed back
down into get_connect_string(), unless there's another reason for
pulling it up that I'm missing.
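
i.e. roughly this shape (a sketch only; build_connstr_from_options is a
hypothetical stand-in for the existing option-appending loop):

static char *
get_connect_string(ForeignServer *foreign_server, Oid userid)
{
    UserMapping *user_mapping = GetUserMapping(userid,
                                               foreign_server->serverid);

    /* hypothetical stand-in for the existing server + mapping option loop */
    return build_connstr_from_options(foreign_server->options,
                                      user_mapping->options);
}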

> +   static const PQconninfoOption *options = NULL;
> +
> +   /*
> +    * Get list of valid libpq options.
> +    *
> +    * To avoid unnecessary work, we get the list once and use it throughout
> +    * the lifetime of this backend process.  We don't need to care about
> +    * memory context issues, because PQconndefaults allocates with malloc.
> +    */
> +   if (!options)
> +   {
> +       options = PQconndefaults();
> +       if (!options)           /* assume reason for failure is OOM */
> +           ereport(ERROR,
> +                   (errcode(ERRCODE_FDW_OUT_OF_MEMORY),
> +                    errmsg("out of memory"),
> +                    errdetail("Could not get libpq's default connection options.")));
> +   }

It looks like `options` might be an unused variable in connect_pg_server()?

> -       if (PQstatus(conn) == CONNECTION_BAD)
> +   if (!conn || PQstatus(conn) != CONNECTION_OK)

I don't think this should be changed in a refactoring patch.
PQstatus() can handle a NULL conn pointer.

> -       if (rconn)
> -           pfree(rconn);

Looks like this code in dblink_connect() was dropped.

Thanks!
--Jacob

[1] https://www.postgresql.org/docs/current/dblink.html



Re: dblink: Add SCRAM pass-through authentication

From: Matheus Alcantara
Hi, thanks for all the reviews. Attached v3 with some fixes.

> They also check for GSS delegation. I think SCRAM passthrough should
> ideally be considered a second form of credentials delegation, from
> the perspective of those functions.
>
Changed dblink_connstr_check and dblink_security_check to receive information
about whether SCRAM pass-through is being used and to perform the validations
accordingly. I'm just not sure whether the order in which I put the checks in
these functions is the best one; any input is welcome.

> We would need to verify that the user mapping can't overwrite that
> with its own (less trusted) `require_auth` setting. (I think that
> should be true already, but I'm not 100% sure.)
>
When adding require_auth to the user mapping options we get an error, `invalid
option "require_auth"`, but it was possible to add require_auth to the foreign
data wrapper server options. This new patch also adds a validation to
dblink_connstr_check and dblink_security_check to ensure that require_auth is
present in the connection properties with scram-sha-256 as its value. (We check
only the connstr, so I think performing the validation only in the first
security check function, dblink_connstr_check, would be enough, since I don't
think the connstr will be changed after that check, but I added it to both
functions just for sanity.)

> > > Hardcoding to scram-sha-256 would also prohibit the use of GSS or
> > > standard password auth, now that I think about it. The docs currently
> > > have a note about being able to choose... Should we add the other
> > > permitted authentication types, i.e. `password,md5,scram-sha-256,gss`?
> > > Or should we prohibit the use of other auth types if you've set
> > > use_scram_passthrough? Or maybe there's an easier way to enforce the
> > > use of the SCRAM keys, that better matches the current logic in
> > > dblink_security_check?
> > >
> > But would it be possible to use the SCRAM pass-through feature with another
> > auth method?
>
> No, but if you want the same foreign server to be accessible by users
> who log in with different authentication types, forcing a single
> require_auth setting will defeat that. I don't have a strong opinion
> about how important maintaining that functionality is, but the code
> seems to allow it today.
>
The hard-coded require_auth=scram-sha-256 will only be added if
use_scram_passthrough is set and the client that connects to the server also
uses the SCRAM auth type, so that we have the SCRAM keys available. So if the
user tries to authenticate with a different auth type on the same foreign
server that has use_scram_passthrough, require_auth=scram-sha-256 will not be
added, because MyProcPort->has_scram_keys is false.
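
In code terms the gating is roughly this (a simplified sketch of the idea, not
the exact hunk from v3; buf here stands for the conninfo string being
assembled):

/*
 * Only force SCRAM on the outgoing connection when pass-through is
 * actually possible, i.e. this backend holds the client's SCRAM keys
 * and the server/user mapping opted in.
 */
if (MyProcPort->has_scram_keys &&
    UseScramPassthrough(foreign_server, user_mapping))
    appendStringInfoString(&buf, " require_auth=scram-sha-256");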


> --
>
> Some thoughts on v2-0001:
>
> I like the conceptual simplification of get_connect_string().
>
> > +   /* first time, allocate or get the custom wait event */
> > +   if (dblink_we_connect == 0)
> > +       dblink_we_connect = WaitEventExtensionNew("DblinkConnectPgServer");
>
> Replacing two existing wait events with one new one is a user-facing
> change (see the documented list of events at [1]). Maybe we want that,
> but it hasn't been explained. I think that change should be made
> independently of a refactoring patch (or else defended in the commit
> message).
>
You're right. I've fixed it by changing connect_pg_server to receive the wait
event info.
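
Roughly, the shape is now (illustrative only; the real v3 code may differ in
details):

/* connect_pg_server() no longer allocates its own wait event ... */
static PGconn *connect_pg_server(const char *connstr, uint32 wait_event_info);

/* ... so each caller keeps reporting its existing, documented event. */
if (dblink_we_connect == 0)
    dblink_we_connect = WaitEventExtensionNew("DblinkConnect");
conn = connect_pg_server(connstr, dblink_we_connect);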

> > +   if (foreign_server)
> > +   {
> > +       serverid = foreign_server->serverid;
> > +       user_mapping = GetUserMapping(userid, serverid);
> > +
> > +       connstr = get_connect_string(foreign_server, user_mapping);
> > +   }
>
> Is there any other valid value for user_mapping that a caller can
> choose? If not, I'd like to see the GetUserMapping call pushed back
> down into get_connect_string(), unless there's another reason for
> pulling it up that I'm missing.
>
I agree; I just declared it outside of get_connect_string because in 0002 we
also need the user mapping for the UseScramPassthrough function, so I think it
makes the review easier:

+       useScramPassthrough = MyProcPort->has_scram_keys &&
+           UseScramPassthrough(foreign_server, user_mapping);

> > +   static const PQconninfoOption *options = NULL;
> > +
> > +   /*
> > +    * Get list of valid libpq options.
> > +    *
> > +    * To avoid unnecessary work, we get the list once and use it throughout
> > +    * the lifetime of this backend process.  We don't need to care about
> > +    * memory context issues, because PQconndefaults allocates with malloc.
> > +    */
> > +   if (!options)
> > +   {
> > +       options = PQconndefaults();
> > +       if (!options)           /* assume reason for failure is OOM */
> > +           ereport(ERROR,
> > +                   (errcode(ERRCODE_FDW_OUT_OF_MEMORY),
> > +                    errmsg("out of memory"),
> > +                    errdetail("Could not get libpq's default connection options.")));
> > +   }
>
> It looks like `options` might be an unused variable in connect_pg_server()?
>
You're right. Fixed.

> > -       if (PQstatus(conn) == CONNECTION_BAD)
> > +   if (!conn || PQstatus(conn) != CONNECTION_OK)
>
> I don't think this should be changed in a refactoring patch.
> PQstatus() can handle a NULL conn pointer.
>
Fixed

> > -       if (rconn)
> > -           pfree(rconn);
>
> Looks like this code in dblink_connect() was dropped.
>
Oops, fixed in connect_pg_server, since this logic was moved into that function.

## Some other changes

I also added a new TAP test case to ensure that we return an error if
require_auth is overwritten with another value.

## Questions:

- The new dblink_connstr_has_scam_require_auth function is very similar to
dblink_connstr_has_pw; should we create a common function for these, or leave
them duplicated? (A sketch of what a shared helper could look like is below.)
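
If we do unify them, one possible shape is a small helper on top of
PQconninfoParse (a sketch only; the name and signature here are just for
illustration):

/*
 * Return true if "option" appears in connstr with a non-empty value,
 * and (when required_value is given) with exactly that value.
 */
static bool
dblink_connstr_has_option(const char *connstr, const char *option,
                          const char *required_value)
{
    PQconninfoOption *opts = PQconninfoParse(connstr, NULL);
    bool        found = false;

    if (opts)
    {
        for (PQconninfoOption *opt = opts; opt->keyword != NULL; opt++)
        {
            if (strcmp(opt->keyword, option) != 0)
                continue;
            if (opt->val != NULL && opt->val[0] != '\0' &&
                (required_value == NULL ||
                 strcmp(opt->val, required_value) == 0))
                found = true;
            break;
        }
        PQconninfoFree(opts);
    }

    return found;
}

dblink_connstr_has_pw(connstr) would then become
dblink_connstr_has_option(connstr, "password", NULL), and the require_auth
check dblink_connstr_has_option(connstr, "require_auth", "scram-sha-256").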

--
Matheus Alcantara
