Fwd: BUG #10680: LDAP bind password leaks to log on failed authentication - Mailing list pgsql-bugs

From Steven Siebert
Subject Fwd: BUG #10680: LDAP bind password leaks to log on failed authentication
Date
Msg-id CAC3nzeg_yfK0vTUXqbut-uPnaXoRNLejkWYD+TBfZ8OaXQ8f2w@mail.gmail.com
In response to Re: BUG #10680: LDAP bind password leaks to log on failed authentication  (Steven Siebert <smsiebe@gmail.com>)
List pgsql-bugs
I apologize if this is a double post...I'm not sure if my message made it
to the list, as it's not in the mailing list archive.  I don't mean to seem
like I've gone silent...I may have received a bounce and not caught it.

V/R,

Steve



---------- Forwarded message ----------
From: Steven Siebert <smsiebe@gmail.com>
Date: Sun, Oct 12, 2014 at 8:31 PM
Subject: Re: [BUGS] BUG #10680: LDAP bind password leaks to log on failed
authentication
To: Tom Lane <tgl@sss.pgh.pa.us>
Cc: Bruce Momjian <bruce@momjian.us>, Magnus Hagander <magnus@hagander.net>,
Stephen Frost <sfrost@snowman.net>, pgsql-bugs <pgsql-bugs@postgresql.org>


Tom,

Your response is truly insightful - and completely valid.  Please, allow me
to explain our perspective...


> I still say that this is an ill-considered, unmaintainable, and
> fundamentally insecure approach to solving the wrong problem.
>

As you mentioned, the patch at hand isn't a complete solution, and I
certainly agree.  The objections and scenarios you have raised are solid
given the way the log is now, where items of interest to security auditors
(i.e. connection attempts, both successful and failed) are interleaved with
messages that may reveal sensitive information about the data or the
vulnerability of the server.  As you suggested, the underlying problem we
(US Government) have is that our data must follow access control policies
of least privilege/need to know.  As I mentioned before, our use case is
that specific logging data identified in the NIST database STIG must make
it to a centralized repository (e.g. Splunk), where the data must not
contain information not required to do the auditor's job.  But with this
particular issue, even though I implement filtering at the log record
level, filtering out all the other messages containing possible data
leakage or simply content auditors wouldn't care about, this is a specific
log record we need to track -- and it contains sensitive
information...and it can be fixed.
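To make the record-level filtering concrete, here is a minimal sketch of the whitelist-plus-redaction idea described above. The message patterns and the `ldapbindpasswd=` redaction key are illustrative assumptions for this sketch, not our actual filter or the exact PostgreSQL log format:

```python
import re

# Hypothetical whitelist of audit-relevant messages (connection
# attempts, auth success/failure); everything else is dropped.
WHITELIST = [
    re.compile(r"connection received"),
    re.compile(r"connection authorized"),
    re.compile(r"authentication failed"),
]

# Belt and braces: redact anything credential-shaped even in records
# that pass the whitelist, so a leak like this bug can't reach Splunk.
REDACT = re.compile(r"(ldapbindpasswd=)\S+")

def filter_record(line):
    """Return the sanitized record if audit-relevant, else None (dropped)."""
    if not any(p.search(line) for p in WHITELIST):
        return None
    return REDACT.sub(r"\1[REDACTED]", line)
```

The point of the sketch is that a downstream filter can only redact secrets it knows how to recognize -- which is exactly why a server-side fix to the one offending message is the more reliable option.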

Could there be spillage of other audit data?  Sure...there are procedures
in place to sanitize and to ensure the problem doesn't happen again --
application bugs happen, and at the end of the day all customers must
accept risk.  When it does happen, they call the devs (cue superhero
music) and we prevent it from happening again.  It's just that, in this
case, one of my developers identified a risk before we deployed to
production, and I'm trying to fix this "bug" prior to
accreditation...before it bites...because we know about it...and I have
fingers that can type code.

Then there is another, simpler view: should it be "OK" to routinely print
passwords in log events, especially log events meant to be seen by people
other than the database administrators?  Consolidating logging and using
tools like Splunk, Apache Flume, and Logstash is commonplace now in
enterprise and government, and the emergence of cloud logging services
like Loggly or Logentries makes spillages like this even scarier -- it can
even make people decide against PostgreSQL...


> As a single example of what's wrong with it, suppose that you
> fat-finger some syntax detail of an LDAP line in pg_hba.conf.
> When you issue "pg_ctl reload", will the postmaster log the broken
> line in the postmaster log?  I sure hope so, because not doing so
> would be a major usability fail.
>
> Will it obscure the RADIUS secret?
> No, because the syntax error will prevent it from correctly
> identifying which part of the line is the secret, if indeed it
> even realizes that the line might contain a secret.
>

Without going down the rabbit hole of discussing each particular scenario,
I think this is a good example of how designing with security as a
requirement, rather than as an aspect/afterthought, could help in the
design of a component.  There is an alternative here that can satisfy both
camps, following the well-known administrative precedent of fstab (don't
confidently update your fstab and then restart your server before
validating with mount -a or mount -vf...another story for another day).
Rather than logging the entire raw hba line when there are such errors --
which has the inherent inconvenience of requiring the administrator to
restart the (production?) application, possibly multiple times, and
diagnose a RUNTIME failure -- why not provide a utility built on the
postgresql codebase (which could find additional use in unit testing) that
validates the format/structure of the hba file, and even validates the
connection itself, providing a more verbose error at the command line
(where the admin obviously has permission)?  With this approach you don't
need to change your hba file, and you don't need to log raw lines known to
contain sensitive information to an audit message; you can log a line
number only and tell the admin to use the utility to validate and get more
information.  Bonus points for having a tool that allows an admin to
verify settings before restarting a server.
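As a rough illustration of the utility idea, here is a simplified sketch of a pre-flight validator that reports only line numbers, never raw line contents. The record types and field counts below are a deliberately reduced model of the real pg_hba.conf grammar, which has more record types and per-method options:

```python
# Simplified record types; the real pg_hba.conf grammar is richer
# (e.g. "hostgssenc" on newer servers, include directives, options).
VALID_TYPES = {"local", "host", "hostssl", "hostnossl"}

def validate_hba(lines):
    """Return 1-based line numbers failing basic structural checks.

    Deliberately reports positions only, never raw line contents,
    so secrets embedded in the file (RADIUS secrets, LDAP bind
    passwords) cannot leak into a log or terminal capture.
    """
    bad = []
    for lineno, raw in enumerate(lines, start=1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments are fine
        fields = line.split()
        if fields[0] not in VALID_TYPES:
            bad.append(lineno)          # unknown record type
        elif fields[0] == "local" and len(fields) < 4:
            bad.append(lineno)          # local: type, db, user, method
        elif fields[0] != "local" and len(fields) < 5:
            bad.append(lineno)          # host*: also needs an address
    return bad
```

An admin could run this against the file before `pg_ctl reload` and get "line 3 is malformed" rather than the raw line echoed into a shared log.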


> The right problem to be solving, to my mind, is that you feel a need
> to give access to the postmaster log to untrusted people.  Now maybe
> that's just a problem of wrong administrative procedures, but let's
> consider what we might do in PG to improve your ability to do that
> safely.  Perhaps what we should be entertaining is a proposal to have
> multiple log channels, some containing more security-relevant messages
> and others less so.  Then you could give people the ability to read only
> the non-security-relevant messages.  If we arranged for *all* messages
> relevant to pg_hba.conf to go into a secure log, it'd be a lot easier to
> convince ourselves that we would not leak any security-critical info
> than if we take the approach this patch proposes.
>
>
I appreciate you humoring our use case rather than disregarding it as an
invalid administrative procedure =)

I think log channels are a great idea!  But in this case it wouldn't help,
right?  This is the sole message that gets logged when auth_failed is
called -- something we specifically must whitelist through our filter, as
required by NIST.  This message, specifically, is the problem, thus my
focus on the individual issue.  We solve the log channels problem
essentially the same way suggested by postgres.org: parsing the log (
http://wiki.postgresql.org/images/9/9d/Logging_pgopen_withnotes.pdf).  In
our opinion, the way postgresql does logging is adequate, because we can
filter and route and do whatever we need with 1st-3rd party tools.  In
fact, I don't mean to prevent any awesome new features that could come out
of this discussion -- the logging we have now works, even at cloud scale
(tools have been developed to work around it).  The problem we have is
simply how this one log message gets formed.  I don't need a new logging
system to be created to solve this one issue =)

Thanks,

S
