Thread: Disk Encryption in Production

Disk Encryption in Production

From: Carlos Espejo
Anybody running their PostgreSQL server from an ecryptfs container? What are the common production setups out there? What are the drawbacks that people have experienced with their solution?

--
Carlos Espejo

Re: Disk Encryption in Production

From: Zenaan Harkness
On 3/26/14, Carlos Espejo <carlosespejo@gmail.com> wrote:
> Anybody running their PostgreSQL server from an ecryptfs container? What are
> the common production setups out there? What are the drawbacks that people
> have experienced with their solution?

I ran a couple of web servers off Full Disk Encryption (FDE) installs,
and I don't recommend that - if the host or VM needs restarting, you
need a side-channel login (which could be an expensive on-site admin
call, depending on your contract) to get past the boot-time "please
enter disk password" prompt, which I find very unpleasant.

Running an encrypted filesystem of some sort that is mounted after
normal bootup means that, at the least, you ought to be able to SSH in
to fix up any misconfigurations or to enter encrypted volume passwords,
etc.

I have not used ecryptfs, but it looks like a newer take on encfs, or
at least similar in design. I used encfs a few years ago, but it has
known security limitations and is no longer advised for applications
requiring genuine security. I cannot speak to ecryptfs itself.

I note in the Debian package description/manifest for encfs it says
"Encrypted data is stored within the native file system, thus no
fixed-size loopback image is required."

Certainly ecryptfs appears to share this advantageous trait (from a
utility perspective): no fixed-size loopback file is required, and in
addition individual files can be transferred from one filesystem to
another (perhaps for backup purposes).

I would add to that list a distinct likelihood that the 'encrypted'
files are more amenable to incremental backup, whereas an encrypted
loopback filesystem-in-a-file may be harder to back up incrementally -
by no means impossible, just different - and it depends on who's doing
your backups, who you want to have the keys, etc.
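For example (just a sketch - the directory, file and host names here
are placeholders), a per-file scheme lets you rsync the encrypted lower
directory file by file, whereas a loopback container has to be treated
as one big file:

    # per-file scheme: only changed encrypted files get transferred
    rsync -a /srv/.crypt-lower/ backuphost:/backup/crypt-lower/

    # loopback container: rsync must checksum the whole image; --inplace
    # avoids rewriting the destination copy from scratch, but it is coarser
    rsync -a --inplace /srv/crypt.img backuphost:/backup/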

HOWEVER1: per-file encryption systems usually have the limitation that
certain metadata of the encrypted files is visible (not encrypted) -
for example, the file name, access perms, ownership, a fair estimate of
file size, and quite probably other things like ACLs (if you use
those). Each of these may or may not be of concern to you or to your
particular threat model.

HOWEVER2: when creating certain encrypted loopback containers, it is
almost trivial to create a sparse file for your 'encrypted container'
(in my experience, dd does the job nicely), to then set that up as a
loopback encrypted device (use your standard procedure here), and
finally to format the encrypted container in your chosen filesystem's
"quick format" mode (i.e. effectively sparse). Truecrypt (not libre by
Debian's standards) used to provide such a 'sparse' mode. It is still
possible (at least on GNU/Linux) to use truecrypt (or the fully libre
tcplay) manually at the command line to achieve this with 'truecrypt'
volumes.
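For the record, here is a rough sketch of the same idea with
dm-crypt/LUKS rather than truecrypt/tcplay (the file path, size and
names are just placeholders):

    # create a 20G sparse file - no blocks are actually allocated yet
    dd if=/dev/zero of=/srv/crypt.img bs=1 count=0 seek=20G

    # attach it to a free loop device (prints e.g. /dev/loop0)
    losetup --find --show /srv/crypt.img

    # put LUKS on the loop device and open it
    cryptsetup luksFormat /dev/loop0
    cryptsetup luksOpen /dev/loop0 cryptvol

    # a "quick" mkfs only writes metadata, so the backing file stays
    # largely sparse
    mkfs.xfs /dev/mapper/cryptvol
    mount /dev/mapper/cryptvol /mnt/cryptvol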

But sparse containers have their own security issues too. There are
tradeoffs no matter what you do - convenience, speed, security and
more. You will need to do some substantial reading/research if you have
a genuine threat/need for an encrypted data store.

Conclusion: there are pros and cons all around - you DO have a threat
model, don't you? :)

Good luck,
Zenaan


Re: Disk Encryption in Production

From: Tim Spencer
On Mar 25, 2014, at 3:30 PM, Carlos Espejo <carlosespejo@gmail.com> wrote:
> Anybody running their PostgreSQL server from an ecryptfs container? What are the common production setups out there?
> What are the drawbacks that people have experienced with their solution?

    We run postgres on XFS on lvm volumes put on top of cloud block devices encrypted with LUKS.  It feels like a lot
of layers, but it lets us add more encrypted disk space on the fly very easily (especially since I've got all this
config set up in a chef cookbook).  It seems to work just fine.  I haven't done any testing, but I am pretty sure that
it adds latency.  But hey, if you need crypto, you need it.  :-)
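    Roughly, the layering looks like this (a sketch only - the device
names, VG name and mount point are placeholders; the real setup all
lives in the cookbook):

        # LUKS on the raw cloud block device, then LVM on top, then XFS
        cryptsetup luksFormat /dev/xvdf
        cryptsetup luksOpen /dev/xvdf pgcrypt0
        pvcreate /dev/mapper/pgcrypt0
        vgcreate pgvg /dev/mapper/pgcrypt0
        lvcreate -l 100%FREE -n pgdata pgvg
        mkfs.xfs /dev/pgvg/pgdata
        mount /dev/pgvg/pgdata /var/lib/postgresql

        # adding more encrypted space later: LUKS a new device, then grow the VG/LV
        cryptsetup luksFormat /dev/xvdg
        cryptsetup luksOpen /dev/xvdg pgcrypt1
        vgextend pgvg /dev/mapper/pgcrypt1
        lvextend -l +100%FREE pgvg/pgdata
        xfs_growfs /var/lib/postgresql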
    We currently store the keys to LUKS encrypted with the host's private chef key as a host attribute in the
chef-server so that the key data at rest would be safe, and we have an init script that the cookbook installs early in
the boot sequence that gets/decrypts the keys from chef, starts crypto up, and mounts the filesystems before postgres
starts up.  We've got some plans to improve this, but it's a heck of a lot better than storing them locally, and a heck
of a lot cheaper than a real HSM.
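    A minimal sketch of that boot-time step (hypothetical - fetch_luks_key
stands in for whatever pulls the key attribute from the chef-server and
decrypts it with the host's private chef key):

        #!/bin/sh
        # runs early in the boot sequence, before postgres starts
        fetch_luks_key | cryptsetup luksOpen --key-file=- /dev/xvdf pgcrypt0
        vgchange -ay pgvg
        mount /dev/pgvg/pgdata /var/lib/postgresql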

    Another option that we liked and tested out, but discarded because of cost, was Gazzang.  They have a really slick
setup. Pretty much plug n play, and work really well in the cloud, which is where we are. 

    The one thing that I have run into as a problem was doing this on a loopback device mapped to a file on a
host, rather than directly on a real block device.  We did this on some cassandra servers, and pretty quickly began
seeing corruption.  We never figured out where the problem was, but it was a real pain to deal with.  I'd avoid doing
that.

    Hope that helps.  Have fun!

        -tspencer