looking for a secure - Mailing list pgsql-general

From: Fran Fabrizio
Subject: looking for a secure
Date:
Msg-id: 3B66E1D7.2A4A87C2@mmrd.com
Responses: Re: looking for a secure (Joel Burton <jburton@scw.org>)
List: pgsql-general
Hi,

I apologize in advance for the long post, but I'm faced with a thorny problem and my lack of experience is showing. We are trying to figure out the best arrangement for our Pg server, and we have what appears to me to be a difficult-to-satisfy set of security requirements. Here's our setup...

We have a Pg database here at the central office, and two main groups of clients that need to talk to it. In the field, we have (or need to plan for) upwards of 10,000 client sites that need to both put data into the database (log messages) and take data out of it (downloading patches). These 10,000 clients talk to us over the internet, not over a private network of any sort. We also have a couple of dozen people who need to access a different section of the db through a web interface. Some of the data in the database is of a sensitive nature (IP addresses, account names, and passwords used to connect to the client sites). So, the challenge is to provide access to this db from the internet while making it reasonably hard for the wrong people to get in.

We've been tossing around a lot of different scenarios, and here's what we've come up with so far:

Scenario 1: We put a Pg server outside our firewall and another one behind it. The outer database contains only a subset of the total db schema, just enough to receive the log messages and provide the patches that need to be available for download. The internet clients connect to this outer database. The internal database contains the full db schema, and the log and patch tables are replicated over to it. Cons: replication doesn't seem to be a solid product yet, this would require two-way replication (log messages need to be moved internally, new available patches need to be moved onto the outer db), and it means we have two databases to maintain.

Scenario 2: Same hardware setup as Scenario 1, but instead of replication we have a cron'ed perl script, psql script, or something similar select from one db and insert into the other, and vice versa (a rough sketch of what I mean appears below). Cons: we still have two separate databases, it's not real time, and it seems like a hack to me.

Scenario 3: Punch a hole through the firewall, or move the main Pg database outside of the firewall, and make the main database available on the internet. Cons: security implications.

Scenario 3 seems the most elegant to me. It avoids having to set up some sort of replication/copying scheme and having the same data stored in two different places. But we are understandably nervous about hanging that main db out there on the internet, so I'm looking for the best recipe to minimize risk. Here's what I've thought about so far:

- All 10,000 clients can get a separate Pg user account. Are there performance issues? Can we then restrict to a certain user/IP combo? Can we restrict what actions they can take and what tables they can see, or just whether or not they have access to the db at all? Does this even help? (See the GRANT and pg_hba.conf sketches below.)

- SSL? Is this even possible? (Sketch below.) The db client on those 10,000 machines is going to be a very lightweight C program out of necessity (perl and other languages are not supported; these machines are old, and often we don't have permission to install new languages on them anyway).

- The sensitive data fields can be encrypted in some reversible but secure fashion when we store them in the database (see the pgcrypto sketch below).

- We can use things like tripwire, etc. to detect any unauthorized access to the db server machine.

- I have a nagging feeling I'm not seeing the big picture. Does Postgres have some other built-in security features that would help secure the box? Reverse lookups, maybe? Or something else?
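To make Scenario 2 concrete, here's the sort of thing I have in mind -- just a rough sketch with placeholder host, database, and table names; a real version would also have to track which rows have already been moved (a serial high-water mark, say) instead of recopying the whole table each run:

    #!/bin/sh
    # sync-logs.sh -- run from cron; pull log rows from the outer db
    # into the main one. Names are placeholders, and this naive version
    # recopies everything every time.
    psql -h outer-host -d outerdb -c "COPY log_messages TO STDOUT" \
        | psql -h main-host -d maindb -c "COPY log_messages FROM STDIN"

The reverse direction (pushing new patches out to the outer db) would be a second, similar pipe.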
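On the per-account idea: from what I can tell, Postgres can grant privileges per table, so a field-site account could be limited to exactly the two operations it needs and nothing else. A minimal sketch, with made-up role and table names:

    -- one account per client site (or one shared role for all sites);
    -- the table names here are hypothetical
    CREATE USER site_0001 WITH PASSWORD 'changeme';
    GRANT INSERT ON log_messages TO site_0001;
    GRANT SELECT ON patches TO site_0001;
    -- the sensitive tables simply grant these accounts nothing at all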
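And the user/IP combination looks like it's handled outside SQL, in pg_hba.conf, where each line matches a database, a user, and a source address (the exact format seems to vary between versions). Something like:

    # pg_hba.conf sketch -- database, user, and address are placeholders
    host  maindb  site_0001  192.0.2.10/32  md5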
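On SSL, it looks possible if the server is built with OpenSSL support. A sketch of what I think the config would be, assuming a server key and certificate are set up in the data directory:

    # postgresql.conf (recent releases)
    ssl = on

    # pg_hba.conf -- accept field clients from anywhere, but only over SSL
    hostssl  maindb  all  0.0.0.0/0  md5

On the client side, I believe an SSL-aware libpq can request encryption through the connection string, so our lightweight C program would just need to link against libpq rather than implement anything itself.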
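For the reversible encryption, the contrib pgcrypto module might cover it -- recent versions provide symmetric encrypt/decrypt functions. A sketch with a hypothetical table and columns, and with the key inline only for brevity (it would really have to live in the application, not the database):

    -- password_enc is a bytea column; the key string is a placeholder
    UPDATE client_sites
       SET password_enc = pgp_sym_encrypt(password_plain, 'app-held-key');

    SELECT pgp_sym_decrypt(password_enc, 'app-held-key')
      FROM client_sites
     WHERE site_id = 42;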
I'm really interested in seeing what other people have done to alleviate these types of concerns, and what, if anything, I am missing as I approach the problem.

Thanks for your time,
Fran