On Mon, Sep 21, 2009 at 4:03 PM, Scot Kreienkamp <SKreien@la-z-boy.com> wrote:
> On the contrary, we've been running PG in production for years now under VMWare. Same with MSSQL. We've never had
> any problems. Less so than an actual physical machine, actually, since we can move the server to different physical
> hardware on demand. Also makes disaster recovery MUCH easier.
>
> However, VMWare does have its places. A high usage database is not one of them, IMHO. A moderately or less used
> one, depending on requirements and the hardware backing it, is often a good fit. And I agree with Scott about the
> snapshots. They do tend to cause temporary communication issues with a running virtual machine occasionally, regardless
> of OS or DB type. (The benefits outweigh the risks 99% of the time though, with backups being that 1%.) In my
> experience the level of interference from snapshotting a virtual machine also depends on the type and speed of your
> physical disks backing the VMWare host and the size of the virtual machine and any existing snapshot. I've been told
> that in vSphere (VMWare 4.0) this will be significantly improved.
I agree with pretty much everything you've said. I would never put a
high load system on vmware, but testing, workstation, development,
legacy etc is all good. I've never had any type of filesystem
corruption. I'm guessing the OP's issues are either coming from NFS
or hardware problems on the SAN (IMO not likely). I would however
check all software versions, etc. and make sure it's all up to date.
Personally, I avoid NFS like the plague for anything other than basic
file serving. Running a database through a NAS gateway to a SAN is
crazy... even if it works, performance is going to suck. If it were me
and this was a critical database, I'd dedicate a LUN on the SAN,
run a fiber cable direct to the VMware box, and mount the storage
directly.
merlin