Thread: RAID Controller (HP P400) beat by SW-RAID?
We've currently got PG 8.4.4 running on a whitebox hardware setup, with (2) 5410 Xeon's and 16GB of RAM. It's also got (4) 7200RPM SATA drives, using the onboard IDE controller and ext3.
A few weeks back, we purchased two refurb'd HP DL360 G5's, and were hoping to set them up with PG 9.0.2, running replicated. These machines have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the HP SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime) partition, with the drives configured as RAID 1+0 (it seems with this controller, I cannot do JBOD). I've spent a few hours going back and forth benchmarking the new systems, and have set up the DWC and the accelerator cache using hpacucli. I've tried accelerator caches of 25/75, 50/50, and 75/25.
To start with, I've set the "relevant" parameters in postgresql.conf the same on the new config as the old:
max_connections = 150
shared_buffers = 6400MB (have tried as high as 20GB)
work_mem = 20MB (have tried as high as 100MB)
effective_io_concurrency = 6
fsync = on
synchronous_commit = off
wal_buffers = 16MB
checkpoint_segments = 30 (have tried 200 when I was loading the db)
random_page_cost = 2.5
effective_cache_size = 10240MB (have tried as high as 16GB)
First thing I noticed is that it takes the same amount of time to load the db (about 40 minutes) on the new hardware as the old hardware. I was really hoping with the faster, additional drives and a hardware RAID controller, that this would be faster. The database is only about 9GB with pg_dump (about 28GB with indexes).
Using pgfouine I've identified about 10 "problematic" SELECT queries that take anywhere from 0.1 seconds to 30 seconds on the old hardware. Running these same queries on the new hardware gives results in the 0.2 to 66 second range. That is, it's twice as slow.
I've tried increasing the shared_buffers, and some other parameters (work_mem), but haven't yet seen the new hardware perform even at the same speed as the old hardware.
I was really hoping that with hardware RAID that something would be faster (loading times, queries, etc...). What am I doing wrong?
About the only thing left that I know to try is to drop the RAID1+0 and go to RAID0 in hardware, and do RAID1 in software. Any other thoughts?
Thanks!
--
Anthony
* Anthony Presley (anthony@resolution.com) wrote:
> I was really hoping that with hardware RAID that something would be faster
> (loading times, queries, etc...). What am I doing wrong?

ext3 and ext4 do NOT perform identically out of the box. You might be
running into the write-barriers problem here, with ext4 forcing the RAID
controller to push commits all the way to the hard drive before returning
(thus making the BBWC next to useless). You might try ext3 on the new
system instead.

Also, the P800s are definitely better than the P400s, but I don't know that
it's the controller that's really the issue here.

Thanks,
Stephen
Dne 12.9.2011 00:44, Anthony Presley napsal(a):
> We've currently got PG 8.4.4 running on a whitebox hardware set up,
> with (2) 5410 Xeon's, and 16GB of RAM. [...]
> I've spent a few hours going back and forth benchmarking the new
> systems, and have set up the DWC, and the accelerator cache using
> hpacucli. I've tried accelerator caches of 25/75, 50/50, and 75/25.

What is an 'accelerator cache'? Is that the cache on the controller? Then
give 100% to the write cache - the read cache does not need to be protected
by the battery; the page cache at the OS level can provide the same service.

Provide more details about the ext3/ext4 setup - there are various data
modes (writeback, ordered, journal) and various other settings (barriers,
stripe size, ...) that matter.

According to a benchmark I ran a few days back, the performance difference
between ext3 and ext4 is rather small when comparing equally configured
file systems (i.e. data=journal vs. data=journal). With a read-only workload
(e.g. just SELECT statements), the journal config does not matter (journal
is just as fast as writeback). See for example these comparisons:

read-only workload: http://bit.ly/q04Tpg
read-write workload: http://bit.ly/qKgWgn

ext4 is usually a bit faster than equally configured ext3, but the
difference should not be 100%.

> To start with, I've set the "relevant" parameters in postgresql.conf
> the same on the new config as the old: [...]
>
> First thing I noticed is that it takes the same amount of time to
> load the db (about 40 minutes) on the new hardware as the old
> hardware. [...]
>
> Using pgfouine I've identified about 10 "problematic" SELECT queries
> that take anywhere from .1 seconds to 30 seconds on the old
> hardware. Running these same queries on the new hardware is giving me
> results in the .2 to 66 seconds. IE, it's twice as slow.
>
> I've tried increasing the shared_buffers, and some other parameters
> (work_mem), but haven't yet seen the new hardware perform even at
> the same speed as the old hardware.

In that case one of the assumptions is wrong - for example, the new RAID is
slow for some reason: bad stripe size, slow controller, ...

Do some basic hardware benchmarking, i.e. use bonnie++ to benchmark the
disks, etc. Only if this produces the expected results (i.e. the new
hardware performs better) does it make sense to mess with the database.

Tomas
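Tomas's bonnie++ suggestion could look something like the following sketch. The mount point, run-as user, and file size are assumptions - bonnie++ wants a file set of at least twice RAM so the OS page cache can't satisfy the reads:

```shell
# Benchmark the new array's raw throughput before blaming PostgreSQL.
# -d: directory on the RAID volume, -s: file size (2x the 36GB of RAM
# here so reads actually hit the disks), -n 0: skip the small-file
# creation tests, -u: user to run as.
bonnie++ -d /mnt/pgdata/bench -s 72g -n 0 -u postgres

# A quick sequential-write sanity check with dd; oflag=direct bypasses
# the page cache so you measure the controller path, not RAM.
dd if=/dev/zero of=/mnt/pgdata/bench/ddtest bs=8k count=1000000 oflag=direct
```

If these numbers don't clearly beat the old box, the problem is below the database and no postgresql.conf tuning will fix it.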
On September 11, 2011 03:44:34 PM Anthony Presley wrote:
> First thing I noticed is that it takes the same amount of time to load the
> db (about 40 minutes) on the new hardware as the old hardware. I was
> really hoping with the faster, additional drives and a hardware RAID
> controller, that this would be faster. The database is only about 9GB
> with pg_dump (about 28GB with indexes).

Loading the DB is going to be CPU-bound (on a single core), unless your
disks really suck, which they don't. Most of the time will be spent
building indexes.

I don't know offhand why the queries are slower, though, unless you're not
getting as much cached before testing as on the older box.
On Sun, Sep 11, 2011 at 4:44 PM, Anthony Presley <anthony@resolution.com> wrote:
> We've currently got PG 8.4.4 running on a whitebox hardware set up, with
> (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA drives,
> using the onboard IDE controller and ext3.
>
> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were
> hoping to set them up with PG 9.0.2, running replicated. These machines
> have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the
> HP SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime)
> partition [...]

Two issues here. One is that the onboard controller and disks on the old
machine might not be obeying fsync properly, giving a speed boost at the
expense of crash safety. Two is that the P400 has gotten pretty horrible
performance reviews on this list in the past.
> From: pgsql-performance-owner@postgresql.org
> [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Anthony Presley
> Sent: Sunday, September 11, 2011 4:45 PM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?
>
> We've currently got PG 8.4.4 running on a whitebox hardware set up, with
> (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA drives,
> using the onboard IDE controller and ext3.
>
> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were
> hoping to set them up with PG 9.0.2, running replicated. These machines
> have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the
> HP SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime)
> partition, and the drives configured as RAID 1+0 (seems with this
> controller, I cannot do JBOD). I've spent a few hours going back and forth
> benchmarking the new systems, and have set up the DWC, and the accelerator
> cache using hpacucli. I've tried accelerator caches of 25/75, 50/50, and
> 75/25.

I would start off by recommending a more current version of 9.0, like 9.0.4,
since you are building a new box. The rumor mill says 9.0.5 and 9.1.0 might
be out soon (days?), but that is just the rumor mill. Don't bank on it.

What kernel are you on?

Long time HP user here, for better and worse... so here are a few other
little things I recommend.

Check the BIOS power management. Make sure it is set where you want it
(IIRC the G5s have this; I know G6s and G7s do). This can help with nasty
latency problems if the box has been idle for a while and then needs to
start doing work.

The P400i is not a great card compared to more modern ones, but you should
be able to beat the old setup with what you have: faster-clocked CPUs, more
spindles, faster-RPM spindles.

Assuming the battery is working, with XFS or ext4 you can use the nobarrier
mount option and you should see some improvement.

Make sure the RAID card's firmware is current. I can't stress this enough.
HP fixed a nasty bug with RAID 1+0 a few months ago where you could eat
your data... They also seem to be fixing a lot of other bugs along the way
as well. So do yourself a big favor and make sure that firmware is current.
It might just head off a headache down the road.

Also make sure you have an 8.10.? (IIRC the version number right) or better
version of hpacucli... there have been some fixes to that utility as well.
IIRC most of the fixes have been around recognizing newer cards (812s and
410s), but some interface bugs have been fixed as well. You may need new
packages for HP health (I don't recall the official name), since new
versions of hpacucli might not play well with old versions of HP health.

It's HP, so they have a new version about every month for firmware and
their CLI utility... that's HP for us.

Anyways, that is my fast input. Best of luck,

-Mark
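For reference, the controller state Mark is talking about can be inspected and changed with hpacucli along these lines (the slot number is an assumption; run `hpacucli ctrl all show` first to find yours):

```shell
# Show each controller with its firmware revision and battery status.
hpacucli ctrl all show status

# Detailed view: cache ratio, BBWC/battery state, drive write cache.
hpacucli ctrl slot=0 show detail

# Give the whole 512MB BBWC to writes, as suggested elsewhere in this
# thread - the OS page cache already handles read caching.
hpacucli ctrl slot=0 modify cacheratio=0/100
```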
On Sun, Sep 11, 2011 at 6:17 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
Dne 12.9.2011 00:44, Anthony Presley napsal(a):
> We've currently got PG 8.4.4 running on a whitebox hardware set up,
> with (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM
> SATA drives, using the onboard IDE controller and ext3.
>
> A few weeks back, we purchased two refurb'd HP DL360's G5's, and
> were hoping to set them up with PG 9.0.2, running replicated. These
> machines have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and
> are using the HP SA P400i with 512MB of BBWC. PG is running on an
> ext4 (noatime) partition, and the drives configured as RAID 1+0
> (seems with this controller, I cannot do JBOD). I've spent a few
> hours going back and forth benchmarking the new systems, and have set
> up the DWC, and the accelerator cache using hpacucli. I've tried
> accelerator caches of 25/75, 50/50, and 75/25.

What is an 'accelerator cache'? Is that the cache on the controller?
Then give 100% to the write cache - the read cache does not need to be
protected by the battery; the page cache at the OS level can provide the
same service.
It is the cache on the controller. I've tried giving 100% to that cache.
Provide more details about the ext3/ext4 - there are various data modes
(writeback, ordered, journal), various other settings (barriers, stripe
size, ...) that matter.
ext3 (on the old server) is using CentOS 5.2 defaults for mounting.
ext4 (on the new server) is using noatime,barrier=0
According to the benchmark I've done a few days back, the performance
difference between ext3 and ext4 is rather small, when comparing equally
configured file systems (i.e. data=journal vs. data=journal) etc.
With read-only workload (e.g. just SELECT statements), the config does
not matter (e.g. journal is just as fast as writeback).
See for example these comparisons
read-only workload: http://bit.ly/q04Tpg
read-write workload: http://bit.ly/qKgWgn
The ext4 is usually a bit faster than equally configured ext3, but the
difference should not be 100%.
Yes - it's very strange.
> To start with, I've set the "relevant" parameters in postgresql.conf
> the same on the new config as the old:
>
> max_connections = 150 shared_buffers = 6400MB (have tried as high as
> 20GB) work_mem = 20MB (have tried as high as 100MB)
> effective_io_concurrency = 6 fsync = on synchronous_commit = off
> wal_buffers = 16MB checkpoint_segments = 30 (have tried 200 when I
> was loading the db) random_page_cost = 2.5 effective_cache_size =
> 10240MB (have tried as high as 16GB)
>
> First thing I noticed is that it takes the same amount of time to
> load the db (about 40 minutes) on the new hardware as the old
> hardware. I was really hoping with the faster, additional drives and
> a hardware RAID controller, that this would be faster. The database
> is only about 9GB with pg_dump (about 28GB with indexes).
>
> Using pgfouine I've identified about 10 "problematic" SELECT queries
> that take anywhere from .1 seconds to 30 seconds on the old
> hardware. Running these same queries on the new hardware is giving me
> results in the .2 to 66 seconds. IE, it's twice as slow.
>
> I've tried increasing the shared_buffers, and some other parameters
> (work_mem), but haven't yet seen the new hardware perform even at
> the same speed as the old hardware.

In that case one of the assumptions is wrong - for example, the new RAID
is slow for some reason: bad stripe size, slow controller, ...
Do some basic hardware benchmarking, i.e. use bonnie++ to benchmark the
disks, etc. Only if this produces the expected results (i.e. the new
hardware performs better) does it make sense to mess with the database.
Tomas
--
Anthony Presley
Mark,
On Sun, Sep 11, 2011 at 10:10 PM, mark <dvlhntr@gmail.com> wrote:
> From: pgsql-performance-owner@postgresql.org
> [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Anthony Presley
> Sent: Sunday, September 11, 2011 4:45 PM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?
>
> We've currently got PG 8.4.4 running on a whitebox hardware set up, with
> (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA drives,
> using the onboard IDE controller and ext3.
>
> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were
> hoping to set them up with PG 9.0.2, running replicated. These machines
> have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the
> HP SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime)
> partition, and the drives configured as RAID 1+0 (seems with this
> controller, I cannot do JBOD). I've spent a few hours going back and forth
> benchmarking the new systems, and have set up the DWC, and the accelerator
> cache using hpacucli. I've tried accelerator caches of 25/75, 50/50, and
> 75/25.

I would start off by recommending a more current version of 9.0, like 9.0.4,
since you are building a new box. The rumor mill says 9.0.5 and 9.1.0 might
be out soon (days?), but that is just the rumor mill. Don't bank on it.
Looks like 9.1 was released today - I may upgrade to that for our testing. I was just using whatever is in the repo.
What kernel are you on ?
2.6.18-238.19.1.el5
Long time HP user here, for better and worse... so here are a few other
little things I recommend.
Thanks!
Check the bios power management. Make sure it is set where you want it.
(IIRC the G5s have this, I know G6s and G7s do). This can help with nasty
latency problems if the box has been idle for a while then needs to start
doing work.
I've checked those, they look ok.
The p400i is not a great card, compared to more modern one, but you should
be able to beat the old setup with what you have. Faster clocked cpu's more
spindles, faster RPM spindles.
I've upgraded the CPU's to X5470's today, to see if that helps.
Assuming the battery is working, with XFS or ext4 you can use nobarrier
mount option and you should see some improvement.
I've been using:
noatime,data=writeback,defaults
I will try:
noatime,data=writeback,barrier=0,defaults
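For reference, that mount configuration as an /etc/fstab entry would look something like the following sketch (the device name and mount point are assumptions for a P400i/cciss system; barrier=0 is only reasonable here because the controller cache is battery-backed):

```
# /etc/fstab entry for the PostgreSQL data volume (sketch).
# data=writeback + barrier=0 rely on the P400i's battery-backed write
# cache to keep reordered writes safe across a power failure.
/dev/cciss/c0d0p2  /var/lib/pgsql  ext4  noatime,data=writeback,barrier=0  0 2
```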
Make sure the raid card's firmware is current. I can't stress this enough.
HP fixed a nasty bug with Raid 1+0 a few months ago where you could eat your
data... They also seem to be fixing a lot of other bugs along the way as
well. So do yourself a big favor and make sure that firmware is current. It
might just head off headache down the road.
I downloaded the latest firmware DVD on Thursday and ran that - everything is up to date.
Also make sure you have a 8.10.? (IIRC the version number right) or better
version of hpacucli... there have been some fixes to that utility as well.
IIRC most of the fixes in this have been around recognizing newere cards
(812s and 410s) but some interface bugs have been fixed as well. You may
need new packages for HP health. (I don't recall the official name, but new
versions if hpacucli might not play well with old versions of hp health.
I got that as well - thanks!
Its HP so they have a new version about every month for firmware and their
cli utility... that’s HP for us.
Anyways that is my fast input.
Best of luck,
Thanks!
--
Anthony Presley
So, today, I did the following:
- Swapped out the 5410's (2.3Ghz) for 5470's (3.33Ghz)
- Set the ext4 mount options to be noatime,barrier=0,data=writeback
- Installed PG 9.1 from the yum repo
Item one:
With the accelerator cache set to 0/100 (all 512MB for writing), loading the db / creating the indexes was about 8 minutes faster. Was hoping for more, but didn't get it. If I split the CREATE INDEXes into separate psql instances, will that be done in parallel?
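On the parallel-index question: separate psql sessions do build their indexes concurrently (each CREATE INDEX is single-threaded, but different sessions run on different CPUs), and pg_restore can parallelize a whole restore for you. A sketch - the database, dump file, and index names are placeholders:

```shell
# Option 1: let pg_restore parallelize the data load and index builds
# itself. Requires a custom-format dump; -j sets the number of parallel
# jobs (roughly one per core or spindle).
pg_dump -Fc mydb > mydb.dump
pg_restore -j 8 -d mydb_new mydb.dump

# Option 2: hand-rolled parallelism - one CREATE INDEX per psql
# session, then wait for all of them to finish.
psql -d mydb_new -c "CREATE INDEX idx_orders_customer ON orders (customer_id);" &
psql -d mydb_new -c "CREATE INDEX idx_orders_created ON orders (created_at);" &
wait
```

With only 512MB of BBWC and six spindles, more than a handful of parallel jobs will likely just contend for the same disks.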
Item two:
I'm still getting VERY strange results in my SELECT queries.
For example, on the new server:
http://explain.depesz.com/s/qji - This takes 307ms, all the time. Doesn't matter if it's "cached", or fresh from a reboot.
Same query on the live / old server:
http://explain.depesz.com/s/8Pd - This can take 2-3s the first time, but then takes 42ms once it's cached.
Both of these servers have the same indexes, and almost identical data. However, the old server is doing some different planning than the new server.
What did I switch (or should I unswitch)?
--
Anthony
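An aside on the planner question above: one quick way to rule out configuration drift between the two servers is to diff the non-default settings on each box (hostnames and database name are placeholders; pg_settings has boot_val from 8.4 on, so this works on both servers):

```shell
# Dump every setting that differs from its compiled-in default on each
# server, then diff the lists. Planner knobs such as random_page_cost,
# effective_cache_size, or any enable_* flags will show up if they
# differ. Expect some noise from version-specific entries (8.4 vs 9.1).
query="SELECT name, setting FROM pg_settings WHERE setting <> boot_val ORDER BY name;"
psql -h old-server -d mydb -Atc "$query" > old.settings
psql -h new-server -d mydb -Atc "$query" > new.settings
diff old.settings new.settings
```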
On Sun, Sep 11, 2011 at 9:12 PM, Alan Hodgson <ahodgson@simkin.ca> wrote:
On September 11, 2011 03:44:34 PM Anthony Presley wrote:
> First thing I noticed is that it takes the same amount of time to load the
> db (about 40 minutes) on the new hardware as the old hardware. I was
> really hoping with the faster, additional drives and a hardware RAID
> controller, that this would be faster. The database is only about 9GB
> with pg_dump (about 28GB with indexes).

Loading the DB is going to be CPU-bound (on a single core), unless your
disks really suck, which they don't. Most of the time will be spent
building indexes.

I don't know offhand why the queries are slower, though, unless you're not
getting as much cached before testing as on the older box.
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On 12/09/11 15:10, mark wrote:
> I would start off by recommending a more current version of 9.0, like
> 9.0.4, since you are building a new box. The rumor mill says 9.0.5 and
> 9.1.0 might be out soon (days?), but that is just the rumor mill. Don't
> bank on it.
> [...]

pg 9.1.0 has already been released! I have had it installed and running for
just under 24 hours... though http://www.postgresql.org/ is still not
showing it, see:

http://www.postgresql.org/ftp/source/
and
http://jdbc.postgresql.org/download.html

Cheers,
Gavin
On 12-9-2011 0:44 Anthony Presley wrote:
> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were
> hoping to set them up with PG 9.0.2, running replicated. These machines
> have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the
> HP SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime)
> partition, and the drives configured as RAID 1+0 (seems with this
> controller, I cannot do JBOD).

If you really want a JBOD setup, you can try a RAID0 for each available
disk, i.e. in your case 6 separate RAID0's. That's how we configured our
Dell H700 - which doesn't offer JBOD either - for ZFS.

Best regards,
Arjen
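Arjen's workaround would translate to hpacucli roughly as follows. The slot number and drive addresses are assumptions - `hpacucli ctrl slot=0 physicaldrive all show` lists the real ones:

```shell
# Emulate JBOD on a controller that won't do it: one single-drive
# RAID0 logical volume per physical disk.
hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
hpacucli ctrl slot=0 create type=ld drives=1I:1:3 raid=0
# ...repeat for the remaining three drives, then verify:
hpacucli ctrl slot=0 logicaldrive all show
```

Software RAID (or ZFS) can then be layered over the resulting /dev/cciss devices.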
On Tue, Sep 13, 2011 at 1:22 AM, Arjen van der Meijden <acmmailing@tweakers.net> wrote:
> On 12-9-2011 0:44 Anthony Presley wrote:
>> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were
>> hoping to set them up with PG 9.0.2, running replicated. These machines
>> have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the
>> HP SA P400i with 512MB of BBWC. PG is running on an ext4 (noatime)
>> partition, and the drives configured as RAID 1+0 (seems with this
>> controller, I cannot do JBOD).
>
> If you really want a JBOD-setup, you can try a RAID0 for each available
> disk, i.e. in your case 6 separate RAID0's. That's how we configured our
> Dell H700 - which doesn't offer JBOD as well - for ZFS.
That's a pretty good idea ... I'll try that on our second server today. In the meantime, after tweaking it a bit, we were able to get (with iozone):
                   Old      New
Initial write     75.85    220.68
Rewrite           63.95    253.07
Read              45.04    171.35
Re-read           45.00   2405.23
Random read       27.56   1733.46
Random write      50.70    239.47
Not as fast as I'd like, but faster than the old disks, for sure.
--
Anthony
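For anyone wanting to reproduce numbers like these, an iozone run along the following lines exercises the same six tests (the file path and size are assumptions; the file should exceed RAM, or the re-read and random-read columns will mostly measure the page cache - as the very large re-read figure above suggests happened here):

```shell
# Reproduce the six iozone tests in the table above:
# -i 0 = write/rewrite, -i 1 = read/re-read, -i 2 = random read/write.
# -s sets the file size, -r the record size (8k matches the PostgreSQL
# page size), -f the location of the test file.
iozone -i 0 -i 1 -i 2 -s 64g -r 8k -f /mnt/pgdata/iozone.tmp
```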
On 09/11/2011 06:44 PM, Anthony Presley wrote:
> We've currently got PG 8.4.4 running on a whitebox hardware set up,
> with (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM SATA
> drives, using the onboard IDE controller and ext3.
>
> A few weeks back, we purchased two refurb'd HP DL360's G5's, and were
> hoping to set them up with PG 9.0.2, running replicated. These
> machines have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and
> are using the HP SA P400i with 512MB of BBWC. PG is running on an
> ext4 (noatime) partition, and the drives configured as RAID 1+0
> (seems with this controller, I cannot do JBOD).
> [...]
> To start with, I've set the "relevant" parameters in postgresql.conf
> the same on the new config as the old:
>
> fsync = on
> synchronous_commit = off

The main thing that a hardware RAID controller improves on is being able to
write synchronous commits much faster than you can do without one. If
you've turned that off, you've essentially neutralized its primary value.
In every other respect, software RAID is faster: the CPUs in your server
are much faster than the I/O processor on the card, and Linux has a lot
more memory for caching than it does, too.

Turning off sync commit may be fine for loading, but you'll be facing data
loss at every server interruption if you roll things out like that. It's
not realistic production performance for most places running like that. A
lot of your test results seem like they may be using different levels of
write reliability, which makes things less fair than they should be too -
normally in favor of the cheap IDE drives. Check out
http://wiki.postgresql.org/wiki/Reliable_Writes for more information about
that topic.

--
Greg Smith   2ndQuadrant US   greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support   www.2ndQuadrant.us
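To make Greg's distinction concrete, here is the trade-off as postgresql.conf fragments (a sketch, using the values posted earlier in the thread):

```
# What was benchmarked in this thread - commits are acknowledged before
# the WAL reaches the controller, so a crash can silently lose the most
# recent transactions:
fsync = on
synchronous_commit = off

# What the BBWC is actually for - every commit waits for the WAL flush,
# but the battery-backed cache acknowledges that flush from RAM instead
# of waiting for a disk rotation, so it stays fast AND durable:
fsync = on
synchronous_commit = on
```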