Thread: hardware advice
Hi everyone, I want to buy a new server, and am contemplating a Dell R710 or the newer R720. The R710 has the X5600 series CPU, while the R720 has the newer E5-2600 series CPU. At this point I'm dealing with a fairly small database of 8 to 9 GB. The server will be dedicated to Postgres and a C++ based middle tier. The longest operations right now are loading the item list (80,000 items) and checking On Hand for an item. The item list does a sum for each item to get OH. The database design is out of my control. The on_hand lookup table currently has 3 million rows after 4 years of data. My main question is: will an E5-2660 perform faster than an X5690? I'm leaning toward clock speed because I know doing the sum of those rows is CPU intensive, but I have not done extensive research to see whether the newer CPUs will outperform the X5690 per clock cycle. Overall the current CPU is hardly busy (load average after 1 min: 0.81, 0.46, 0.30, with CPU % never exceeding 50%), but the speed increase is something I'm ready to pay for if it will actually be noticeably faster. I'm comparing the E5-2660 rather than the 2690 because of price. For both servers I'd have at least 32GB RAM and 4 hard drives in RAID 10. Best regards, Mark
On Thu, Sep 27, 2012 at 4:11 PM, M. D. <lists@turnkey.bz> wrote: > At this point I'm dealing with a fairly small database of 8 to 9 GB. ... > The on_hand lookup table > currently has 3 million rows after 4 years of data. ... > For both servers I'd have at least 32GB Ram and 4 Hard Drives in raid 10. For a 9GB database, that amount of RAM seems like overkill to me. Unless you expect to grow a lot faster than you've been growing, or perhaps your middle tier consumes a lot of those 32GB, I don't see the point there.
On Thu, Sep 27, 2012 at 12:11 PM, M. D. <lists@turnkey.bz> wrote: > Hi everyone, > > I want to buy a new server, and am contemplating a Dell R710 or the newer > R720. The R710 has the x5600 series CPU, while the R720 has the newer > E5-2600 series CPU. > > At this point I'm dealing with a fairly small database of 8 to 9 GB. The > server will be dedicated to Postgres and a C++ based middle tier. The > longest operations right now is loading the item list (80,000 items) and > checking On Hand for an item. The item list does a sum for each item to get > OH. The database design is out of my control. The on_hand lookup table > currently has 3 million rows after 4 years of data. > > My main question is: Will a E5-2660 perform faster than a X5690? I'm leaning > to clock speeds because I know doing the sum of those rows is CPU intensive, > but have not done extensive research to see if the newer CPUs will > outperform the x5690 per clock cycle. Overall the current CPU is hardly busy > (after 1 min) - load average: 0.81, 0.46, 0.30, with % never exceeding 50%, > but the speed increase is something I'm ready to pay for if it will actually > be noticeably faster. > > I'm comparing the E5-2660 rather than the 2690 because of price. > > For both servers I'd have at least 32GB Ram and 4 Hard Drives in raid 10. I don't think you've supplied enough information for anyone to give you a meaningful answer. What's your current configuration? Are you I/O bound, CPU bound, memory limited, or some other problem? You need to do a specific analysis of the queries that are causing you problems (i.e. why do you need to upgrade at all?) Regarding Dell ... we were disappointed by Dell. They're expensive, they try to lock you in to their service contracts, and (when I bought two) they lock you in to their replacement parts, which cost 2-3x what you can buy from anyone else. If you're planning to use a RAID 10 configuration, then a BBU cache will make more difference than almost anything else you can do. I've heard that Dell's current RAID controller is pretty good, but in the past they've re-branded other controllers as "Perc XYZ" and you couldn't figure out what was really under the covers. RAID controllers are wildly different in performance, and you really want to get only the best. We use a "white box" vendor (ASA Computers), and have been very happy with the results. They build exactly what I ask for and deliver it in about a week. They offer on-site service and warranties, but don't pressure me to buy them. I'm not locked in to anything. Their prices are good. My current configuration is a dual 4-core Intel Xeon 2.13 GHz system with 12GB memory and 12x500GB 7200RPM SATA disks, controlled by a 3WARE RAID controller with a BBU cache. The OS and WAL are on a RAID1 pair, and the Postgres database is on an 8-disk RAID10 array. That leaves two hot spare disks. I get about 7,000 TPS for pg_bench. The chassis has dual hot-swappable power supplies and dual networks for failover. It's in the neighborhood of $5,000. Craig > > Best regards, > Mark
On 09/27/2012 01:22 PM, Claudio Freire wrote: > On Thu, Sep 27, 2012 at 4:11 PM, M. D. <lists@turnkey.bz> wrote: >> At this point I'm dealing with a fairly small database of 8 to 9 GB. > ... >> The on_hand lookup table >> currently has 3 million rows after 4 years of data. > ... >> For both servers I'd have at least 32GB Ram and 4 Hard Drives in raid 10. > For a 9GB database, that amount of RAM seams like overkill to me. > Unless you expect to grow a lot faster than you've been growing, or > perhaps your middle tier consumes a lot of those 32GB, I don't see the > point there. > The middle tier does caching and can easily take up to 10GB of RAM, therefore I'm buying more.
On 9/27/2012 1:11 PM, M. D. wrote: > > I want to buy a new server, and am contemplating a Dell R710 or the > newer R720. The R710 has the x5600 series CPU, while the R720 has the > newer E5-2600 series CPU. For this the best data I've found (excepting actually running tests on the physical hardware) is to use the SpecIntRate2006 numbers, which can be found for both machines on the spec.org web site. I think the newer CPU is the clear winner with a specintrate performance of 589 vs 432. It also has a significantly larger cache. Comparing single-threaded performance, the older CPU is slightly faster (50 vs 48). That wouldn't be a big enough difference to make me pick it. The Sandy Bridge-based machine will likely use less power. http://www.spec.org/cpu2006/results/res2012q2/cpu2006-20120604-22697.html http://www.spec.org/cpu2006/results/res2012q1/cpu2006-20111219-19272.html To find more results use this page : http://www.spec.org/cgi-bin/osgresults?conf=cpu2006;op=form (enter R710 or R720 in the "system" field).
On 9/27/2012 1:37 PM, Craig James wrote: > We use a "white box" vendor (ASA Computers), and have been very happy > with the results. They build exactly what I ask for and deliver it in > about a week. They offer on-site service and warranties, but don't > pressure me to buy them. I'm not locked in to anything. Their prices > are good. I'll second that : we build our own machines from white-label parts for typically less than 1/2 the Dell list price. However, Dell does provide value to some people : for example you can point a third-party software vendor at a Dell box and demand they make their application work properly whereas they may turn their nose up at a white label box. Same goes for Operating Systems : we have spent much time debugging Linux kernel issues on white box hardware. On Dell hardware we would most likely have not hit those bugs because Red Hat tests on Dell. So YMMV...
On 09/27/2012 01:47 PM, David Boreham wrote: > On 9/27/2012 1:37 PM, Craig James wrote: >> We use a "white box" vendor (ASA Computers), and have been very happy >> with the results. They build exactly what I ask for and deliver it in >> about a week. They offer on-site service and warranties, but don't >> pressure me to buy them. I'm not locked in to anything. Their prices >> are good. > > I'll second that : we build our own machines from white-label parts > for typically less than 1/2 the Dell list price. However, Dell does > provide value to some people : for example you can point a third-party > software vendor at a Dell box and demand they make their application > work properly whereas they may turn their nose up at a white label > box. Same goes for Operating Systems : we have spent much time > debugging Linux kernel issues on white box hardware. On Dell hardware > we would most likely have not hit those bugs because Red Hat tests on > Dell. So YMMV... > I'm in Belize, so what I'm considering is from eBay, where it's unlikely that I'll get the warranty. Should I consider some other brand instead? Building my own or buying custom might be an option too, but I would not get any warranty. Dell does sell directly to Belize, but the price is so much higher than US prices that it's hardly worth the support/warranty.
On 9/27/2012 1:56 PM, M. D. wrote: > I'm in Belize, so what I'm considering is from ebay, where it's > unlikely that I'll get the warranty. Should I consider some other > brand rather? To build my own or buy custom might be an option too, > but I would not get any warranty. I don't have any recent experience with white label system vendors, but I suspect they are assembling machines from supermicro, asus, intel or tyan motherboards and enclosures, which is what we do. You can buy the hardware from suppliers such as newegg.com. It takes some time to read the manufacturer's documentation, figure out what kind of memory to buy and so on, which is basically what you're paying a white label box seller to do for you. For example here's a similar barebones system to the R720 I found with a couple minutes searching on newegg.com : http://www.newegg.com/Product/Product.aspx?Item=N82E16816117259 You could order that SKU, plus the two CPU devices, however many memory sticks you need, and drives. If you need less RAM (the Dell box allows up to 24 sticks) there are probably cheaper options. The equivalent Supermicro box looks to be somewhat less expensive : http://www.newegg.com/Product/Product.aspx?Item=N82E16816101693 When you consider downtime and the cost to ship equipment back to the supplier, a warranty doesn't have much value to me but it may be useful in your situation.
On Thursday, September 27, 2012 02:13:01 PM David Boreham wrote: > The equivalent Supermicro box looks to be somewhat less expensive : > http://www.newegg.com/Product/Product.aspx?Item=N82E16816101693 > > When you consider downtime and the cost to ship equipment back to the > supplier, a warranty doesn't have much value to me but it may be useful > in your situation. And you can probably buy 2 Supermicros for the cost of the Dell. 100% spares.
On Thu, Sep 27, 2012 at 2:31 PM, Alan Hodgson <ahodgson@simkin.ca> wrote: > On Thursday, September 27, 2012 02:13:01 PM David Boreham wrote: >> The equivalent Supermicro box looks to be somewhat less expensive : >> http://www.newegg.com/Product/Product.aspx?Item=N82E16816101693 >> >> When you consider downtime and the cost to ship equipment back to the >> supplier, a warranty doesn't have much value to me but it may be useful >> in your situation. > > And you can probably buy 2 Supermicros for the cost of the Dell. 100% spares. This 100x this. We used to buy our boxes from aberdeeninc.com and got a 5 year replacement parts warranty included. We spent ~$10k on a server that was right around $18k from dell for the same numbers and a 3 year warranty.
On 09/27/2012 01:37 PM, Craig James wrote: > I don't think you've supplied enough information for anyone to give > you a meaningful answer. What's your current configuration? Are you > I/O bound, CPU bound, memory limited, or some other problem? You need > to do a specific analysis of the queries that are causing you problems > (i.e. why do you need to upgrade at all?) My current configuration is a Dell PE 1900, E5335, 16GB RAM, 2x 250GB in RAID 0. I'm buying a new server mostly because the current one is a bit slow and I need a new gateway server, so to get faster database responses, I want to upgrade this one and use the old one as the gateway. The current system is limited to 16GB RAM, so it is basically maxed out. A query that takes 89 seconds right now is run on a regular basis (82,000 rows):

select item.item_id, item_plu.number, item.description,
  (select number from account where asset_acct = account_id),
  (select number from account where expense_acct = account_id),
  (select number from account where income_acct = account_id),
  (select dept.name from dept where dept.dept_id = item.dept_id) as dept,
  (select subdept.name from subdept where subdept.subdept_id = item.subdept_id) as subdept,
  (select sum(on_hand) from item_change where item_change.item_id = item.item_id) as on_hand,
  (select sum(on_order) from item_change where item_change.item_id = item.item_id) as on_order,
  (select sum(total_cost) from item_change where item_change.item_id = item.item_id) as total_cost
from item
join item_plu on item.item_id = item_plu.item_id and item_plu.seq_num = 0
where item.inactive_on is null
  and exists (select item_num.number from item_num where item_num.item_id = item.item_id)
  and exists (select stocked from item_store where stocked = 'Y' and inactive_on is null and item_store.item_id = item.item_id)

Explain analyse: http://explain.depesz.com/s/sGq
On 09/27/2012 02:40 PM, David Boreham wrote: > I think the newer CPU is the clear winner with a specintrate > performance of 589 vs 432. The comparisons you linked to had 24 absolute threads pitted against 32, since the newer CPUs have a higher maximum core count per CPU. That said, you're right that it has a fairly large cache. And from my experience, Intel CPU generations have been scaling incredibly well lately. (Opteron, we hardly knew ye!) We went from Dunnington to Nehalem, and it was stunning how much better the X5675 was compared to the E7450. Sandy Bridge isn't quite that much of a jump though, so if you don't need that kind of bleeding-edge, you might be able to save some cash. This is especially true since the E5-2600 series has the same TDP profile and both use 32nm lithography. Me? I'm waiting for Haswell, the next "tock" in Intel's Tick-Tock strategy. -- Shaun Thomas OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604 312-444-8534 sthomas@optionshouse.com
On 09/27/2012 03:44 PM, Scott Marlowe wrote: > This 100x this. We used to buy our boxes from aberdeeninc.com and got > a 5 year replacement parts warranty included. We spent ~$10k on a > server that was right around $18k from dell for the same numbers and a > 3 year warranty. Whatever you do, go for the Intel ethernet adaptor option. We've had so many headaches with integrated broadcom NICs. :( -- Shaun Thomas OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604 312-444-8534 sthomas@optionshouse.com
On Thu, Sep 27, 2012 at 2:46 PM, M. D. <lists@turnkey.bz> wrote: > > select item.item_id,item_plu.number,item.description, > (select number from account where asset_acct = account_id), > (select number from account where expense_acct = account_id), > (select number from account where income_acct = account_id), > (select dept.name from dept where dept.dept_id = item.dept_id) as dept, > (select subdept.name from subdept where subdept.subdept_id = > item.subdept_id) as subdept, > (select sum(on_hand) from item_change where item_change.item_id = > item.item_id) as on_hand, > (select sum(on_order) from item_change where item_change.item_id = > item.item_id) as on_order, > (select sum(total_cost) from item_change where item_change.item_id = > item.item_id) as total_cost > from item join item_plu on item.item_id = item_plu.item_id and > item_plu.seq_num = 0 > where item.inactive_on is null and exists (select item_num.number from > item_num > where item_num.item_id = item.item_id) > and exists (select stocked from item_store where stocked = 'Y' > and inactive_on is null > and item_store.item_id = item.item_id) Have you tried re-writing this query first? Is there a reason to have a bunch of subselects instead of joining the tables? What pg version are you running btw? A newer version of pg might help too.
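As an illustration of the rewrite suggested above, the three correlated sums over item_change could be computed once in a derived table and joined back in, with the dept/subdept lookups turned into outer joins. This is only a sketch assembled from the column names visible in the quoted query (the three account subselects are left out because their join columns aren't obvious from the query alone), so it would need checking against the real schema:

-- Sketch only: pre-aggregate item_change once instead of scanning it
-- three times per item; dept/subdept lookups become left joins.
select item.item_id, item_plu.number, item.description,
       d.name as dept, sd.name as subdept,
       ic.on_hand, ic.on_order, ic.total_cost
from item
join item_plu on item.item_id = item_plu.item_id and item_plu.seq_num = 0
left join dept d on d.dept_id = item.dept_id
left join subdept sd on sd.subdept_id = item.subdept_id
left join (select item_id,
                  sum(on_hand)    as on_hand,
                  sum(on_order)   as on_order,
                  sum(total_cost) as total_cost
             from item_change
            group by item_id) ic on ic.item_id = item.item_id
where item.inactive_on is null
  and exists (select 1 from item_num
               where item_num.item_id = item.item_id)
  and exists (select 1 from item_store
               where stocked = 'Y' and inactive_on is null
                 and item_store.item_id = item.item_id);

With an index on item_change (item_id), the planner can then hash or merge the pre-aggregated totals in one pass instead of re-scanning item_change for each of the 80,000 items.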
On Thu, Sep 27, 2012 at 2:50 PM, Shaun Thomas <sthomas@optionshouse.com> wrote: > On 09/27/2012 03:44 PM, Scott Marlowe wrote: > >> This 100x this. We used to buy our boxes from aberdeeninc.com and got >> a 5 year replacement parts warranty included. We spent ~$10k on a >> server that was right around $18k from dell for the same numbers and a >> 3 year warranty. > > > Whatever you do, go for the Intel ethernet adaptor option. We've had so many > headaches with integrated broadcom NICs. :( I too have had problems with broadcom, as well as with nvidia nics and most other built in nics on servers. The Intel PCI dual nic cards have been my savior in the past.
On 09/27/2012 03:55 PM, Scott Marlowe wrote: > Have you tried re-writing this query first? Is there a reason to have > a bunch of subselects instead of joining the tables? What pg version > are you running btw? A newer version of pg might help too. Wow, yeah. I was just about to say something about that. I even pasted it into a notepad and started cutting it apart, but I wasn't sure about enough of the column sources in all those subqueries. It looks like it'd be a very, very good candidate for a window function or two, and maybe a few CASE statements. But I'm about 80% certain it's not very efficient as is. -- Shaun Thomas OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604 312-444-8534 sthomas@optionshouse.com
On 9/27/2012 2:55 PM, Scott Marlowe wrote: > Whatever you do, go for the Intel ethernet adaptor option. We've had so many > >headaches with integrated broadcom NICs.:( Sound advice, but not a get out of jail card unfortunately : we had a horrible problem with the Intel e1000 driver in RHEL for several releases. Finally diagnosed it just as RH shipped a fixed driver.
On 9/27/2012 2:47 PM, Shaun Thomas wrote: > On 09/27/2012 02:40 PM, David Boreham wrote: > >> I think the newer CPU is the clear winner with a specintrate >> performance of 589 vs 432. > > The comparisons you linked to had 24 absolute threads pitted against > 32, since the newer CPUs have a higher maximum cores per CPU. That > said, you're right that it has a fairly large cache. And from my > experience, Intel CPU generations have been scaling incredibly well > lately. (Opteron, we hardly knew ye!) Yes, the "rate" spec test uses all the available cores. I'm assuming a concurrent workload, but since the single-thread performance isn't that much different between the two I think the higher number of cores, larger cache, newer design CPU is the best choice. > > We went from Dunnington to Nehalem, and it was stunning how much > better the X5675 was compared to the E7450. Sandy Bridge isn't quite > that much of a jump though, so if you don't need that kind of > bleeding-edge, you might be able to save some cash. This is especially > true since the E5-2600 series has the same TDP profile and both use > 32nm lithography. We use Opteron on a price/performance basis. Intel always seems to come up with some way to make their low-cost processors useless (such as limiting the amount of memory they can address).
Hello, from benchmarking on my read-only in-memory database, I can tell that 9.1 on an X5650 is faster than 9.2 on an E5-2440. I do not have an X5690, but I have a not-so-loaded E5-2660. If you can give me a dump and some queries, I can bench them. Nevertheless the X5690 seems more efficient on a single-threaded workload than the 2660, unless you have many clients.
On Thu, Sep 27, 2012 at 6:08 PM, David Boreham <david_list@boreham.org> wrote: >> >> We went from Dunnington to Nehalem, and it was stunning how much better >> the X5675 was compared to the E7450. Sandy Bridge isn't quite that much of a >> jump though, so if you don't need that kind of bleeding-edge, you might be >> able to save some cash. This is especially true since the E5-2600 series has >> the same TDP profile and both use 32nm lithography. > > We use Opteron on a price/performance basis. Intel always seems to come up > with some way to make their low-cost processors useless (such as limiting > the amount of memory they can address). Careful with AMD, since many (I'm not sure about the latest ones) cannot saturate the memory bus when running single-threaded. So, great if you have a high concurrent workload, quite bad if you don't.
On Thursday, September 27, 2012 03:04:51 PM David Boreham wrote: > On 9/27/2012 2:55 PM, Scott Marlowe wrote: > > Whatever you do, go for the Intel ethernet adaptor option. We've had so > > many headaches with integrated broadcom NICs. :( > Sound advice, but not a get out of jail card unfortunately : we had a > horrible problem with the Intel e1000 driver in RHEL for several releases. > Finally diagnosed it just as RH shipped a fixed driver. Yeah I've been compiling a newer one on each kernel release for a couple of years. But the hardware rocks. The Supermicro boxes also mostly have Intel network onboard, so not a problem there.
On 09/27/2012 04:08 PM, Evgeny Shishkin wrote: > from benchmarking on my r/o in memory database, i can tell that 9.1 > on x5650 is faster than 9.2 on e2440. How did you run those benchmarks? I find that incredibly hard to believe. Not only does 9.2 scale *much* better than 9.1, but the E5-2440 is a 15MB cache Sandy Bridge, as opposed to a 12MB cache Nehalem. Despite the slightly lower clock speed, you should have much better performance with 9.2 on the 2440. I know one thing you might want to check is to make sure both servers have turbo mode enabled, and power savings turned off for all CPUs. Check the BIOS for the CPU settings, because some motherboards and vendors have different defaults. I know we got inconsistent and much worse performance until we made those two changes on our HP systems. We use pgbench for benchmarking, so there's not anything I can really send you. :) -- Shaun Thomas OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604 312-444-8534 sthomas@optionshouse.com
On 09/27/2012 02:55 PM, Scott Marlowe wrote: > On Thu, Sep 27, 2012 at 2:46 PM, M. D. <lists@turnkey.bz> wrote: >> select item.item_id,item_plu.number,item.description, >> (select number from account where asset_acct = account_id), >> (select number from account where expense_acct = account_id), >> (select number from account where income_acct = account_id), >> (select dept.name from dept where dept.dept_id = item.dept_id) as dept, >> (select subdept.name from subdept where subdept.subdept_id = >> item.subdept_id) as subdept, >> (select sum(on_hand) from item_change where item_change.item_id = >> item.item_id) as on_hand, >> (select sum(on_order) from item_change where item_change.item_id = >> item.item_id) as on_order, >> (select sum(total_cost) from item_change where item_change.item_id = >> item.item_id) as total_cost >> from item join item_plu on item.item_id = item_plu.item_id and >> item_plu.seq_num = 0 >> where item.inactive_on is null and exists (select item_num.number from >> item_num >> where item_num.item_id = item.item_id) >> and exists (select stocked from item_store where stocked = 'Y' >> and inactive_on is null >> and item_store.item_id = item.item_id) > Have you tried re-writing this query first? Is there a reason to have > a bunch of subselects instead of joining the tables? What pg version > are you running btw? A newer version of pg might help too. > > This query is inside an application (Quasar Accounting) written in Qt and I don't have access to the source code. The query is cross database, so it's likely that's why it's written the way it is. The form this query is on also allows the user to add/remove columns, so it makes it a LOT easier from the application point of view to do columns as they are here. I had at one point tried to make this same query a table join, but did not notice any performance difference in pg 8.x - been a while so don't remember exactly what version. I'm currently on 9.0. I will upgrade to 9.2 once I get a new server. As noted above, I need to buy a new server anyway, so I'm going for this one and using the current as a VM server for several VMs and also a backup database server.
On 9/27/2012 3:16 PM, Claudio Freire wrote: > Careful with AMD, since many (I'm not sure about the latest ones) > cannot saturate the memory bus when running single-threaded. So, great > if you have a high concurrent workload, quite bad if you don't. > Actually we test memory bandwidth with John McCalpin's stream program. Unfortunately it is hard to find stream test results for recent machines so it can be hard to compare two boxes unless you own examples, so I didn't mention it as a useful option. But if you can find results for the machines, or ask a friend to run it for you...definitely useful information.
On Sep 28, 2012, at 1:20 AM, Shaun Thomas <sthomas@optionshouse.com> wrote: > On 09/27/2012 04:08 PM, Evgeny Shishkin wrote: > >> from benchmarking on my r/o in memory database, i can tell that 9.1 >> on x5650 is faster than 9.2 on e2440. > > How did you run those benchmarks? I find that incredibly hard to believe. Not only does 9.2 scale *much* better than 9.1, but the E5-2440 is a 15MB cache Sandy Bridge, as opposed to a 12MB cache Nehalem. Despite the slightly lower clock speed, you should have much better performance with 9.2 on the 2440. > > I know one thing you might want to check is to make sure both servers have turbo mode enabled, and power savings turned off for all CPUs. Check the BIOS for the CPU settings, because some motherboards and vendors have different defaults. I know we got inconsistent and much worse performance until we made those two changes on our HP systems. > > We use pgbench for benchmarking, so there's not anything I can really send you. :) Yes, on pgbench utilising the CPU at 80-90% the E5-2660 is better, it goes to 140k read-only TPS, so scalability is very very good. But I am talking about a real OLTP read-only query. Single threaded. And CPU clock was the real winner.
Please don't take responses off list, someone else may have an insight I'd miss. On Thu, Sep 27, 2012 at 3:20 PM, M. D. <lists@turnkey.bz> wrote: > On 09/27/2012 02:55 PM, Scott Marlowe wrote: >> >> On Thu, Sep 27, 2012 at 2:46 PM, M. D. <lists@turnkey.bz> wrote: >>> >>> select item.item_id,item_plu.number,item.description, >>> (select number from account where asset_acct = account_id), >>> (select number from account where expense_acct = account_id), >>> (select number from account where income_acct = account_id), >>> (select dept.name from dept where dept.dept_id = item.dept_id) as dept, >>> (select subdept.name from subdept where subdept.subdept_id = >>> item.subdept_id) as subdept, >>> (select sum(on_hand) from item_change where item_change.item_id = >>> item.item_id) as on_hand, >>> (select sum(on_order) from item_change where item_change.item_id = >>> item.item_id) as on_order, >>> (select sum(total_cost) from item_change where item_change.item_id = >>> item.item_id) as total_cost >>> from item join item_plu on item.item_id = item_plu.item_id and >>> item_plu.seq_num = 0 >>> where item.inactive_on is null and exists (select item_num.number from >>> item_num >>> where item_num.item_id = item.item_id) >>> and exists (select stocked from item_store where stocked = 'Y' >>> and inactive_on is null >>> and item_store.item_id = item.item_id) >> >> Have you tried re-writing this query first? Is there a reason to have >> a bunch of subselects instead of joining the tables? What pg version >> are you running btw? A newer version of pg might help too. >> > This query is inside an application (Quasar Accounting) written in Qt and I > don't have access to the source code. The query is cross database, so it's > likely that's why it's written the way it is. The form this query is on also > allows the user to add/remove columns, so it makes it a LOT easier from the > application point of view to do columns as they are here. I had at one > point tried to make this same query a table join, but did not notice any > performance difference in pg 8.x - been a while so don't remember exactly > what version. Have you tried cranking up work_mem to see if it helps this query at least avoid a nested loop on 80k rows? If they'd fit in memory and use bitmap hashes it should be MUCH faster than a nested loop. > > I'm currently on 9.0. I will upgrade to 9.2 once I get a new server. As > noted above, I need to buy a new server anyway, so I'm going for this one > and using the current as a VM server for several VMs and also a backup > database server. Well, being on 9.0 should make a big diff from 8.2. But again, without enough work_mem for the query to use a bitmap hash or something more efficient than a nested loop it's gonna be slow.
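A quick way to try the work_mem suggestion above without touching postgresql.conf is to raise it for a single session and re-run the problem query under EXPLAIN ANALYZE; the value below is only an illustrative guess for a 16GB box, not a recommendation:

-- Session-local experiment only (reverts when the session ends);
-- the 256MB figure is an arbitrary example value.
set work_mem = '256MB';

-- Re-run the slow query (or a representative piece of it, as below)
-- and compare the plan and timings against the earlier
-- explain.depesz.com output to see whether the nested loops give way
-- to hash or bitmap-based plans.
explain (analyze, buffers)
select item_id, sum(on_hand), sum(on_order), sum(total_cost)
  from item_change
 group by item_id;

-- Put the setting back for the rest of the session.
reset work_mem;

If the plan does improve, the setting can then be applied per role or per database (for example with ALTER ROLE ... SET work_mem) rather than globally, since work_mem is allocated per sort/hash node and per connection.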
On Thu, Sep 27, 2012 at 3:16 PM, Claudio Freire <klaussfreire@gmail.com> wrote: > On Thu, Sep 27, 2012 at 6:08 PM, David Boreham <david_list@boreham.org> wrote: >>> >>> We went from Dunnington to Nehalem, and it was stunning how much better >>> the X5675 was compared to the E7450. Sandy Bridge isn't quite that much of a >>> jump though, so if you don't need that kind of bleeding-edge, you might be >>> able to save some cash. This is especially true since the E5-2600 series has >>> the same TDP profile and both use 32nm lithography. >> >> We use Opteron on a price/performance basis. Intel always seems to come up >> with some way to make their low-cost processors useless (such as limiting >> the amount of memory they can address). > > Careful with AMD, since many (I'm not sure about the latest ones) > cannot saturate the memory bus when running single-threaded. So, great > if you have a high concurrent workload, quite bad if you don't. Conversely, we often got MUCH better parallel performance from our quad 12 core opteron servers than I could get on a dual 8 core xeon at the time. The newest quad 10 core Intels are about as fast as the quad 12 core opteron from 3 years ago. So for parallel operation, do remember to look at the opteron. It was much cheaper to get highly parallel operation on the opterons than the xeons at the time we got the quad 12 core machine at my last job.
On Thu, Sep 27, 2012 at 3:36 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote: > Conversely, we often got MUCH better parallel performance from our > quad 12 core opteron servers than I could get on a dual 8 core xeon at > the time. Clarification that the two base machines were about the same price. 48 opteron cores (2.2GHz) or 16 xeon cores at ~2.6GHz. It's been a few years, I'm not gonna testify to the exact numbers in court. But the performance at 32 to 100 threads was WAY better on the 48 core opteron machine, never really breaking down even at 120+ threads. The Intel machine hit a very real knee of performance and dropped off really badly after about 40 threads (they were hyperthreaded).
On Sep 28, 2012, at 1:36 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote: > On Thu, Sep 27, 2012 at 3:16 PM, Claudio Freire <klaussfreire@gmail.com> wrote: >> On Thu, Sep 27, 2012 at 6:08 PM, David Boreham <david_list@boreham.org> wrote: >>>> >>>> We went from Dunnington to Nehalem, and it was stunning how much better >>>> the X5675 was compared to the E7450. Sandy Bridge isn't quite that much of a >>>> jump though, so if you don't need that kind of bleeding-edge, you might be >>>> able to save some cash. This is especially true since the E5-2600 series has >>>> the same TDP profile and both use 32nm lithography. >>> >>> We use Opteron on a price/performance basis. Intel always seems to come up >>> with some way to make their low-cost processors useless (such as limiting >>> the amount of memory they can address). >> >> Careful with AMD, since many (I'm not sure about the latest ones) >> cannot saturate the memory bus when running single-threaded. So, great >> if you have a high concurrent workload, quite bad if you don't. > > Conversely, we often got MUCH better parallel performance from our > quad 12 core opteron servers than I could get on a dual 8 core xeon at > the time. The newest quad 10 core Intels are about as fast as the > quad 12 core opteron from 3 years ago. So for parallel operation, do > remember to look at the opteron. It was much cheaper to get highly > parallel operation on the opterons than the xeons at the time we got > the quad 12 core machine at my last job. > But what about latency, not throughput?
On 09/27/2012 04:39 PM, Scott Marlowe wrote: > Clarification that the two base machines were about the same price. > 48 opteron cores (2.2GHz) or 16 xeon cores at ~2.6GHz. It's been a > few years, I'm not gonna testify to the exact numbers in court. Same here. We got really good performance on Opteron "a few years ago" too. :) But some more anecdotes... with the 4x8 E7450 Dunnington, our performance was OK. With the 2x6x2 X5675 Nehalem, it was ridiculous. Half the cores, 2.5x the speed, so far as pgbench was concerned. On every workload, on every level of concurrency I tried. Like you said, the 7450 dropped off at higher concurrency, but the 5675 kept on trucking. That's why I qualified my statement about Intel CPUs as "lately." They really seem to have cleaned up their server architecture. -- Shaun Thomas OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604 312-444-8534 sthomas@optionshouse.com
On Thu, Sep 27, 2012 at 3:40 PM, Evgeny Shishkin <itparanoia@gmail.com> wrote: > > On Sep 28, 2012, at 1:36 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote: > >> On Thu, Sep 27, 2012 at 3:16 PM, Claudio Freire <klaussfreire@gmail.com> wrote: >>> On Thu, Sep 27, 2012 at 6:08 PM, David Boreham <david_list@boreham.org> wrote: >>>>> >>>>> We went from Dunnington to Nehalem, and it was stunning how much better >>>>> the X5675 was compared to the E7450. Sandy Bridge isn't quite that much of a >>>>> jump though, so if you don't need that kind of bleeding-edge, you might be >>>>> able to save some cash. This is especially true since the E5-2600 series has >>>>> the same TDP profile and both use 32nm lithography. >>>> >>>> We use Opteron on a price/performance basis. Intel always seems to come up >>>> with some way to make their low-cost processors useless (such as limiting >>>> the amount of memory they can address). >>> >>> Careful with AMD, since many (I'm not sure about the latest ones) >>> cannot saturate the memory bus when running single-threaded. So, great >>> if you have a high concurrent workload, quite bad if you don't. >> >> Conversely, we often got MUCH better parallel performance from our >> quad 12 core opteron servers than I could get on a dual 8 core xeon at >> the time. The newest quad 10 core Intels are about as fast as the >> quad 12 core opteron from 3 years ago. So for parallel operation, do >> remember to look at the opteron. It was much cheaper to get highly >> parallel operation on the opterons than the xeons at the time we got >> the quad 12 core machine at my last job. > > But what about latency, not throughput? It means little when you're building a server to handle literally thousands of queries per second from hundreds of active connections. The Intel box would have simply fallen over under the load we were handling on the 48 core opteron at the time. Note that under maximum load we saw load factors in the 20 to 100 range on that opteron box and still got very good response times (average latency on most queries was still in the single digits of milliseconds). For single threaded or only a few threads, yeah, the Intel was slightly faster, but as soon as the real load of our web site hit the machine it wasn't even close.
On Thu, Sep 27, 2012 at 3:44 PM, Shaun Thomas <sthomas@optionshouse.com> wrote: > On 09/27/2012 04:39 PM, Scott Marlowe wrote: > >> Clarification that the two base machines were about the same price. >> 48 opteron cores (2.2GHz) or 16 xeon cores at ~2.6GHz. It's been a >> few years, I'm not gonna testify to the exact numbers in court. > > > Same here. We got really good performance on Opteron "a few years ago" too. > :) > > But some more anecdotes... with the 4x8 E7450 Dunnington, our performance > was OK. With the 2x6x2 X5675 Nehalem, it was ridiculous. Half the cores, > 2.5x the speed, so far as pgbench was concerned. On every workload, on every > level of concurrency I tried. Like you said, the 7450 dropped off at higher > concurrency, but the 5675 kept on trucking. > > That's why I qualified my statement about Intel CPUs as "lately." They > really seem to have cleaned up their server architecture. Yeah, Intel's made a lot of headway on multi-core architecture since then. But the 5620 etc series of the time were still pretty meh at high concurrency compared to the opteron. The latest ones, which I've tested now (40 hyperthreaded cores i.e 80 virtual cores) are definitely faster than the now 4 year old 48 core opterons. But at a much higher cost for a pretty moderate (20 to 30%) increase in performance. OTOH, they don't "break down" past 40 to 100 connections any more, so that's the big improvement to me. How the curve looks like heading to 60+ threads is mildly interesting, but how the server performs as you go past it was what worried me before. Now both architectures seem to behave much better in such "overload" scenarios.
On Thu, Sep 27, 2012 at 3:28 PM, David Boreham <david_list@boreham.org> wrote: > On 9/27/2012 3:16 PM, Claudio Freire wrote: >> >> Careful with AMD, since many (I'm not sure about the latest ones) >> cannot saturate the memory bus when running single-threaded. So, great >> if you have a high concurrent workload, quite bad if you don't. >> > Actually we test memory bandwidth with John McCalpin's stream program. > Unfortunately it is hard to find stream test results for recent machines so > it can be hard to compare two boxes unless you own examples, so I didn't > mention it as a useful option. But if you can find results for the machines, > or ask a friend to run it for you...definitely useful information. IIRC the most recent tests from Greg Smith show the latest model Intels winning by a fair bit over the opterons. Before that though the 48 core opteron servers were winning. It tends to go back and forth. Dollar for dollar, the Opterons are usually the better value now, while the Intels give the absolute best performance money can buy.
On 09/27/2012 10:22 PM, M. D. wrote: > On 09/27/2012 02:55 PM, Scott Marlowe wrote: >> On Thu, Sep 27, 2012 at 2:46 PM, M. D. <lists@turnkey.bz> wrote: >>> select item.item_id,item_plu.number,item.description, >>> (select number from account where asset_acct = account_id), >>> (select number from account where expense_acct = account_id), >>> (select number from account where income_acct = account_id), >>> (select dept.name from dept where dept.dept_id = item.dept_id) as dept, >>> (select subdept.name from subdept where subdept.subdept_id = >>> item.subdept_id) as subdept, >>> (select sum(on_hand) from item_change where item_change.item_id = >>> item.item_id) as on_hand, >>> (select sum(on_order) from item_change where item_change.item_id = >>> item.item_id) as on_order, >>> (select sum(total_cost) from item_change where item_change.item_id = >>> item.item_id) as total_cost >>> from item join item_plu on item.item_id = item_plu.item_id and >>> item_plu.seq_num = 0 >>> where item.inactive_on is null and exists (select item_num.number from >>> item_num >>> where item_num.item_id = item.item_id) >>> and exists (select stocked from item_store where stocked = 'Y' >>> and inactive_on is null >>> and item_store.item_id = item.item_id) >> Have you tried re-writing this query first? Is there a reason to have >> a bunch of subselects instead of joining the tables? What pg version >> are you running btw? A newer version of pg might help too. >> >> > This query is inside an application (Quasar Accounting) written in Qt and I don't have access to the source code. Is there any prospect of the planner/executor being taught to merge each of those groups of three index scans, to aid this sort of poor query? -- Jeremy
On Thu, Sep 27, 2012 at 03:50:33PM -0500, Shaun Thomas wrote: > On 09/27/2012 03:44 PM, Scott Marlowe wrote: > > >This 100x this. We used to buy our boxes from aberdeeninc.com and got > >a 5 year replacement parts warranty included. We spent ~$10k on a > >server that was right around $18k from dell for the same numbers and a > >3 year warranty. > > Whatever you do, go for the Intel ethernet adaptor option. We've had > so many headaches with integrated broadcom NICs. :( > +++1 Sigh. Ken
On 9/27/2012 1:56 PM, M. D. wrote: >> >> I'm in Belize, so what I'm considering is from ebay, where it's unlikely >> that I'll get the warranty. Should I consider some other brand rather? To >> build my own or buy custom might be an option too, but I would not get any >> warranty. Your best warranty would be to have the confidence to do your own repairs, and to have the parts on hand. I'd seriously consider putting your own system together. Maybe go to a few sites with pre-configured machines and see what parts they use. Order those, screw the thing together yourself, and put a spare of each critical part on your shelf. A warranty is useless if you can't use it in a timely fashion. And you could easily get better reliability by spending the money on spare parts. I'd bet that for the price of a warranty you can buy a spare motherboard, a few spare disks, a memory stick or two, a spare power supply, and maybe even a spare 3WARE RAID controller. Craig
On 9/28/2012 9:46 AM, Craig James wrote: > Your best warranty would be to have the confidence to do your own > repairs, and to have the parts on hand. I'd seriously consider > putting your own system together. Maybe go to a few sites with > pre-configured machines and see what parts they use. Order those, > screw the thing together yourself, and put a spare of each critical > part on your shelf. > This is what I did for years, but after taking my old parts collection to the landfill a few times, realized I may as well just buy N+1 machines and keep zero spares on the shelf. That way I get a spare machine available for use immediately, and I know the parts are working (parts on the shelf may be defective). If something breaks, I use the spare machine until the replacement parts arrive. Note in addition that a warranty can be extremely useful in certain organizations as a vehicle of blame avoidance (this may be its primary purpose in fact). If I buy a bunch of machines that turn out to have buggy NICs, well that's my fault and I can kick myself since I own the company, stay up late into the night reading kernel code, and buy new NICs. If I have an evil Dilbertian boss, then well...I'd be seriously thinking about buying Dell boxes in order to blame Dell rather than myself, and be able to say "everything is warrantied" if badness goes down. Just saying...
On 09/28/2012 09:57 AM, David Boreham wrote: > On 9/28/2012 9:46 AM, Craig James wrote: >> Your best warranty would be to have the confidence to do your own >> repairs, and to have the parts on hand. I'd seriously consider >> putting your own system together. Maybe go to a few sites with >> pre-configured machines and see what parts they use. Order those, >> screw the thing together yourself, and put a spare of each critical >> part on your shelf. >> > This is what I did for years, but after taking my old parts collection > to the landfill a few times, realized I may as well just buy N+1 > machines and keep zero spares on the shelf. That way I get a spare > machine available for use immediately, and I know the parts are > working (parts on the shelf may be defective). If something breaks, I > use the spare machine until the replacement parts arrive. > > Note in addition that a warranty can be extremely useful in certain > organizations as a vehicle of blame avoidance (this may be its primary > purpose in fact). If I buy a bunch of machines that turn out to have > buggy NICs, well that's my fault and I can kick myself since I own the > company, stay up late into the night reading kernel code, and buy new > NICs. If I have an evil Dilbertian boss, then well...I'd be seriously > thinking about buying Dell boxes in order to blame Dell rather than > myself, and be able to say "everything is warrantied" if badness goes > down. Just saying... > I'm kinda in the latter position. Dell is the only thing that is trusted in my organisation. If I built my own, I would be fully blamed for anything going wrong in the next 3 years. Thanks everyone for your input. Now my final choice comes down to whether my budget allows for the latest and fastest; otherwise I'm going for the X5690. I don't have hundreds of users, so I think the X5690 should do a pretty good job handling the load.
On Fri, Sep 28, 2012 at 11:33 AM, M. D. <lists@turnkey.bz> wrote: > On 09/28/2012 09:57 AM, David Boreham wrote: >> >> On 9/28/2012 9:46 AM, Craig James wrote: >>> >>> Your best warranty would be to have the confidence to do your own >>> repairs, and to have the parts on hand. I'd seriously consider >>> putting your own system together. Maybe go to a few sites with >>> pre-configured machines and see what parts they use. Order those, >>> screw the thing together yourself, and put a spare of each critical >>> part on your shelf. >>> >> This is what I did for years, but after taking my old parts collection to >> the landfill a few times, realized I may as well just buy N+1 machines and >> keep zero spares on the shelf. That way I get a spare machine available for >> use immediately, and I know the parts are working (parts on the shelf may be >> defective). If something breaks, I use the spare machine until the >> replacement parts arrive. >> >> Note in addition that a warranty can be extremely useful in certain >> organizations as a vehicle of blame avoidance (this may be its primary >> purpose in fact). If I buy a bunch of machines that turn out to have buggy >> NICs, well that's my fault and I can kick myself since I own the company, >> stay up late into the night reading kernel code, and buy new NICs. If I have >> an evil Dilbertian boss, then well...I'd be seriously thinking about buying >> Dell boxes in order to blame Dell rather than myself, and be able to say >> "everything is warrantied" if badness goes down. Just saying... >> > I'm kinda in the latter shoes. Dell is the only thing that is trusted in my > organisation. If I would build my own, I would be fully blamed for anything > going wrong in the next 3 years. Thanks everyone for your input. Now my > final choice will be if my budget allows for the latest and fastest, else > I'm going for the x5690. I don't have hundreds of users, so I think the > x5690 should do a pretty good job handling the load. If people in your organization trust Dell, they just haven't dealt with them enough.
>________________________________ > From: M. D. <lists@turnkey.bz> >To: pgsql-performance@postgresql.org >Sent: Friday, 28 September 2012, 18:33 >Subject: Re: [PERFORM] hardware advice > >On 09/28/2012 09:57 AM, David Boreham wrote: >> On 9/28/2012 9:46 AM, Craig James wrote: >>> Your best warranty would be to have the confidence to do your own >>> repairs, and to have the parts on hand. I'd seriously consider >>> putting your own system together. Maybe go to a few sites with >>> pre-configured machines and see what parts they use. Order those, >>> screw the thing together yourself, and put a spare of each critical >>> part on your shelf. >>> >> This is what I did for years, but after taking my old parts collection to the landfill a few times, realized I may as well just buy N+1 machines and keep zero spares on the shelf. That way I get a spare machine available for use immediately, and I know the parts are working (parts on the shelf may be defective). If something breaks, I use the spare machine until the replacement parts arrive. >> >> Note in addition that a warranty can be extremely useful in certain organizations as a vehicle of blame avoidance (this may be its primary purpose in fact). If I buy a bunch of machines that turn out to have buggy NICs, well that's my fault and I can kick myself since I own the company, stay up late into the night reading kernel code, and buy new NICs. If I have an evil Dilbertian boss, then well...I'd be seriously thinking about buying Dell boxes in order to blame Dell rather than myself, and be able to say "everything is warrantied" if badness goes down. Just saying... >> >I'm kinda in the latter shoes. Dell is the only thing that is trusted in my organisation. If I would build my own, I would be fully blamed for anything going wrong in the next 3 years. Thanks everyone for your input. Now my final choice will be if my budget allows for the latest and fastest, else I'm going for the x5690. I don't have hundreds of users, so I think the x5690 should do a pretty good job handling the load. > > Having plenty of experience with Dell, I'd urge you to reconsider. All the Dell servers we've had have arrived hideously misconfigured, and tech support gets you nowhere. Once we've rejigged the hardware ourselves, maybe replacing a part or two, they've performed okay. Reliability has been okay, however one of our newer R910s recently all of a sudden went dead to the world; no prior symptoms showing in our hardware and software monitoring, no errors in the OS logs, nothing in the Dell DRAC logs. After a hard reset it's back up as if nothing happened, and it's an issue whose cause I'm none the wiser about. Not good peace of mind. Look around and find another vendor, even if your company has to pay more for you to have that blame avoidance.
> From: pgsql-performance-owner@postgresql.org [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Glyn Astill
> Sent: Tuesday, October 02, 2012 4:21 AM
> To: M. D.; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] hardware advice
>
>> From: M. D. <lists@turnkey.bz>
>> To: pgsql-performance@postgresql.org
>> Sent: Friday, 28 September 2012, 18:33
>> Subject: Re: [PERFORM] hardware advice
>>
>> On 09/28/2012 09:57 AM, David Boreham wrote:
>>> On 9/28/2012 9:46 AM, Craig James wrote:
>>>> Your best warranty would be to have the confidence to do your own
>>>> repairs, and to have the parts on hand. I'd seriously consider
>>>> putting your own system together. Maybe go to a few sites with
>>>> pre-configured machines and see what parts they use. Order those,
>>>> screw the thing together yourself, and put a spare of each critical
>>>> part on your shelf.
>>>>
>>> This is what I did for years, but after taking my old parts collection to the landfill a few times, realized I may as well just buy N+1 machines and keep zero spares on the shelf. That way I get a spare machine available for use immediately, and I know the parts are working (parts on the shelf may be defective). If something breaks, I use the spare machine until the replacement parts arrive.
>>>
>>> Note in addition that a warranty can be extremely useful in certain organizations as a vehicle of blame avoidance (this may be its primary purpose in fact). If I buy a bunch of machines that turn out to have buggy NICs, well that's my fault and I can kick myself since I own the company, stay up late into the night reading kernel code, and buy new NICs. If I have an evil Dilbertian boss, then well...I'd be seriously thinking about buying Dell boxes in order to blame Dell rather than myself, and be able to say "everything is warrantied" if badness goes down. Just saying...
>>>
>>I'm kinda in the latter shoes. Dell is the only thing that is trusted in my organisation. If I would build my own, I would be fully blamed for anything going wrong in the next 3 years. Thanks everyone for your input. Now my final choice will be if my budget allows for the latest and fastest, else I'm going for the x5690. I don't have hundreds of users, so I think the x5690 should do a pretty good job handling the load.
>>
>>
>
> Having plenty experience with Dell I'd urge you reconsider. All the Dell servers
> we've had have arrived hideously misconfigured, and tech support gets you
> nowhere. Once we've rejigged the hardware ourselves, maybe replacing a
> part or two they've performed okay.
>
> Reliability has been okay, however one of our newer R910s recently all
> of a sudden went dead to the world; no prior symptoms showing in our
> hardware and software monitoring, no errors in the os logs, nothing in
> the dell drac logs. After a hard reset it's back up as if nothing
> happened, and it's an issue I'm none the wiser to the cause. Not good
> piece of mind.
>
> Look around and find another vendor, even if your company has to pay
> more for you to have that blame avoidance.
We're currently using Dell and have had enough problems to think about switching.
What about HP?
Dan Franklin
On Tue, Oct 2, 2012 at 10:51:46AM -0400, Franklin, Dan (FEN) wrote: > > Look around and find another vendor, even if your company has to pay > > > more for you to have that blame avoidance. > > We're currently using Dell and have had enough problems to think about > switching. > > What about HP? If you need a big vendor, I think HP is a good choice. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
On 10/2/2012 2:20 AM, Glyn Astill wrote: > newer R910s recently all of a sudden went dead to the world; no prior symptoms > showing in our hardware and software monitoring, no errors in the os logs, > nothing in the dell drac logs. After a hard reset it's back up as if > nothing happened, and it's an issue I'm none the wiser to the cause. Not > good piece of mind. This could be an OS bug rather than a hardware problem.
On Tue, Oct 2, 2012 at 9:14 AM, Bruce Momjian <bruce@momjian.us> wrote: > On Tue, Oct 2, 2012 at 10:51:46AM -0400, Franklin, Dan (FEN) wrote: >> We're currently using Dell and have had enough problems to think about >> switching. >> >> What about HP? > > If you need a big vendor, I think HP is a good choice. This brings up a point I make sometimes to folks. Big companies can get great treatment from big vendors. When you work somewhere that orders servers by the truckload, you need a vendor who can fill trucks with servers in a day's notice, and send you a hundred different replacement parts the next. Conversely, if you are a smaller company that orders a dozen or so servers a year, then often a big vendor is not the best match. You're just a drop in the ocean to them. A small vendor is often a much better match here. They can carefully test those two 48 core opteron servers with 100 drives over a week's time to make sure it works the way you need it to. It might take them four weeks to build a big specialty box, but it will usually get built right and for a decent price. Also the sales people will usually be more knowledgeable about the machines they sell. Recent job: 20 or fewer servers ordered a year, boutique shop for them (aberdeeninc in this case). Other recent job: 20 or more servers a week. Big reseller (not at liberty to release the name).
----- Original Message ----- > From: David Boreham <david_list@boreham.org> > To: "pgsql-performance@postgresql.org" <pgsql-performance@postgresql.org> > Cc: > Sent: Tuesday, 2 October 2012, 16:14 > Subject: Re: [PERFORM] hardware advice > > On 10/2/2012 2:20 AM, Glyn Astill wrote: >> newer R910s recently all of a sudden went dead to the world; no prior > symptoms >> showing in our hardware and software monitoring, no errors in the os logs, >> nothing in the dell drac logs. After a hard reset it's back up as if >> nothing happened, and it's an issue I'm none the wiser to the > cause. Not >> good piece of mind. > This could be an OS bug rather than a hardware problem. Yeah actually I'm leaning towards this being a specific bug in the linux kernel. Everything else I said still stands though.