Re: Fusion-io ioDrive - Mailing list pgsql-performance
From | Jonah H. Harris
---|---
Subject | Re: Fusion-io ioDrive
Date |
Msg-id | 36e682920807070809lbce9225wd1c67978d063dd01@mail.gmail.com
In response to | Re: Fusion-io ioDrive ("Merlin Moncure" <mmoncure@gmail.com>)
Responses | Re: Fusion-io ioDrive
List | pgsql-performance
On Mon, Jul 7, 2008 at 9:23 AM, Merlin Moncure <mmoncure@gmail.com> wrote:

> I have a lot of problems with your statements. First of all, we are
> not really talking about 'RAM' storage... I think your comments would
> be more on point if we were talking about mounting database storage
> directly from the server memory, for example. Server memory and cpu are
> involved to the extent that the o/s uses them for caching and
> filesystem things and inside the device driver.

I'm not sure how those cards work, but my guess is that the CPU will go
100% busy (with near-zero I/O wait) on any sizable workload. In this
case, the pgbench configuration being used is quite small and probably
won't show this behavior.

> Also, your comments seem to indicate that having a slower device leads
> to higher concurrency because it allows the process to yield and do
> other things. This is IMO simply false.

Argue all you want, but this is fairly well-known (20+ year-old) behavior.

> With faster storage, cpu load will increase, but only because the overall
> system throughput increases and cpu/memory 'work' increases in terms
> of overall system activity.

Again, I said that response times, and with them throughput, would improve.
I'd like to see your argument for how you can handle more CPU-only
operations when 0% of the CPU is free.

> Presumably, as storage approaches the speed of main system memory,
> the algorithms for dealing with it will become simpler (no need for
> acrobatics to try to make everything sequential) and thus faster.

We'll have to see.

> I also find the remarks about software 'optimizing' for strict hardware
> assumptions (L1+L2 cache) a little suspicious. In some old programs I
> remember keeping a giant C 'union' of critical structures that was
> exactly 8k to fit in the 486 cpu cache. In modern terms I think that
> type of programming (sans some specialized environments) is usually
> counter-productive... I think PostgreSQL's approach of deferring as
> much work as possible to the o/s is a great approach.

All of the major database vendors still see immense value in optimizing
their algorithms and memory structures for specific platforms and CPU
caches. Given that they're *paying* highly specialized industry
professionals to optimize in this way, I would hesitate to say there
isn't any value in it. That said, Postgres doesn't have those low-level
resources, so for the most part I have to agree that it has to rely on
the OS.

-Jonah
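For context, the kind of cache-conscious layout being argued about here can be shown in a few lines of C. The sketch below is purely illustrative and is not code from PostgreSQL or any vendor's engine: the 64-byte `CACHE_LINE` value, the `hot_counters`/`padded_slot` names, and the GCC/Clang `aligned` attribute are all assumptions for the example. It packs a structure's hot fields together so one line fetch brings them all in, and pads per-core slots to a full line to avoid false sharing.

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64   /* assumed L1 cache-line size on most x86 CPUs */

/* Hot fields kept together so a single line fetch brings in all of them. */
struct hot_counters {
    uint64_t hits;
    uint64_t misses;
    uint64_t evictions;            /* 24 bytes of hot data */
};

/*
 * One slot per CPU, padded and aligned to a full cache line so that
 * updates from one core never invalidate another core's line
 * (false sharing).  Assumes sizeof(struct hot_counters) < CACHE_LINE.
 */
struct padded_slot {
    struct hot_counters c;
    char pad[CACHE_LINE - sizeof(struct hot_counters)];
} __attribute__((aligned(CACHE_LINE)));   /* GCC/Clang-specific attribute */

int main(void)
{
    struct padded_slot slots[4];           /* e.g. one slot per core */

    slots[0].c.hits = 1;   /* touching slot 0 never dirties slot 1's line */
    printf("sizeof(hot_counters) = %zu\n", sizeof(struct hot_counters));
    printf("sizeof(padded_slot)  = %zu\n", sizeof(struct padded_slot));
    return 0;
}
```

Whether this sort of micro-layout work still pays off outside of very hot code paths is exactly the point being debated above.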