Thread: Tips for a system with _extremely_ slow IO?
Any advice for settings for extremely IO constrained systems?

A demo I've set up for sales seems to be spending much of its time in
disk wait states.

The particular system I'm working with is:
  Ext3 on Debian inside Microsoft VirtualPC on NTFS
  on Windows XP on the laptops of our sales team.

Somewhat surprisingly, CPU performance is close to native; but disk IO
is much worse - probably orders of magnitude worse - since there are so
many layers of filesystems involved. Unfortunately, no, I don't think
the sales guys will upgrade to BSD. :)

The database is too large to fit entirely in memory (3GB of spatial data
using PostGIS); and it has relatively large updates (people can add
"layers" consisting of perhaps 10000 points, lines, and polygons out of
a million or so possibilities - they do this by doing 10K inserts into
tables with PostGIS geometry columns).

Steps I've already taken (collected into one sketch after this message):

* Gave Virtual PC as much memory as possible (1/2 gig).

* Tuned postgresql.conf:
    increased effective_cache_size to 10000 (tested a few values with this workload)
    reduced cpu_index_tuple_cost to 0.0005 (encourages indexes, which may reduce disk hits)
    decreased random_page_cost to 2 (the fragmented NTFS means many sequential accesses are probably random accesses anyway)
    increased work_mem to 15000 (sorting on disk was very VERY amazingly slow)
    increased shared_buffers to 3000 (guess)

* Tuned ext3 (yeah, I'll try JFS or XFS next):
    journal_data_writeback (== minimize journaling?)
    commit=600,noatime in fstab

* Tuned the VM:
    echo 60000 > /proc/sys/vm/dirty_expire_centisecs
    echo 70 > /proc/sys/vm/dirty_ratio

It seems that for this workload the two biggest benefits were
"commit=600" plus writeback journaling for ext3, and
"echo 60000 > /proc/sys/vm/dirty_expire_centisecs". If I understand
right, this combination lets dirty pages sit in memory far longer than
the defaults -- and I guess it delays my bad IO times to the point in
the sales guy's presentation when he's playing with PowerPoint. :)

Much of this tuning was guesswork, but it did make the demo go from
"unacceptable" to "reasonable". Were any of my guesses particularly bad,
and maybe doing more harm than good? Any more ideas on how to deal with
a pathologically slow IO system?

   Ron
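For reference, here is a sketch of the settings described above collected
in one place. The postgresql.conf values are the ones given in the
message; the device name and boot-script location are placeholders, and
since data=writeback generally cannot be applied to an already-mounted
root filesystem from fstab, it is shown here as a tune2fs default mount
option (presumably what "journal_data_writeback" refers to):

  # postgresql.conf (values as tested above)
  effective_cache_size = 10000
  cpu_index_tuple_cost = 0.0005
  random_page_cost     = 2
  work_mem             = 15000
  shared_buffers       = 3000

  # /etc/fstab -- /dev/hda1 is a placeholder device name
  /dev/hda1  /  ext3  defaults,noatime,commit=600  0  1

  # make writeback journaling the default mount option for the root fs
  tune2fs -o journal_data_writeback /dev/hda1

  # VM writeback tuning, e.g. from a boot script such as /etc/rc.local
  echo 60000 > /proc/sys/vm/dirty_expire_centisecs
  echo 70    > /proc/sys/vm/dirty_ratio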
On Fri, Dec 17, 2004 at 11:51:12PM -0800, Ron Mayer wrote:
> Any advice for settings for extremely IO constrained systems?
>
> A demo I've set up for sales seems to be spending much of its time in
> disk wait states.
>
> The particular system I'm working with is:
>   Ext3 on Debian inside Microsoft VirtualPC on NTFS
>   on Windows XP on the laptops of our sales team.
>
> Somewhat surprisingly, CPU performance is close to native; but disk IO
> is much worse - probably orders of magnitude worse - since there are so
> many layers of filesystems involved. Unfortunately, no, I don't think
> the sales guys will upgrade to BSD. :)
>
> The database is too large to fit entirely in memory (3GB of spatial data
> using PostGIS); and it has relatively large updates (people can add
> "layers" consisting of perhaps 10000 points, lines, and polygons out of
> a million or so possibilities - they do this by doing 10K inserts into
> tables with PostGIS geometry columns).

I've found VirtualPC to be somewhat slower than VMWare for some things
(and faster for others) and less friendly to a Linux guest OS. Try an
identical build running inside VMWare.

Can you run the VM using a native disk partition, rather than one
emulated by a big NTFS file?

Even if your application needs to run under Linux, can you run the
database directly on XP (8.0RC2 hot off the presses...) and connect to
it from the Linux VM?

Cheers,
  Steve
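If the database does end up running natively on the XP host, the
connection from the Linux guest is a small configuration change. A
minimal sketch, assuming 8.0 on the host; the host address, user, and
database names below are made up for illustration:

  # postgresql.conf on the XP host: accept TCP connections
  listen_addresses = '*'

  # pg_hba.conf on the XP host: allow the guest's subnet (example addresses)
  host  all  all  192.168.131.0/24  md5

  # from the Linux guest
  psql -h 192.168.131.1 -U demo_user demo_db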
On Fri, 2004-12-17 at 23:51 -0800, Ron Mayer wrote:
> Any advice for settings for extremely IO constrained systems?
>
> A demo I've set up for sales seems to be spending much of its time in
> disk wait states.
>
> The particular system I'm working with is:
>   Ext3 on Debian inside Microsoft VirtualPC on NTFS
>   on Windows XP on the laptops of our sales team.

As this is only for demo purposes, you might consider turning fsync off,
although I have no idea if it would have any effect on your setup.

gnari
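For what it's worth, that is a one-line change in postgresql.conf, and
since the data is a throwaway demo the usual caveat (a crash or power
loss can corrupt the database) may be acceptable here:

  # postgresql.conf -- demo use only: a crash can leave the cluster corrupt
  fsync = off

With fsync off the server no longer waits for WAL writes to reach the
(many-layered) disk on each commit, which is exactly the kind of wait
this setup seems to be spending its time in.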
> > A demo I've set up for sales seems to be spending much of its time in
> > disk wait states.
> >
> > The particular system I'm working with is:
> >   Ext3 on Debian inside Microsoft VirtualPC on NTFS
> >   on Windows XP on the laptops of our sales team.
>
> As this is only for demo purposes, you might consider turning fsync off,
> although I have no idea if it would have any effect on your setup.

Try removing VirtualPC from the equation. You can run the win32 native
port or dual-boot your laptop, for example.

Merlin