Thread: tuning for AIX 5L with large memory
I will soon have at my disposal a new IBM pSeries server. The main
mission for this box will be to serve several pg databases. I have
ordered 8GB of RAM and want to learn the best way to tune pg and AIX
for this configuration. Specifically, I am curious about shared memory
limitations. I've had to tune shmmax on Linux machines before, but I'm
new to AIX and not sure whether this is even required on that platform.
Google has not been much help for specifics here.

Hoping someone else here has a similar platform and can offer some
advice.

Thanks!

-Dan Harris
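For context on the Linux knob Dan mentions: shmmax can be read back at
runtime from /proc/sys/kernel/shmmax. Below is a minimal C sketch, not
from the thread, that prints the current ceiling; it assumes a Linux
/proc filesystem, which is exactly what AIX lacks and what prompts the
question.

/* Minimal sketch (assumes Linux): print the SysV shared memory
 * ceiling imposed by shmmax.  AIX has no such /proc file, which
 * is the point of the question above. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/kernel/shmmax", "r");
    unsigned long long shmmax;

    if (f == NULL) {
        perror("fopen /proc/sys/kernel/shmmax");
        return 1;
    }
    if (fscanf(f, "%llu", &shmmax) != 1) {
        fprintf(stderr, "could not parse shmmax\n");
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("SHMMAX = %llu bytes (%.1f MB)\n", shmmax, shmmax / 1048576.0);
    return 0;
}

Compare the printed value against the shared memory PostgreSQL will
request (roughly shared_buffers plus a little overhead).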
Clinging to sanity, fbsd@drivefaster.net (Dan Harris) mumbled into her beard:

> I will soon have at my disposal a new IBM pSeries server. The main
> mission for this box will be to serve several pg databases. I have
> ordered 8GB of RAM and want to learn the best way to tune pg and AIX
> for this configuration. Specifically, I am curious about shared memory
> limitations. I've had to tune shmmax on Linux machines before, but I'm
> new to AIX and not sure whether this is even required on that platform.
> Google has not been much help for specifics here.
>
> Hoping someone else here has a similar platform and can offer some
> advice.

We have a couple of these at work; they're nice and fast, although the
process of compiling things, well, "makes me feel a little unclean."

One of our sysadmins did all the "configuring OS stuff" part; I don't
recall offhand if there was a need to twiddle something in order to get
it to have great gobs of shared memory.

A quick Google on this gives me the impression that AIX supports, out
of the box, multiple GB of shared memory without special kernel
configuration. A DB/2 configuration guide tells users of Solaris and
HP/UX that they need to set shmmax in sundry config files and reboot.
No such instruction for AIX. If it needs configuring, it's probably
somewhere in SMIT. And you can always try starting up an instance to
see how big it'll let you make shared memory.

The usual rule of thumb has been that having substantially more than
10000 blocks' worth of shared memory is not worthwhile. I don't think
anyone has done a detailed study on AIX to see whether bigger numbers
play well or not. I would think that having more than about 1 to 1.5GB
of shared memory in use for buffer cache would start playing badly, but
I have no numbers.
--
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://www3.sympatico.ca/cbbrowne/sap.html
Would-be National Mottos:
USA: "We don't care where you come from. We can't find our *own*
country on a map..."
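Christopher's "just try starting up an instance" advice can be
automated with a few lines of C. The sketch below, not from the thread,
binary-searches the largest SysV segment shmget() will grant and
releases each trial segment immediately; the 16GB upper bound is an
arbitrary assumption for an 8GB box, and a 64-bit compile is assumed so
the trial sizes fit in size_t.

/* Sketch: probe the largest SysV shared memory segment the kernel
 * will grant, by binary search over shmget() sizes.  Assumes a
 * 64-bit build; the 16GB cap is an arbitrary choice for an 8GB box. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

static int try_segment(size_t size)
{
    int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
    if (id == -1)
        return 0;                     /* kernel refused this size */
    shmctl(id, IPC_RMID, NULL);       /* release the trial segment */
    return 1;
}

int main(void)
{
    size_t lo = 0, hi = (size_t) 16 * 1024 * 1024 * 1024;

    while (hi - lo > 1024 * 1024) {   /* stop at 1 MB resolution */
        size_t mid = lo + (hi - lo) / 2;
        if (try_segment(mid))
            lo = mid;
        else
            hi = mid;
    }
    printf("largest segment granted: ~%lu MB\n",
           (unsigned long) (lo / (1024 * 1024)));
    return 0;
}

Note that shmget() only checks the configured limits and does not touch
the memory, so the probe is cheap to run.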
Christopher Browne wrote:
> One of our sysadmins did all the "configuring OS stuff" part; I don't
> recall offhand if there was a need to twiddle something in order to
> get it to have great gobs of shared memory.

FWIW, the section on configuring kernel resources under various
Unixen[1] doesn't have any documentation for AIX. If someone out there
knows which knobs need to be tweaked, would they mind sending in a doc
patch? (Or just specifying what needs to be done, and I'll add the
SGML.)

-Neil

[1] http://developer.postgresql.org/docs/postgres/kernel-resources.html#SYSVIPC
Christopher Browne wrote:
> We have a couple of these at work; they're nice and fast, although the
> process of compiling things, well, "makes me feel a little unclean."

Thanks very much for your detailed reply, Christopher. Would you mind
elaborating on the "makes me feel a little unclean" statement? Also,
I'm curious which models you are running and whether you have any
anecdotal comparisons for performance. I'm completely unfamiliar with
AIX, so if there are dark corners that await me, I'd love to hear a
little more so I can be prepared. I'm going out on a limb here, jumping
to an unfamiliar architecture as well as OS, but the I/O performance of
these systems has convinced me that it's what I need to break out of my
I/O-limited x86 systems.

I suppose when I do get it, I'll just experiment with different sizes
of shared memory and run some benchmarks. For the price of these
things, they had better be some good marks!

Thanks again

-Dan Harris
fbsd@drivefaster.net (Dan Harris) writes:
> Christopher Browne wrote:
>> We have a couple of these at work; they're nice and fast, although the
>> process of compiling things, well, "makes me feel a little unclean."
>
> Thanks very much for your detailed reply, Christopher. Would you mind
> elaborating on the "makes me feel a little unclean" statement?

The way AIX manages symbol tables for shared libraries is fairly
astounding in its verbosity. Go and try to compile, by hand, a shared
library, and you'll see :-).

> Also, I'm curious which models you are running and whether you have
> any anecdotal comparisons for performance. I'm completely unfamiliar
> with AIX, so if there are dark corners that await me, I'd love to
> hear a little more so I can be prepared. I'm going out on a limb
> here, jumping to an unfamiliar architecture as well as OS, but the
> I/O performance of these systems has convinced me that it's what I
> need to break out of my I/O-limited x86 systems.

It would probably be better for Andrew Sullivan to speak to the details
on that. The main focus of comparison has been between AIX and Solaris,
and the AIX systems have looked generally pretty good.

We haven't yet had AIX under what could be truly assessed as "heavy
load." That comes, in part, from the fact that brand-new
latest-generation pSeries hardware is _way_ faster than three-year-old
Solaris hardware. Today's top-of-the-line is faster than what was
high-end three years ago, so the load that the Sun boxes can cope with
"underwhelms" the newer IBM hardware :-).

> I suppose when I do get it, I'll just experiment with different sizes
> of shared memory and run some benchmarks. For the price of these
> things, they had better be some good marks!

Well, there's more than one way of looking at these things. One of the
important perspectives, to me, is reliability. A system that is Way
Fast, but which crashes once in a while with some hardware fault, is no
good. I have been getting accustomed to Sun and Dell systems crashing
way too often :-(.

One of the merits of the pSeries hardware is that it has the maturity
of IBM's long-term experience at building reliable servers. If the IBM
hardware were a bit slower (unlikely, given that it is way newer than
the older Suns) but had suitable reliability, that would seem a
reasonable tradeoff to me.

I take the very same perspective on the discussions of "which
filesystem is best?" Raw speed is NOT the only issue; it is secondary,
as far as I am concerned, to "Is It Reliable?"
--
(format nil "~S@~S" "cbbrowne" "ntlug.org")
http://cbbrowne.com/info/lsf.html
Appendium to the Rules of the Evil Overlord #1: "I will not build
excessively integrated security-and-HVAC systems. They may be Really
Cool, but are far too vulnerable to breakdowns."
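To make the "unclean" remark concrete: the classic AIX link model
exports no symbols from a shared object by default, so even a trivial
library needs a hand-written export file and several linker flags. A
hypothetical example follows; the names hello.c, hello.exp, and
libhello.so are illustrative, not from the thread, and the build
commands are shown as comments.

/* hello.c -- a one-function shared library, AIX style.
 *
 * Hypothetical build steps (classic, pre-"runtime linking" model):
 *
 *   1. Compile the object:
 *          xlc -c hello.c
 *
 *   2. Write an export file naming every public symbol by hand --
 *      this is the verbose part, since nothing is exported by
 *      default.  Here hello.exp contains the single line:
 *          hello
 *
 *   3. Link with the export list, the shared/reusable module flag,
 *      and no entry point:
 *          ld -o libhello.so hello.o -bE:hello.exp -bM:SRE \
 *             -bnoentry -lc
 *
 * Compare gcc on Linux, where "gcc -shared -o libhello.so hello.c"
 * does all of the above in one step.
 */
#include <stdio.h>

void hello(void)
{
    printf("hello from an AIX shared library\n");
}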
neilc@samurai.com (Neil Conway) writes:
> Christopher Browne wrote:
>> One of our sysadmins did all the "configuring OS stuff" part; I don't
>> recall offhand if there was a need to twiddle something in order to
>> get it to have great gobs of shared memory.
>
> FWIW, the section on configuring kernel resources under various
> Unixen[1] doesn't have any documentation for AIX. If someone out there
> knows which knobs need to be tweaked, would they mind sending in a doc
> patch? (Or just specifying what needs to be done, and I'll add the
> SGML.)

After verifying that nobody wound up messing with the kernel
parameters, here's a docs patch...

Index: runtime.sgml
===================================================================
RCS file: /projects/cvsroot/pgsql-server/doc/src/sgml/runtime.sgml,v
retrieving revision 1.263
diff -c -u -r1.263 runtime.sgml
--- runtime.sgml	29 Apr 2004 04:37:09 -0000	1.263
+++ runtime.sgml	26 May 2004 16:35:43 -0000
@@ -3557,6 +3557,26 @@
      </listitem>
     </varlistentry>
 
+    <varlistentry>
+     <term><systemitem class="osname">AIX</></term>
+     <indexterm><primary>AIX</><secondary>IPC configuration</></>
+     <listitem>
+      <para>
+       At least as of version 5.1, it should not be necessary to do
+       any special configuration for such parameters as
+       <varname>SHMMAX</varname>, as it appears this is configured to
+       allow all memory to be used as shared memory.  That is the
+       sort of configuration commonly used for other databases such
+       as <application>DB/2</application>.</para>
+
+      <para> It may, however, be necessary to modify the global
+       <command>ulimit</command> information in
+       <filename>/etc/security/limits</filename>, as the default hard
+       limits for filesizes (<varname>fsize</varname>) and numbers of
+       files (<varname>nofiles</varname>) may be too low.
+      </para>
+     </listitem>
+    </varlistentry>
 
     <varlistentry>
      <term><systemitem class="osname">Solaris</></term>
--
select 'cbbrowne' || '@' || 'acm.org';
http://www.ntlug.org/~cbbrowne/linuxxian.html
Hail to the sun god, he sure is a fun god, Ra, Ra, Ra!!
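A quick way to see whether the fsize and nofiles hard limits mentioned
in the patch will bite, without parsing /etc/security/limits by hand,
is to query them from the account PostgreSQL runs under via the
portable getrlimit() interface. A minimal sketch, assuming only POSIX:

/* Sketch: report the hard limits the docs patch warns about
 * (fsize and nofiles on AIX) through getrlimit(), so they can be
 * checked from the environment the postgres user actually gets. */
#include <stdio.h>
#include <sys/resource.h>

static void report(const char *name, int resource)
{
    struct rlimit rl;

    if (getrlimit(resource, &rl) != 0) {
        perror(name);
        return;
    }
    if (rl.rlim_max == RLIM_INFINITY)
        printf("%s: hard limit = unlimited\n", name);
    else
        printf("%s: hard limit = %llu\n", name,
               (unsigned long long) rl.rlim_max);
}

int main(void)
{
    report("RLIMIT_FSIZE (max file size, bytes)", RLIMIT_FSIZE);
    report("RLIMIT_NOFILE (max open files)", RLIMIT_NOFILE);
    return 0;
}

Run it as the postgres user; if the file-size limit comes back smaller
than PostgreSQL's 1GB default table segment size, raise fsize before
going into production.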