Re: Memory Leakage Problem - Mailing list pgsql-general
From: John Sidney-Woollett
Subject: Re: Memory Leakage Problem
Msg-id: 439E7B7A.9080205@wardbrook.com
In response to: Re: Memory Leakage Problem (Will Glynn <wglynn@freedomhealthcare.org>)
Responses: Re: Memory Leakage Problem
List: pgsql-general
We're seeing memory problems on one of our Postgres databases. We're using 7.4.6, and I suspect the kernel version is a key factor: one server runs under Red Hat Linux 2.4.18-14smp #1 SMP, and the other under Debian Linux 2.6.8.1-4-686-smp #1 SMP. The second (Debian) server is a replicated slave using Slony.

We NEVER see any problems on the "older" Red Hat (our master) DB, whereas the Debian slave requires Slony and Postgres to be stopped every 2-3 weeks. That server just consumes more and more memory until it goes swap-crazy and the load averages jump through the roof. Stopping the two services restores the server to some sort of normality - the load averages drop dramatically and remain low - but the memory is only fully recovered by a server reboot. Over time memory gets used up again, until those services need another stop and start.

Just my 2 cents...

John

Will Glynn wrote:
> Mike Rylander wrote:
>
>> Right, I can definitely see that happening. Some backends are upwards
>> of 200M, some are just a few since they haven't been touched yet.
>>
>>> Now, multiply that effect by N backends doing this at once, and you'll
>>> have a very skewed view of what's happening in your system.
>>
>> Absolutely ...
>>
>>> I'd trust the totals reported by free and dstat a lot more than summing
>>> per-process numbers from ps or top.
>>
>> And there's the part that's confusing me: the numbers for used memory
>> produced by free and dstat, after subtracting the buffers/cache
>> amounts, are /larger/ than those that ps and top report. (top says the
>> same thing as ps, on the whole.)
>
> I'm seeing the same thing on one of our 8.1 servers.
> Summing RSS from `ps` or RES from `top` accounts for about 1 GB, but
> `free` says:
>
>              total       used       free     shared    buffers     cached
> Mem:       4060968    3870328     190640          0      14788     432048
> -/+ buffers/cache:    3423492     637476
> Swap:      2097144     175680    1921464
>
> That's 3.4 GB/170 MB in RAM/swap, up from 2.7 GB/0 last Thursday, 2.2
> GB/0 last Monday, or 1.9 GB after a reboot ten days ago. Stopping
> Postgres brings down the number, but not all the way -- it drops to
> about 2.7 GB, even though the next most memory-intensive process is
> `ntpd` at 5 MB. (Before Postgres starts, there's less than 30 MB of
> stuff running.) The only way I've found to get this box back to normal
> is to reboot it.
>
>>>> Now, I'm not blaming Pg for the apparent discrepancy in calculated vs.
>>>> reported-by-free memory usage, but I only noticed this after upgrading
>>>> to 8.1.
>>>
>>> I don't know of any reason to think that 8.1 would act differently from
>>> older PG versions in this respect.
>>
>> Neither can I, which is why I don't blame it. ;) I'm just reporting
>> when/where I noticed the issue.
>
> I can't offer any explanation for why this server is starting to swap --
> where'd the memory go? -- but I know it started after upgrading to
> PostgreSQL 8.1. I'm not saying it's something in the PostgreSQL code,
> but this server definitely didn't do this in the months under 7.4.
>
> Mike: is your system AMD64, by any chance? The above system is, as is
> another similar story I heard.
>
> --Will Glynn
> Freedom Healthcare
>
> ---------------------------(end of broadcast)---------------------------
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>        choose an index scan if your joining column's datatypes do not
>        match
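[Editor's note: as a quick sanity check of the `free` output quoted above, the "-/+ buffers/cache" row is just the Mem row with buffers and page cache moved from "used" to "free". The figures below are taken directly from the quoted output:]

```python
# Numbers (in kB) copied from the `free` output quoted in the message above.
mem = {"total": 4060968, "used": 3870328, "free": 190640,
       "buffers": 14788, "cached": 432048}

# "-/+ buffers/cache" used: what applications actually hold, excluding
# memory the kernel is merely using for buffers and page cache.
adj_used = mem["used"] - mem["buffers"] - mem["cached"]

# "-/+ buffers/cache" free: memory reclaimable on demand.
adj_free = mem["free"] + mem["buffers"] + mem["cached"]

print(adj_used)  # 3423492 -- matches the quoted "-/+ buffers/cache" used column
print(adj_free)  # 637476  -- matches the quoted free column
```

So the 3.4 GB figure in the thread already excludes buffers and cache; it really does reflect memory held by processes (or leaked), not kernel caching.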
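[Editor's note: the "skewed view" Tom alludes to cuts the other way from Will's symptom, but it is worth making concrete: each backend's RSS includes the shared_buffers pages that backend has touched, so naively summing RSS counts the shared segment once per backend. A toy sketch, with entirely hypothetical numbers (not taken from this thread):]

```python
# Hypothetical illustration: N backends all map the same shared segment
# (e.g. PostgreSQL's shared_buffers), plus some private memory each.
SHARED_MB = 200                              # shared pages touched by every backend
backends_private_mb = [12, 8, 25, 5, 40]     # private heap/stack per backend

# Summing per-process RSS counts the shared segment once per backend...
naive_sum = sum(SHARED_MB + private for private in backends_private_mb)

# ...while the memory actually consumed counts it only once.
actual = SHARED_MB + sum(backends_private_mb)

print(naive_sum)  # 1090 -- the shared 200 MB is counted five times
print(actual)     # 290
```

This is why per-process sums can *over*state Postgres memory use; the puzzle in this thread is the opposite (system-wide totals exceeding the per-process sum, and not recovering when Postgres stops), which points away from the backends themselves.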