Thread: Re: 7.4 beta 1 getting out of swap
Bertrand Petit <elrond@phoe.frmug.org> writes:
> And I just got another one, much simpler, that failed the same
> way with the same data set:
> UPDATE rimdb_atitles SET aka_title=convert(byte_title,charset,'UTF8');

[ where rimdb_atitles has an index on column "attribs varchar[]" ]

Uh-huh.  Actually, any large insert or update on that table will run out
of memory, I bet.  The problem appears to be due to the newly-added
support for indexing array columns --- array_cmp() leaks memory, which
is verboten for index support operators.

At first I thought this would be an easy fix --- just rewrite array_cmp
to not depend on deconstruct_array, as array_eq already does not.  I soon
found that that only reduced the speed of leakage, however.  The real
problem comes from the fact that array_eq and array_cmp expect to be able
to save information across calls using flinfo->fn_extra.  While this
works to some extent, the btree routines generate a new scankey --- with
a nulled fn_extra --- for every index AM call.  btree knows to delete the
scankey when it's done, but it doesn't know anything about deleting what
fn_extra points to.  (Even if it did, there is additional leakage inside
equality_oper(), which would be very difficult to clean up directly.)

Quite aside from the memory leak problem, it's annoying to think that the
array element information will be looked up again on every btree
operation.  That seems expensive.

I can think of a number of ways we might attack this, but none seem
especially attractive ---

1. Have the index AMs create and switch into a special memory context
for each call, rather than running in the main execution context.
I am not sure this is workable at all, since the AMs tend to think they
can create data structures that will live across calls (for example a
btree lookup stack).  It'd be the most general solution, if we could
make it work.

2. Modify the index AMs so that the comparison function FmgrInfo is
preserved across a whole query.  I think this requires changes to the
index AM API (index_insert for instance has no provision for sharing
data across multiple calls).  Messy, and would likely mean an initdb.
It would probably be the fastest answer though, since lookups wouldn't
need to be done more than once per query.

3. Set up a long-lived cache internal to the array functions that can
translate element type OID to the needed lookup data, and won't leak
memory across repeated calls.  This is not the fastest or most general
solution, but it seems the most localized and safest fix.

Has anyone got some other ideas?

			regards, tom lane
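For concreteness, here is a minimal standalone C sketch of the pattern option 3
describes: a process-lifetime cache keyed by element type OID, consulted by the
array comparison code instead of redoing (and leaking) the lookups on every
call.  All names here (ArrayCmpCacheEntry, get_element_type_info) and the
lookup contents are invented for illustration; this is not the backend
implementation.

    /*
     * Illustrative sketch only (not PostgreSQL source): a process-lifetime
     * cache keyed by element type OID.  The first call for a given type does
     * the expensive lookup; later calls just return the cached entry, so
     * per-call code neither repeats the work nor leaks it.
     */
    #include <stdio.h>
    #include <stdlib.h>

    typedef unsigned int Oid;          /* stand-in for the backend's Oid type */

    typedef struct ArrayCmpCacheEntry
    {
        Oid     element_type;          /* cache key */
        int     typlen;                /* hypothetical lookup results; a real */
        char    typalign;              /* entry would also carry the element  */
                                       /* type's comparison FmgrInfo, etc.    */
        struct ArrayCmpCacheEntry *next;
    } ArrayCmpCacheEntry;

    /* Lives for the whole process, i.e. never freed per call or per query. */
    static ArrayCmpCacheEntry *cache_head = NULL;

    /* Stand-in for the expensive per-type lookups (syscache, equality_oper, ...). */
    static void
    expensive_type_lookup(Oid element_type, ArrayCmpCacheEntry *entry)
    {
        (void) element_type;
        entry->typlen = 4;             /* fake data, for illustration only */
        entry->typalign = 'i';
    }

    ArrayCmpCacheEntry *
    get_element_type_info(Oid element_type)
    {
        ArrayCmpCacheEntry *e;

        for (e = cache_head; e != NULL; e = e->next)
            if (e->element_type == element_type)
                return e;              /* hit: no lookup, no new allocation */

        /* miss: allocate once, in storage that outlives any single call */
        e = malloc(sizeof(ArrayCmpCacheEntry));
        if (e == NULL)
            abort();
        e->element_type = element_type;
        expensive_type_lookup(element_type, e);
        e->next = cache_head;
        cache_head = e;
        return e;
    }

    int
    main(void)
    {
        /* A second call with the same element type OID reuses the first entry. */
        ArrayCmpCacheEntry *a = get_element_type_info(1043);   /* say, varchar */
        ArrayCmpCacheEntry *b = get_element_type_info(1043);

        printf("entry reused: %s\n", (a == b) ? "yes" : "no");
        return 0;
    }

In a real implementation the entries would presumably live in a long-lived
memory context and a proper hash table rather than a linked list, but the
essential property is the same: nothing is allocated per comparison, and
nothing is looked up twice for the same element type.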
> I can think of a number of ways we might attack this, but none seem
> especially attractive ---
>
> 1. Have the index AMs create and switch into a special memory context
> for each call, rather than running in the main execution context.
> I am not sure this is workable at all, since the AMs tend to think they
> can create data structures that will live across calls (for example a
> btree lookup stack).  It'd be the most general solution, if we could
> make it work.
>
> 2. Modify the index AMs so that the comparison function FmgrInfo is
> preserved across a whole query.  I think this requires changes to the
> index AM API (index_insert for instance has no provision for sharing
> data across multiple calls).  Messy, and would likely mean an initdb.
> It would probably be the fastest answer though, since lookups wouldn't
> need to be done more than once per query.

#2 seems most natural in that it formalizes something that is common for
lots of index methods.  We are only in beta1, so I think we can initdb.

> 3. Set up a long-lived cache internal to the array functions that can
> translate element type OID to the needed lookup data, and won't leak
> memory across repeated calls.  This is not the fastest or most general
> solution, but it seems the most localized and safest fix.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us            |  (610) 359-1001
  +  If your life is a hard drive,  |  13 Roberts Road
  +  Christ can be your backup.     |  Newtown Square, Pennsylvania 19073
Tom Lane wrote:
> Bertrand Petit <elrond@phoe.frmug.org> writes:
>> And I just got another one, much simpler, that failed the same
>> way with the same data set:
>> UPDATE rimdb_atitles SET aka_title=convert(byte_title,charset,'UTF8');
>
> [ where rimdb_atitles has an index on column "attribs varchar[]" ]
>
> Uh-huh.  Actually, any large insert or update on that table will run out
> of memory, I bet.  The problem appears to be due to the newly-added
> support for indexing array columns --- array_cmp() leaks memory, which
> is verboten for index support operators.

Ugh.

> I can think of a number of ways we might attack this, but none seem
> especially attractive ---
>
> 1. Have the index AMs create and switch into a special memory context
> for each call, rather than running in the main execution context.
> I am not sure this is workable at all, since the AMs tend to think they
> can create data structures that will live across calls (for example a
> btree lookup stack).  It'd be the most general solution, if we could
> make it work.

This seems like a risky change at this point.

> 2. Modify the index AMs so that the comparison function FmgrInfo is
> preserved across a whole query.  I think this requires changes to the
> index AM API (index_insert for instance has no provision for sharing
> data across multiple calls).  Messy, and would likely mean an initdb.
> It would probably be the fastest answer though, since lookups wouldn't
> need to be done more than once per query.

This seems like a fairly big change this late in the game too.

> 3. Set up a long-lived cache internal to the array functions that can
> translate element type OID to the needed lookup data, and won't leak
> memory across repeated calls.  This is not the fastest or most general
> solution, but it seems the most localized and safest fix.

I think I like #3 the best, but maybe that's because it's the one I
think I understand the best ;-)

It seems to me that #3 is the least risky, and even if it isn't the best
possible performance, this is the initial implementation of indexes on
arrays, so it isn't like we're taking away something.  Maybe solution #2
is better held as a performance enhancement for 7.5.

Do you want me to take a shot at this since I created the mess?

Joe
Joe Conway <mail@joeconway.com> writes:
> Tom Lane wrote:
>> 3. Set up a long-lived cache internal to the array functions that can
>> translate element type OID to the needed lookup data, and won't leak
>> memory across repeated calls.  This is not the fastest or most general
>> solution, but it seems the most localized and safest fix.

> It seems to me that #3 is the least risky, and even if it isn't the best
> possible performance, this is the initial implementation of indexes on
> arrays, so it isn't like we're taking away something.  Maybe solution #2
> is better held as a performance enhancement for 7.5.

I'm leaning that way too.  It occurs to me also that the same cache
could be used to eliminate repeated lookups in sorting setup --- which
would not be much of a win percentagewise, compared to the sort itself,
but still it seems worth doing.

> Do you want me to take a shot at this since I created the mess?

Actually I led you down the garden path on that, IIRC --- I was the one
who insisted these lookups needed to be cached.  I'll work on fixing it.

			regards, tom lane
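Continuing the illustration from earlier in the thread, here is a rough sketch
of how the two call sites just mentioned could share that one cache: the
comparison function keeps using flinfo->fn_extra as a per-scankey fast path but
refills it from the long-lived cache whenever btree supplies a fresh scankey,
and sort setup consults the same cache once up front.  Every name except
fn_extra is invented (carried over from the earlier sketch); this is only a
sketch of the idea, not the committed fix.

    /*
     * Sketch only, building on the cache example shown earlier in the thread.
     * ArrayCmpCacheEntry and get_element_type_info() are the invented names
     * from that sketch, declared here so this fragment stands alone.
     */
    typedef unsigned int Oid;                  /* stand-in for the backend's Oid */
    typedef struct ArrayCmpCacheEntry ArrayCmpCacheEntry;

    /* Invented name from the earlier sketch: one lookup per element type, ever. */
    extern ArrayCmpCacheEntry *get_element_type_info(Oid element_type);

    /*
     * Pattern for a comparison function such as array_cmp: fn_extra remains a
     * handy per-scankey fast path, but when btree hands over a fresh scankey
     * with a nulled fn_extra, refill it from the long-lived cache instead of
     * redoing (and leaking) the per-type lookups.
     */
    static ArrayCmpCacheEntry *
    array_element_info(void **fn_extra_slot, Oid element_type)
    {
        ArrayCmpCacheEntry *info = (ArrayCmpCacheEntry *) *fn_extra_slot;

        if (info == NULL)
        {
            info = get_element_type_info(element_type);  /* cheap after 1st call */
            *fn_extra_slot = info;
        }
        return info;
    }

    /*
     * Sort setup can consult the very same cache once before the sort begins,
     * so those lookups are not repeated for every sort either.
     */
    static ArrayCmpCacheEntry *
    prepare_array_sort(Oid element_type)
    {
        return get_element_type_info(element_type);
    }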
> Joe Conway <mail@joeconway.com> writes:
>> Tom Lane wrote:
>>> 3. Set up a long-lived cache internal to the array functions that can
>>> translate element type OID to the needed lookup data, and won't leak
>>> memory across repeated calls.  This is not the fastest or most general
>>> solution, but it seems the most localized and safest fix.

>> It seems to me that #3 is the least risky, and even if it isn't the best
>> possible performance, this is the initial implementation of indexes on
>> arrays, so it isn't like we're taking away something.  Maybe solution #2
>> is better held as a performance enhancement for 7.5.

> I'm leaning that way too.  It occurs to me also that the same cache
> could be used to eliminate repeated lookups in sorting setup --- which
> would not be much of a win percentagewise, compared to the sort itself,
> but still it seems worth doing.

I've committed fixes for this, and verified that inserts/updates on an
indexed array column don't leak memory anymore.

			regards, tom lane