Thread: Memory exhausted in AllocSetReAlloc
Using CREATE INDEX on a large table (>70 million rows) results in the
following error:

    ERROR:  Memory exhausted in AllocSetReAlloc(134217728)

(134217728 is exactly 128 megabytes.) The box has 2 gigs of RAM.

    => select version();
                                   version
    ---------------------------------------------------------------------
     PostgreSQL 7.1.3 on i386-unknown-freebsd4.4, compiled by GCC 2.95.3

Is there any way around this?

--
Eric Cholet
> Using CREATE INDEX on a large table (>70 million rows)
> results in the following error:
> ERROR: Memory exhausted in AllocSetReAlloc(134217728)
> (134217728 is exactly 128 megabytes).

You may be running into a per-process limit, which can be adjusted in
the kernel (probably after boot time, but I'm not familiar with the
details on FreeBSD).

- Thomas
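For illustration, on FreeBSD 4.x the per-process data segment limit can
usually be inspected from the shell and raised at boot time; the tunable
and option names below are assumptions and may differ between releases:

    # show the current data segment limit, in kilobytes (sh syntax)
    ulimit -d

    # raise it system-wide at the next boot via /boot/loader.conf,
    # e.g. to allow 512MB data segments:
    #   kern.maxdsiz="536870912"
    # (older setups compile it in instead, with a kernel config line
    #  such as:  options MAXDSIZ="(512*1024*1024)" )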
Eric Cholet <cholet@logilune.com> writes:
> Using CREATE INDEX on a large table (>70 million rows)
> results in the following error:
>
> ERROR: Memory exhausted in AllocSetReAlloc(134217728)
>
> (134217728 is exactly 128 megabytes).
>
> The box has 2Gigs of RAM.

Have you double checked to make sure the ulimits for postmaster are
large enough?

-Doug
--
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863
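A quick sanity check, as a sketch (the data directory path is an
assumption): inspect the limits in the shell that launches the
postmaster before starting it.

    # in the shell used to start the postmaster (sh/bash syntax)
    ulimit -a               # show all per-process limits
    ulimit -d               # data segment size, in kilobytes; it must
                            # comfortably exceed a 128MB allocation
    ulimit -d unlimited     # raise it, if the hard limit allows
    pg_ctl -D /usr/local/pgsql/data start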
Doug McNaught <doug@wireboard.com> writes:
> Have you double checked to make sure the ulimits for postmaster are
> large enough?

More to the point, what have you got SORT_MEM set to?  We're assuming
that this allocation request is correct, and indeed it probably is if
you've got SORT_MEM cranked up to the moon.  But if you don't, then
there may be a different issue (like corrupted data).

FWIW, running with SORT_MEM set in the hundreds of megs on a 2-gig box
does not strike me as a good idea.  Keep in mind that that number is
*per sorting operation*.  Each backend might be trying to use that much
simultaneously --- in fact, in complex queries it's not hard to cause a
single backend to be executing multiple sort steps.

			regards, tom lane
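To make the arithmetic concrete (the numbers here are purely
illustrative): with SORT_MEM = 131072 (128MB) and, say, 20 backends
each executing a plan with two sort steps, the worst case is
20 x 2 x 128MB = 5GB of sort memory alone --- far beyond a 2GB box.
A safer pattern is to keep the global setting modest and raise it only
in the session doing the big index build; a minimal sketch, assuming
7.1-era syntax where the value is in kilobytes and the table and index
names are hypothetical:

    => SET sort_mem = 65536;     -- 64MB, for this session only
    => CREATE INDEX big_tbl_col_idx ON big_tbl (col);
    => RESET sort_mem;           -- back to the postmaster default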
--On Wednesday, 14 November 2001, 10:37 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Doug McNaught <doug@wireboard.com> writes:
>> Have you double checked to make sure the ulimits for postmaster are
>> large enough?
>
> More to the point, what have you got SORT_MEM set to?  We're assuming
> that this allocation request is correct, and indeed it probably is if
> you've got SORT_MEM cranked up to the moon.  But if you don't then
> there may be a different issue (like corrupted data).
>
> FWIW, running with SORT_MEM set in the hundreds of megs on a 2gig
> box does not strike me as a good idea.  Keep in mind that that number
> is *per sorting operation*.  Each backend might be trying to use that
> much simultaneously --- in fact, in complex queries it's not hard to
> cause a single backend to be executing multiple sort steps.

That's it: trying to make postgres use more memory, I cranked SORT_MEM
up to a much too high value. I just reduced it back to 1024 and it's
working fine now.

Thanks to the other posters who replied; I found out FreeBSD's default
kernel limits the data segment size to 128M, so I raised that as well.

I certainly didn't understand SORT_MEM's semantics. Any pointers or
hints on how to tune postgres to make the best use of all the memory
I've got would be appreciated. My largest table is going to have ~200
million rows, and the docs are a little terse on how to tune for that
kind of volume.

Thanks,

--
Eric Cholet
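For rough scale on a dedicated 2GB machine of that era, a starting
point might look like the sketch below in postgresql.conf (the values
are assumptions to be adjusted against measurement, not recommendations
from this thread; shared_buffers counts 8KB pages, sort_mem is in
kilobytes, and the kernel's shared-memory limit must be raised to
match):

    shared_buffers = 16384      # 16384 x 8KB = 128MB of buffer cache
    sort_mem = 8192             # 8MB per sort step, per backend
    # peak sort memory ~= backends x sort steps x sort_mem,
    # so keep this modest and raise it per-session for big builds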
I said:
> More to the point, what have you got SORT_MEM set to?

Or maybe that's not the point.  A quick look through the sources didn't
turn up any repalloc calls that look like they could make requests that
large (at least not before you'd run out of memory from other causes).
Could you attach to the backend process with gdb, set a breakpoint at
elog, and get a backtrace, so we can see where the oversize request is
coming from?

			regards, tom lane
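Concretely, that debugging session would look roughly like this (the
binary path and PID below are placeholders for your installation):

    # find the backend serving your session, then attach to it:
    ps ax | grep postgres
    gdb /usr/local/pgsql/bin/postgres 12345    # 12345 = backend PID

    (gdb) break elog       # stop whenever an error is reported
    (gdb) continue
    # ... re-run the failing CREATE INDEX in that session ...
    (gdb) bt               # print the backtrace at the breakpoint
    (gdb) detach
    (gdb) quit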
> That's it, trying to make postgres use more memory I cranked up SORT_MEM
> to a much too high value. I just reduced it back to 1024 and it's working
> fine now.
>
> Thanks to the other posters who replied, I found out FreeBSD's default
> kernel limits the data segment size to 128M, so I raised that as well.
>
> I certainly didn't understand SORT_MEM's semantics. Any pointers, hints
> on how to tune postgres to make best use of all the memory I've got
> would be appreciated. My largest table is going to have ~ 200 million
> rows. The docs are a little terse on how to tune for that kind of volume.

There is a performance paper I wrote on the techdocs site:

    http://techdocs.postgresql.org

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026