work_mem / maintenance_work_mem maximums - Mailing list pgsql-hackers

From Stephen Frost
Subject work_mem / maintenance_work_mem maximums
Date
Msg-id 20100920165111.GP26232@tamriel.snowman.net
Responses Re: work_mem / maintenance_work_mem maximums  (Bruce Momjian <bruce@momjian.us>)
List pgsql-hackers
Greetings,
After watching a database import go abysmally slow on a pretty beefy box with tons of RAM, I got annoyed and went to hunt down why in the world PG wasn't using but a bit of memory.  Turns out to be a well known and long-standing issue:
 
 http://www.mail-archive.com/pgsql-hackers@postgresql.org/msg101139.html
Now, we could start by fixing guc.c to correctly have the max value for these be MaxAllocSize/1024, for starters; then at least our users would know that when they set a higher value it's not going to be used. That, in my mind, is a pretty clear bug fix.  Of course, that doesn't help us poor data-warehousing bastards with 64G+ machines.
 
Sooo..  I don't know much about what the limit is or why it's there, but based on the comments, I'm wondering if we could just move the limit to a more 'sane' place than the-function-we-use-to-allocate.  If we need a hard limit due to TOAST, let's put it there, but I'm hopeful we could work out a way to get rid of this limit in repalloc and that we can let sorts and the like (uh, index creation) use what memory the user has decided it should be able to.
 
     Thanks,
    Stephen
