Re: SET work_mem = '1TB'; - Mailing list pgsql-hackers

From Gavin Flower
Subject Re: SET work_mem = '1TB';
Date
Msg-id 519BE9FD.4040502@archidevsys.co.nz
In response to SET work_mem = '1TB';  (Simon Riggs <simon@2ndQuadrant.com>)
List pgsql-hackers
On 22/05/13 09:13, Simon Riggs wrote:
> I worked up a small patch to support a terabyte setting for memory.
>
> Which is OK, but it only works for 1TB, not for 2TB or above.
>
> Which highlights that since we measure things in kB, we have an
> inherent limit of 2047GB for our memory settings. It isn't beyond
> belief we'll want to go that high, or at least won't be by end 2014,
> and it will be annoying sometime before 2020.
>
> Solution seems to be to support something potentially bigger than INT
> for GUCs. So we can reclassify GUC_UNIT_MEMORY according to the
> platform we're on.
>
> Opinions?
>
> --
> Simon Riggs                   http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services

I suspect it should be fixed before it starts being a problem, for two reasons:

1. Best to panic early while we have time
   (or, more prosaically, doing it soon gives us more time to get it
   right without undue pressure).

2. Not being able to cope with 2TB and above might put off companies with
   seriously massive databases from moving to Postgres.

Probably an idea to check what other values should be increased as well.

Cheers,
Gavin
