Thread: SET work_mem = '1TB';

SET work_mem = '1TB';

From
Simon Riggs
Date:
I worked up a small patch to support a terabyte setting for memory.
Which is OK, but it only works for 1TB, not for 2TB or above.

Which highlights that, since we measure things in kB, we have an
inherent limit of 2047GB for our memory settings. It isn't beyond
belief that we'll want to go that high; perhaps not by end 2014, but
it will be annoying sometime before 2020.
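
For concreteness, a quick standalone check of that arithmetic, assuming
(per the above) that values are counted in kB and held in a signed
32-bit int:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Memory settings counted in kB in a signed 32-bit int top out
     * at INT_MAX kB. */
    long long max_kb = INT_MAX;                  /* 2147483647 kB */

    printf("%lld GB\n", max_kb / (1024 * 1024)); /* 2047 GB */

    /* 1TB (1024^3 kB) still fits; 2TB does not. */
    printf("1TB = %lld kB, 2TB = %lld kB\n",
           1024LL * 1024 * 1024,                 /* 1073741824, < INT_MAX */
           2 * 1024LL * 1024 * 1024);            /* 2147483648, > INT_MAX */
    return 0;
}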

Solution seems to be to support something potentially bigger than INT
for GUCs. So we can reclassify GUC_UNIT_MEMORY according to the
platform we're on.
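
As a minimal standalone sketch of that direction (hypothetical code, not
taken from guc.c; parse_mem_kb is an invented name), the parsed value
could be held in 64 bits so that multi-TB settings survive:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Parse a memory setting such as "4TB" into kB, returning 0 on success.
 * With a 64-bit result there is no cliff just above 2TB; each GUC could
 * then be range-checked against platform limits instead of a global
 * INT_MAX kB cap. */
static int
parse_mem_kb(const char *s, int64_t *result_kb)
{
    char       *end;
    long long   val = strtoll(s, &end, 10);

    while (*end == ' ')
        end++;

    if (strcmp(end, "kB") == 0)
        *result_kb = val;
    else if (strcmp(end, "MB") == 0)
        *result_kb = val * 1024;
    else if (strcmp(end, "GB") == 0)
        *result_kb = val * 1024 * 1024;
    else if (strcmp(end, "TB") == 0)
        *result_kb = val * 1024LL * 1024 * 1024;
    else
        return -1;              /* unrecognised unit */
    return 0;
}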

Opinions?

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachment

Re: SET work_mem = '1TB';

From
Gavin Flower
Date:
<div class="moz-cite-prefix">On 22/05/13 09:13, Simon Riggs wrote:<br /></div><blockquote
cite="mid:CA+U5nMJpR1HsAUQR2MLLmp14mYsGCHNBf1G1Kp3hUfL_uwWAhw@mail.gmail.com"type="cite"><pre wrap="">I worked up a
smallpatch to support Terabyte setting for memory.
 
Which is OK, but it only works for 1TB, not for 2TB or above.

Which highlights that since we measure things in kB, we have an
inherent limit of 2047GB for our memory settings. It isn't beyond
belief we'll want to go that high, or at least won't be by end 2014
and will be annoying sometime before 2020.

Solution seems to be to support something potentially bigger than INT
for GUCs. So we can reclassify GUC_UNIT_MEMORY according to the
platform we're on.

Opinions?

--Simon Riggs                   <a class="moz-txt-link-freetext"
href="http://www.2ndQuadrant.com/">http://www.2ndQuadrant.com/</a>PostgreSQLDevelopment, 24x7 Support, Training &
Services
</pre><br /></blockquote> I suspect it should be fixed before it starts being a problem, for 2 reasons:<br
/><ol><li>bestto panic early while we have time<br /> (or more prosaically: doing it soon gives us more time to get it
rightwithout undue pressure)<br /><br /><li>not able to cope with 2TB and above might put off companies with seriously
massivedatabases from moving to Postgres<br /></ol> Probably an idea to check what other values should be increased as
well.<br/><br /><br /> Cheers,<br /> Gavin<br /> 

Re: SET work_mem = '1TB';

From
Jeff Janes
Date:
On Tuesday, May 21, 2013, Simon Riggs wrote:
I worked up a small patch to support a terabyte setting for memory.
Which is OK, but it only works for 1TB, not for 2TB or above.

I've incorporated my review into a new version, attached.

Added "TB" to the docs, added the macro KB_PER_TB, and made "show" to print "1TB" rather than "1024GB".

I tested several of the memory settings to see that they can be set and retrieved.  I haven't tested actual execution, as I don't have that kind of RAM.

I don't see how it could have a performance impact; it passes make check etc., and I don't think it warrants a new regression test.

I'll set it to ready for committer.

Cheers,

Jeff
Attachment

Re: SET work_mem = '1TB';

From
Fujii Masao
Date:
On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Tuesday, May 21, 2013, Simon Riggs wrote:
>>
>> I worked up a small patch to support a terabyte setting for memory.
>> Which is OK, but it only works for 1TB, not for 2TB or above.
>
>
> I've incorporated my review into a new version, attached.
>
> Added "TB" to the docs, added the macro KB_PER_TB, and made "show" to print
> "1TB" rather than "1024GB".

Looks good to me. But I found you forgot to change postgresql.conf.sample,
so I changed it and attached the updated version of the patch.

Barring any objection to this patch, and if no one else picks it up,
I will commit it.

Regards,

--
Fujii Masao

Attachment

Re: SET work_mem = '1TB';

From
Simon Riggs
Date:
On 18 June 2013 17:10, Fujii Masao <masao.fujii@gmail.com> wrote:
> On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
>> On Tuesday, May 21, 2013, Simon Riggs wrote:
>>>
>>> I worked up a small patch to support a terabyte setting for memory.
>>> Which is OK, but it only works for 1TB, not for 2TB or above.
>>
>>
>> I've incorporated my review into a new version, attached.
>>
>> Added "TB" to the docs, added the macro KB_PER_TB, and made "show" to print
>> "1TB" rather than "1024GB".
>
> Looks good to me. But I found you forgot to change postgresql.conf.sample,
> so I changed it and attached the updated version of the patch.
>
> Barring any objection to this patch, and if no one else picks it up,
> I will commit it.

In truth, I hadn't realised somebody had added this to the CF. It was
meant to be an exploration and demonstration that further work was/is
required rather than a production quality submission. AFAICS it is
still limited to '1 TB' only...

Thank you both for adding to this patch. Since you've done that, it
seems churlish of me to interrupt that commit.

I will make a note to extend the support to higher TB values later.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: SET work_mem = '1TB';

From
Josh Berkus
Date:
> In truth, I hadn't realised somebody had added this to the CF. It was
> meant to be an exploration and demonstration that further work was/is
> required rather than a production quality submission. AFAICS it is
> still limited to '1 TB' only...

At the beginning of the CF, I do a sweep of patch files emailed to
-hackers and not in the CF.  I believe there were three such patches of
yours; take a look at the CF list.  Like I said, better to track them
unnecessarily than to lose them.

> Thank you both for adding to this patch. Since you've done that, it
> seems churlish of me to interrupt that commit.

Well, I think that someone needs to actually test doing a sort with,
say, 100GB of RAM and make sure it doesn't crash.  Anyone have a machine
they can try that on?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: SET work_mem = '1TB';

From
Stephen Frost
Date:
* Josh Berkus (josh@agliodbs.com) wrote:
> Well, I think that someone needs to actually test doing a sort with,
> say, 100GB of RAM and make sure it doesn't crash.  Anyone have a machine
> they can try that on?

It can be valuable to bump up work_mem well beyond the memory actually
available on the system to get the 'right' plan to be chosen (which
often ends up needing much less actual memory to run).

I've used that trick on a box w/ 512GB of RAM and had near-100GB PG
backend processes which were doing hashjoins.  Don't think I've ever had
it try doing a sort w/ a really big work_mem.
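
Schematically, the reason the trick works (this is not PostgreSQL's
actual costing code; prefer_hashjoin is an invented name): the planner
compares its size estimates against work_mem when choosing plans, so
inflating work_mem steers it toward in-memory strategies even when the
real execution touches far less memory.

static int
prefer_hashjoin(double inner_bytes_estimated, long long work_mem_kb)
{
    /* Schematic: favour a hash join when the estimated inner side
     * fits within work_mem. */
    return inner_bytes_estimated <= (double) work_mem_kb * 1024.0;
}
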
Thanks,
    Stephen

Re: SET work_mem = '1TB';

From
Simon Riggs
Date:
On 18 June 2013 18:45, Josh Berkus <josh@agliodbs.com> wrote:
>
>> In truth, I hadn't realised somebody had added this to the CF. It was
>> meant to be an exploration and demonstration that further work was/is
>> required rather than a production quality submission. AFAICS it is
>> still limited to '1 TB' only...
>
> At the beginning of the CF, I do a sweep of patch files emailed to
> -hackers and not in the CF.  I believe there were three such patches of
> yours; take a look at the CF list.  Like I said, better to track them
> unnecessarily than to lose them.

Thanks. Please delete the patch marked "Batch API for After Triggers".
All others are submissions by me.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: SET work_mem = '1TB';

From
Josh Berkus
Date:
On 06/18/2013 10:59 AM, Simon Riggs wrote:

> Thanks. Please delete the patch marked "Batch API for After Triggers".
> All others are submissions by me.

The CF app doesn't permit deletion of patches, so I marked it "returned
with feedback".

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: SET work_mem = '1TB';

From
Fujii Masao
Date:
On Wed, Jun 19, 2013 at 2:40 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On 18 June 2013 17:10, Fujii Masao <masao.fujii@gmail.com> wrote:
>> On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
>>> On Tuesday, May 21, 2013, Simon Riggs wrote:
>>>>
>>>> I worked up a small patch to support a terabyte setting for memory.
>>>> Which is OK, but it only works for 1TB, not for 2TB or above.
>>>
>>>
>>> I've incorporated my review into a new version, attached.
>>>
>>> Added "TB" to the docs, added the macro KB_PER_TB, and made "show" to print
>>> "1TB" rather than "1024GB".
>>
>> Looks good to me. But I found you forgot to change postgresql.conf.sample,
>> so I changed it and attached the updated version of the patch.
>>
>> Barring any objection to this patch, and if no one else picks it up,
>> I will commit it.
>
> In truth, I hadn't realised somebody had added this to the CF. It was
> meant to be an exploration and demonstration that further work was/is
> required rather than a production quality submission. AFAICS it is
> still limited to '1 TB' only...

Yes.

> Thank you both for adding to this patch. Since you've done that, it
> seems churlish of me to interrupt that commit.

I was thinking that this is the infrastructure patch for your future
proposal, i.e., supporting higher TB values. But if it interferes with
your future proposal, I'm of course okay with dropping this patch. Thoughts?

Regards,

-- 
Fujii Masao



Re: SET work_mem = '1TB';

From
Simon Riggs
Date:
On 18 June 2013 22:57, Fujii Masao <masao.fujii@gmail.com> wrote:
> On Wed, Jun 19, 2013 at 2:40 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> On 18 June 2013 17:10, Fujii Masao <masao.fujii@gmail.com> wrote:
>>> On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
>>>> On Tuesday, May 21, 2013, Simon Riggs wrote:
>>>>>
>>>>> I worked up a small patch to support a terabyte setting for memory.
>>>>> Which is OK, but it only works for 1TB, not for 2TB or above.
>>>>
>>>>
>>>> I've incorporated my review into a new version, attached.
>>>>
>>>> Added "TB" to the docs, added the macro KB_PER_TB, and made "show" to print
>>>> "1TB" rather than "1024GB".
>>>
>>> Looks good to me. But I found you forgot to change postgresql.conf.sample,
>>> so I changed it and attached the updated version of the patch.
>>>
>>> Barring any objection to this patch, and if no one else picks it up,
>>> I will commit it.
>>
>> In truth, I hadn't realised somebody had added this to the CF. It was
>> meant to be an exploration and demonstration that further work was/is
>> required rather than a production quality submission. AFAICS it is
>> still limited to '1 TB' only...
>
> Yes.
>
>> Thank you both for adding to this patch. Since you've done that, it
>> seems churlish of me to interrupt that commit.
>
> I was thinking that this is the infrastructure patch for your future
> proposal, i.e., supporting higher TB values. But if it interferes with
> your future proposal, I'm of course okay with dropping this patch. Thoughts?

Yes, please commit.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: SET work_mem = '1TB';

From
Fujii Masao
Date:
On Wed, Jun 19, 2013 at 4:47 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On 18 June 2013 22:57, Fujii Masao <masao.fujii@gmail.com> wrote:
>> On Wed, Jun 19, 2013 at 2:40 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
>>> On 18 June 2013 17:10, Fujii Masao <masao.fujii@gmail.com> wrote:
>>>> On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
>>>>> On Tuesday, May 21, 2013, Simon Riggs wrote:
>>>>>>
>>>>>> I worked up a small patch to support a terabyte setting for memory.
>>>>>> Which is OK, but it only works for 1TB, not for 2TB or above.
>>>>>
>>>>>
>>>>> I've incorporated my review into a new version, attached.
>>>>>
>>>>> Added "TB" to the docs, added the macro KB_PER_TB, and made "show" to print
>>>>> "1TB" rather than "1024GB".
>>>>
>>>> Looks good to me. But I found you forgot to change postgresql.conf.sample,
>>>> so I changed it and attached the updated version of the patch.
>>>>
>>>> Barring any objection to this patch, and if no one else picks it up,
>>>> I will commit it.
>>>
>>> In truth, I hadn't realised somebody had added this to the CF. It was
>>> meant to be an exploration and demonstration that further work was/is
>>> required rather than a production quality submission. AFAICS it is
>>> still limited to '1 TB' only...
>>
>> Yes.
>>
>>> Thank you both for adding to this patch. Since you've done that, it
>>> seems churlish of me to interrupt that commit.
>>
>> I was thinking that this is the infrastructure patch for your future
>> proposal, i.e., supporting higher TB values. But if it interferes with
>> your future proposal, I'm of course okay with dropping this patch. Thoughts?
>
> Yes, please commit.

Committed.

Regards,

-- 
Fujii Masao