Thread: palloc() too large on pg_buffercache with large shared_buffers


From: Kouhei Kaigai
Hello,

It looks to me like pg_buffercache tries to allocate more than 1GB using
palloc() when shared_buffers is larger than 256GB.

# show shared_buffers ;
 shared_buffers
----------------
 280GB
(1 row)

# SELECT buffers, d.datname, coalesce(c.relname, '???')
    FROM (SELECT count(*) buffers, reldatabase, relfilenode
            FROM pg_buffercache group by reldatabase, relfilenode) b
       LEFT JOIN pg_database d ON d.oid = b.reldatabase
       LEFT JOIN pg_class c ON d.oid = (SELECT oid FROM pg_database
                                         WHERE datname = current_database())
                           AND b.relfilenode = pg_relation_filenode(c.oid)
           ORDER BY buffers desc;
ERROR:  invalid memory alloc request size 1174405120

This looks like a case where MemoryContextAllocHuge() should be used
instead of palloc(). It may also need back-patching.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei <kaigai@ak.jp.nec.com>




Re: palloc() too large on pg_buffercache with large shared_buffers

From: Robert Haas
On Wed, Sep 14, 2016 at 12:13 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> It looks to me like pg_buffercache tries to allocate more than 1GB using
> palloc() when shared_buffers is larger than 256GB.
>
> # show shared_buffers ;
>  shared_buffers
> ----------------
>  280GB
> (1 row)
>
> # SELECT buffers, d.datname, coalesce(c.relname, '???')
>     FROM (SELECT count(*) buffers, reldatabase, relfilenode
>             FROM pg_buffercache group by reldatabase, relfilenode) b
>        LEFT JOIN pg_database d ON d.oid = b.reldatabase
>        LEFT JOIN pg_class c ON d.oid = (SELECT oid FROM pg_database
>                                          WHERE datname = current_database())
>                            AND b.relfilenode = pg_relation_filenode(c.oid)
>            ORDER BY buffers desc;
> ERROR:  invalid memory alloc request size 1174405120
>
> This looks like a case where MemoryContextAllocHuge() should be used
> instead of palloc(). It may also need back-patching.

I guess so.  Although it's not very desirable for it to use that much
memory, I suppose if you have a terabyte of shared_buffers you
probably have 4GB of memory on top of that to show what they contain.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: palloc() too large on pg_buffercache with large shared_buffers

From: Kouhei Kaigai
> On Wed, Sep 14, 2016 at 12:13 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> > [...]
> 
> I guess so.  Although it's not very desirable for it to use that much
> memory, I suppose if you have a terabyte of shared_buffers you
> probably have 4GB of memory on top of that to show what they contain.
>
Exactly. I found this problem when someone asked me why shared_buffers=280GB
was slower than shared_buffers=128MB for scanning a 350GB table.
As I expected, most of the shared buffers were not in use, and the oversized
setting also reduced the amount of free memory available for the OS page cache.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei <kaigai@ak.jp.nec.com>

Re: palloc() too large on pg_buffercache with large shared_buffers

From: Robert Haas
On Wed, Sep 14, 2016 at 7:59 PM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
>> On Wed, Sep 14, 2016 at 12:13 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
>> > [...]
>>
>> I guess so.  Although it's not very desirable for it to use that much
>> memory, I suppose if you have a terabyte of shared_buffers you
>> probably have 4GB of memory on top of that to show what they contain.
>>
> Exactly. I found this problem when someone asked me why shared_buffers=280GB
> was slower than shared_buffers=128MB for scanning a 350GB table.
> As I expected, most of the shared buffers were not in use, and the oversized
> setting also reduced the amount of free memory available for the OS page cache.

OK.  Committed and back-patched to 9.4.  There's no support for huge
allocations before that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
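
[Editorial note: the committed fix follows the pattern of switching the per-buffer array allocation from palloc() to MemoryContextAllocHuge(), which bypasses the ~1GB MaxAllocSize cap. The sketch below uses the struct and field names from contrib/pg_buffercache as a guide; treat it as illustrative of the pattern rather than the exact committed diff.]

```c
/* In pg_buffercache_pages(), the per-buffer state array is sized by
 * NBuffers; with very large shared_buffers the request exceeds 1GB,
 * so the huge-allocation variant is needed:
 */
-   fctx->record = (BufferCachePagesRec *)
-       palloc(sizeof(BufferCachePagesRec) * NBuffers);
+   fctx->record = (BufferCachePagesRec *)
+       MemoryContextAllocHuge(CurrentMemoryContext,
+                              sizeof(BufferCachePagesRec) * NBuffers);
```

MemoryContextAllocHuge() was introduced in PostgreSQL 9.4, which is why the back-patch stops there.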