Thread: pg13.2: invalid memory alloc request size NNNN

pg13.2: invalid memory alloc request size NNNN

From
Justin Pryzby
Date:
ts=# \errverbose 
ERROR:  XX000: invalid memory alloc request size 18446744073709551613

#0  pg_re_throw () at elog.c:1716
#1  0x0000000000a33b12 in errfinish (filename=0xbff20e "mcxt.c", lineno=959, funcname=0xbff2db <__func__.6684>
"palloc")at elog.c:502
 
#2  0x0000000000a6760d in palloc (size=18446744073709551613) at mcxt.c:959
#3  0x00000000009fb149 in text_to_cstring (t=0x2aaae8023010) at varlena.c:212
#4  0x00000000009fbf05 in textout (fcinfo=0x2094538) at varlena.c:557
#5  0x00000000006bdd50 in ExecInterpExpr (state=0x2093990, econtext=0x20933d8, isnull=0x7fff5bf04a87) at
execExprInterp.c:1112
#6  0x00000000006d4f18 in ExecEvalExprSwitchContext (state=0x2093990, econtext=0x20933d8, isNull=0x7fff5bf04a87) at
../../../src/include/executor/executor.h:316
#7  0x00000000006d4f81 in ExecProject (projInfo=0x2093988) at ../../../src/include/executor/executor.h:350
#8  0x00000000006d5371 in ExecScan (node=0x20932c8, accessMtd=0x7082e0 <SeqNext>, recheckMtd=0x708385 <SeqRecheck>) at
execScan.c:238
#9  0x00000000007083c2 in ExecSeqScan (pstate=0x20932c8) at nodeSeqscan.c:112
#10 0x00000000006d1b00 in ExecProcNodeInstr (node=0x20932c8) at execProcnode.c:466
#11 0x00000000006e742c in ExecProcNode (node=0x20932c8) at ../../../src/include/executor/executor.h:248
#12 0x00000000006e77de in ExecAppend (pstate=0x2089208) at nodeAppend.c:267
#13 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2089208) at execProcnode.c:466
#14 0x000000000070964f in ExecProcNode (node=0x2089208) at ../../../src/include/executor/executor.h:248
#15 0x0000000000709795 in ExecSort (pstate=0x2088ff8) at nodeSort.c:108
#16 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2088ff8) at execProcnode.c:466
#17 0x00000000006d1ad1 in ExecProcNodeFirst (node=0x2088ff8) at execProcnode.c:450
#18 0x00000000006dec36 in ExecProcNode (node=0x2088ff8) at ../../../src/include/executor/executor.h:248
#19 0x00000000006df079 in fetch_input_tuple (aggstate=0x2088a20) at nodeAgg.c:589
#20 0x00000000006e1fad in agg_retrieve_direct (aggstate=0x2088a20) at nodeAgg.c:2368
#21 0x00000000006e1bfd in ExecAgg (pstate=0x2088a20) at nodeAgg.c:2183
#22 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2088a20) at execProcnode.c:466
#23 0x00000000006d1ad1 in ExecProcNodeFirst (node=0x2088a20) at execProcnode.c:450
#24 0x00000000006c6ffa in ExecProcNode (node=0x2088a20) at ../../../src/include/executor/executor.h:248
#25 0x00000000006c966b in ExecutePlan (estate=0x2032f48, planstate=0x2088a20, use_parallel_mode=false,
operation=CMD_SELECT, sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0xbb3400 <donothingDR>,
    execute_once=true) at execMain.c:1632

#3  0x00000000009fb149 in text_to_cstring (t=0x2aaae8023010) at varlena.c:212
212             result = (char *) palloc(len + 1);

(gdb) l
207             /* must cast away the const, unfortunately */
208             text       *tunpacked = pg_detoast_datum_packed(unconstify(text *, t));
209             int                     len = VARSIZE_ANY_EXHDR(tunpacked);
210             char       *result;
211
212             result = (char *) palloc(len + 1);

(gdb) p len
$1 = -4

This VM had some issue early today and I killed the VM, causing PG to execute
recovery.  I'm tentatively blaming that on zfs, so this could conceivably be a
data error (although recovery supposedly would have resolved it).  I just
checked and data_checksums=off.

The query has mode(), string_agg(), distinct.

Here's a redacted plan for the query:

 GroupAggregate  (cost=15681340.44..20726393.56 rows=908609 width=618)
   Group Key: (((COALESCE(a.ii, $0) || lpad(a.ii, 5, '0'::text)) || lpad(a.ii, 5, '0'::text))), a.ii, (COALESCE(a.ii, $2)),
     (CASE (a.ii)::integer WHEN 1 THEN 'qq'::text WHEN 2 THEN 'qq'::text WHEN 3 THEN 'qq'::text WHEN 4 THEN 'qq'::text
     WHEN 5 THEN 'qq qq'::text WHEN 6 THEN 'qq-qq'::text ELSE a.ii END),
     (CASE WHEN (COALESCE(a.ii, $3) = substr(a.ii, 1, length(COALESCE(a.ii, $4)))) THEN 'qq qq'::text
     WHEN (hashed SubPlan 7) THEN 'qq qq'::text ELSE 'qq qq qq'::text END)
   InitPlan 1 (returns $0)
     ->  Seq Scan on d
   InitPlan 3 (returns $2)
     ->  Seq Scan on d d
   InitPlan 4 (returns $3)
     ->  Seq Scan on d d
   InitPlan 5 (returns $4)
     ->  Seq Scan on d d
   InitPlan 6 (returns $5)
     ->  Seq Scan on d d
   ->  Sort  (cost=15681335.39..15704050.62 rows=9086093 width=313)
         Sort Key: (((COALESCE(a.ii, $0) || lpad(a.ii, 5, '0'::text)) || lpad(a.ii, 5, '0'::text))), a.ii, (COALESCE(a.ii, $2)),
           (CASE (a.ii)::integer WHEN 1 THEN 'qq'::text WHEN 2 THEN 'qq'::text WHEN 3 THEN 'qq'::text WHEN 4 THEN 'qq'::text
           WHEN 5 THEN 'qq qq'::text WHEN 6 THEN 'qq-qq'::text ELSE a.ii END),
           (CASE WHEN (COALESCE(a.ii, $3) = substr(a.ii, 1, length(COALESCE(a.ii, $4)))) THEN 'qq qq'::text
           WHEN (hashed SubPlan 7) THEN 'qq qq'::text ELSE 'qq qq qq'::text END)
         ->  Append  (cost=1.01..13295792.30 rows=9086093 width=313)
               ->  Seq Scan on a a  (cost=1.01..5689033.34 rows=3948764 width=328)
                      Filter: ((ii >= '2021-02-10 00:00:00+10'::timestamp with time zone) AND (ii < '2021-02-11 00:00:00+10'::timestamp with time zone))
                     SubPlan 7
                       ->  Seq Scan on d d  (cost=0.00..1.01 rows=1 width=7)
               ->  Seq Scan on b  (cost=1.01..12.75 rows=1 width=417)
                      Filter: ((ii >= '2021-02-10 00:00:00+10'::timestamp with time zone) AND (ii < '2021-02-11 00:00:00+10'::timestamp with time zone))
                     SubPlan 11
                       ->  Seq Scan on d d  (cost=0.00..1.01 rows=1 width=7)
               ->  Seq Scan on c c  (cost=1.01..7561315.74 rows=5137328 width=302)
                      Filter: ((ii >= '2021-02-10 00:00:00+10'::timestamp with time zone) AND (ii < '2021-02-11 00:00:00+10'::timestamp with time zone))
                     SubPlan 14
                       ->  Seq Scan on d d  (cost=0.00..1.01 rows=1 width=7)

I restored to a test cluster, but so far have not been able to reproduce the issue there,
so I'm soliciting suggestions for how to debug it further.

-- 
Justin



Re: pg13.2: invalid memory alloc request size NNNN

From
Justin Pryzby
Date:
On Thu, Feb 11, 2021 at 07:48:37PM -0600, Justin Pryzby wrote:
> #3  0x00000000009fb149 in text_to_cstring (t=0x2aaae8023010) at varlena.c:212
> 212             result = (char *) palloc(len + 1);
> 
> (gdb) l
> 207             /* must cast away the const, unfortunately */
> 208             text       *tunpacked = pg_detoast_datum_packed(unconstify(text *, t));
> 209             int                     len = VARSIZE_ANY_EXHDR(tunpacked);
> 210             char       *result;
> 211
> 212             result = (char *) palloc(len + 1);
> 
> (gdb) p len
> $1 = -4

I reproduced this with a simpler query:

ts=# explain analyze SELECT CASE rattype::integer WHEN NNNNN THEN '.......' END AS ra_type FROM t WHERE tm BETWEEN '2021-02-11 23:55' AND '2021-02-11 23:56';

#0  pg_re_throw () at elog.c:1714
#1  0x00000000008aa0f6 in errfinish (filename=<optimized out>, filename@entry=0xa3ff9e "mcxt.c",
lineno=lineno@entry=959, funcname=funcname@entry=0xa400f8 <__func__.7429> "palloc") at elog.c:502
#2  0x00000000008d2344 in palloc (size=18446744073709551613) at mcxt.c:959
#3  0x00000000008819ab in text_to_cstring (t=0x2aaad43ad008) at varlena.c:212
#4  0x0000000000629b1d in ExecInterpExpr (state=0x1df4350, econtext=0x1df34b8, isnull=<optimized out>) at
execExprInterp.c:1112
#5  0x0000000000636c22 in ExecEvalExprSwitchContext (isNull=0x7ffc2a0ed0d7, econtext=0x1df34b8, state=0x1df4350) at
../../../src/include/executor/executor.h:316
#6  ExecProject (projInfo=0x1df4348) at ../../../src/include/executor/executor.h:350
#7  ExecScan (node=<optimized out>, accessMtd=0x644170 <BitmapHeapNext>, recheckMtd=0x644b40 <BitmapHeapRecheck>) at
execScan.c:238
#8  0x0000000000633ef8 in ExecProcNodeInstr (node=0x1df32a8) at execProcnode.c:466
#9  0x000000000062d192 in ExecProcNode (node=0x1df32a8) at ../../../src/include/executor/executor.h:248
#10 ExecutePlan (execute_once=<optimized out>, dest=0x9fc360 <donothingDR>, direction=<optimized out>, numberTuples=0,
sendTuples=true, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1df32a8,
    estate=0x1df3078) at execMain.c:1632

(gdb) p (varattrib_1b)t
$16 = {va_header = 8 '\b', va_data = 0x7ffcf4bd4bb9 "\200}\310\324\177"}

(gdb) p ((varattrib_4b)t)->va_4byte->va_header
$22 = 3363667976

(gdb) down
#4  0x00000000009fbf05 in textout (fcinfo=0x2fd5e48) at varlena.c:557
557             PG_RETURN_CSTRING(TextDatumGetCString(txt));
(gdb) p *fcinfo 
$1 = {flinfo = 0x2fd5df8, context = 0x0, resultinfo = 0x0, fncollation = 0, isnull = false, nargs = 1, args =
0x2fd5e68}
(gdb) p *fcinfo->args
$2 = {value = 140551873462280, isnull = false}

Just now, this cluster was killed while creating an index on a separate table:

Program received signal SIGSEGV, Segmentation fault.
0x00007ff549e53f33 in __memcpy_sse2 () from /lib64/libc.so.6
(gdb) bt
#0  0x00007ff549e53f33 in __memcpy_sse2 () from /lib64/libc.so.6
#1  0x00000000009fee14 in varstrfastcmp_locale (a1p=0x363e424 "", len1=-4, a2p=0x3a81a51 "", len2=15, ssup=0x2e100c8)
at varlena.c:2337
#2  0x00000000009febbd in varlenafastcmp_locale (x=56878112, y=61348432, ssup=0x2e100c8) at varlena.c:2249
#3  0x0000000000a6ea4d in ApplySortComparator (datum1=56878112, isNull1=false, datum2=61348432, isNull2=false,
ssup=0x2e100c8) at ../../../../src/include/utils/sortsupport.h:224
#4  0x0000000000a7938f in comparetup_index_btree (a=0x7ff53b375798, b=0x7ff53a4a5048, state=0x2e0fc28) at
tuplesort.c:4147
#5  0x0000000000a6f237 in qsort_tuple (a=0x7ff53a4a5048, n=1071490, cmp_tuple=0xa79318 <comparetup_index_btree>,
state=0x2e0fc28) at qsort_tuple.c:150
#6  0x0000000000a74b53 in tuplesort_sort_memtuples (state=0x2e0fc28) at tuplesort.c:3490
#7  0x0000000000a7427b in dumptuples (state=0x2e0fc28, alltuples=true) at tuplesort.c:3156
#8  0x0000000000a72397 in tuplesort_performsort (state=0x2e0fc28) at tuplesort.c:2038
#9  0x00000000005011d2 in _bt_leafbuild (btspool=0x2e139a8, btspool2=0x0) at nbtsort.c:553
#10 0x0000000000500d56 in btbuild (heap=0x7ff54afef410, index=0x7ff54aff5250, indexInfo=0x2d5e8e8) at nbtsort.c:333
#11 0x00000000005787d1 in index_build (heapRelation=0x7ff54afef410, indexRelation=0x7ff54aff5250, indexInfo=0x2d5e8e8,
isreindex=false, parallel=true) at index.c:2962
#12 0x0000000000575ba7 in index_create (heapRelation=0x7ff54afef410, indexRelationName=0x2d624a8
"cdrs_huawei_sgsnpdprecord_2021_02_12_servedimsi_idx",indexRelationId=3880431557, parentIndexRelid=0,
parentConstraintId=0,
    relFileNode=0, indexInfo=0x2d5e8e8, indexColNames=0x2e12ae0, accessMethodObjectId=403, tableSpaceId=3787872951,
collationObjectId=0x2e12c08, classObjectId=0x2e12c50, coloptions=0x2e12c68, reloptions=48311088, flags=0,
    constr_flags=0, allow_system_table_mods=false, is_internal=false, constraintId=0x7ffc8964c23c) at index.c:1231
#13 0x0000000000651cd9 in DefineIndex (relationId=3840862493, stmt=0x2d5e7a8, indexRelationId=0, parentIndexId=0,
parentConstraintId=0, is_alter_table=false, check_rights=true, check_not_in_use=true, skip_build=false,
    quiet=false) at indexcmds.c:1105
#14 0x00000000008c620d in ProcessUtilitySlow (pstate=0x2d62398, pstmt=0x2d3bd78,
    queryString=0x2d3ae48 "CREATE INDEX ii ON tt (cc) WITH (FILLFACTOR=100) TABLESPACE cdr_index;",
    context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x2d3c038, qc=0x7ffc8964cdd0) at utility.c:1517

(gdb) up
#2  0x00000000009febbd in varlenafastcmp_locale (x=56878112, y=61348432, ssup=0x2e100c8) at varlena.c:2249
2249            result = varstrfastcmp_locale(a1p, len1, a2p, len2, ssup);

(gdb) l
2244            a2p = VARDATA_ANY(arg2);
2245
2246            len1 = VARSIZE_ANY_EXHDR(arg1);
2247            len2 = VARSIZE_ANY_EXHDR(arg2);
2248
2249            result = varstrfastcmp_locale(a1p, len1, a2p, len2, ssup);
2250
2251            /* We can't afford to leak memory here. */
2252            if (PointerGetDatum(arg1) != x)
2253                    pfree(arg1);

(gdb) p len1
$1 = -4
(gdb) p len2
$2 = 15

And now I was able to crash it again by creating an index on a third, similar table.

FYI, this is running centos 7.8.



Re: pg13.2: invalid memory alloc request size NNNN

From
Tomas Vondra
Date:

On 2/12/21 2:48 AM, Justin Pryzby wrote:
> ts=# \errverbose
> ERROR:  XX000: invalid memory alloc request size 18446744073709551613
> 
> #0  pg_re_throw () at elog.c:1716
> #1  0x0000000000a33b12 in errfinish (filename=0xbff20e "mcxt.c", lineno=959, funcname=0xbff2db <__func__.6684>
"palloc")at elog.c:502
 
> #2  0x0000000000a6760d in palloc (size=18446744073709551613) at mcxt.c:959
> #3  0x00000000009fb149 in text_to_cstring (t=0x2aaae8023010) at varlena.c:212
> #4  0x00000000009fbf05 in textout (fcinfo=0x2094538) at varlena.c:557
> #5  0x00000000006bdd50 in ExecInterpExpr (state=0x2093990, econtext=0x20933d8, isnull=0x7fff5bf04a87) at
execExprInterp.c:1112
> #6  0x00000000006d4f18 in ExecEvalExprSwitchContext (state=0x2093990, econtext=0x20933d8, isNull=0x7fff5bf04a87) at
../../../src/include/executor/executor.h:316
> #7  0x00000000006d4f81 in ExecProject (projInfo=0x2093988) at ../../../src/include/executor/executor.h:350
> #8  0x00000000006d5371 in ExecScan (node=0x20932c8, accessMtd=0x7082e0 <SeqNext>, recheckMtd=0x708385 <SeqRecheck>)
at execScan.c:238
> #9  0x00000000007083c2 in ExecSeqScan (pstate=0x20932c8) at nodeSeqscan.c:112
> #10 0x00000000006d1b00 in ExecProcNodeInstr (node=0x20932c8) at execProcnode.c:466
> #11 0x00000000006e742c in ExecProcNode (node=0x20932c8) at ../../../src/include/executor/executor.h:248
> #12 0x00000000006e77de in ExecAppend (pstate=0x2089208) at nodeAppend.c:267
> #13 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2089208) at execProcnode.c:466
> #14 0x000000000070964f in ExecProcNode (node=0x2089208) at ../../../src/include/executor/executor.h:248
> #15 0x0000000000709795 in ExecSort (pstate=0x2088ff8) at nodeSort.c:108
> #16 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2088ff8) at execProcnode.c:466
> #17 0x00000000006d1ad1 in ExecProcNodeFirst (node=0x2088ff8) at execProcnode.c:450
> #18 0x00000000006dec36 in ExecProcNode (node=0x2088ff8) at ../../../src/include/executor/executor.h:248
> #19 0x00000000006df079 in fetch_input_tuple (aggstate=0x2088a20) at nodeAgg.c:589
> #20 0x00000000006e1fad in agg_retrieve_direct (aggstate=0x2088a20) at nodeAgg.c:2368
> #21 0x00000000006e1bfd in ExecAgg (pstate=0x2088a20) at nodeAgg.c:2183
> #22 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2088a20) at execProcnode.c:466
> #23 0x00000000006d1ad1 in ExecProcNodeFirst (node=0x2088a20) at execProcnode.c:450
> #24 0x00000000006c6ffa in ExecProcNode (node=0x2088a20) at ../../../src/include/executor/executor.h:248
> #25 0x00000000006c966b in ExecutePlan (estate=0x2032f48, planstate=0x2088a20, use_parallel_mode=false,
operation=CMD_SELECT, sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0xbb3400 <donothingDR>,
>      execute_once=true) at execMain.c:1632
> 
> #3  0x00000000009fb149 in text_to_cstring (t=0x2aaae8023010) at varlena.c:212
> 212             result = (char *) palloc(len + 1);
> 
> (gdb) l
> 207             /* must cast away the const, unfortunately */
> 208             text       *tunpacked = pg_detoast_datum_packed(unconstify(text *, t));
> 209             int                     len = VARSIZE_ANY_EXHDR(tunpacked);
> 210             char       *result;
> 211
> 212             result = (char *) palloc(len + 1);
> 
> (gdb) p len
> $1 = -4
> 
> This VM had some issue early today and I killed the VM, causing PG to execute
> recovery.  I'm tentatively blaming that on zfs, so this could conceivably be a
> data error (although recovery supposedly would have resolved it).  I just
> checked and data_checksums=off.
> 

This seems very much like a corrupted varlena header - length (-4) is 
clearly bogus, and it's what triggers the problem, because that's what 
wraps around to 18446744073709551613 (which is 0xFFFFFFFFFFFFFFFD).

This has to be a value stored in a table, not some intermediate value 
created during execution. So I don't think the exact query matters. Can 
you try doing something like pg_dump, which has to detoast everything?

The question is whether this is due to the VM getting killed in some 
strange way (what VM system is this, how is the storage mounted?) or 
whether the recovery is borked and failed to do the right thing.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: pg13.2: invalid memory alloc request size NNNN

From
Justin Pryzby
Date:
On Fri, Feb 12, 2021 at 06:44:54PM +0100, Tomas Vondra wrote:
> > (gdb) p len
> > $1 = -4
> > 
> > This VM had some issue early today and I killed the VM, causing PG to execute
> > recovery.  I'm tentatively blaming that on zfs, so this could conceivably be a
> > data error (although recovery supposedly would have resolved it).  I just
> > checked and data_checksums=off.
> 
> This seems very much like a corrupted varlena header - length (-4) is
> clearly bogus, and it's what triggers the problem, because that's what wraps
> around to 18446744073709551613 (which is 0xFFFFFFFFFFFFFFFD).
> 
> This has to be a value stored in a table, not some intermediate value
> created during execution. So I don't think the exact query matters. Can you
> try doing something like pg_dump, which has to detoast everything?

Right, COPY fails and VACUUM FULL crashes.

message                | invalid memory alloc request size 18446744073709551613
query                  | COPY child.tt TO '/dev/null';

> The question is whether this is due to the VM getting killed in some strange
> way (what VM system is this, how is the storage mounted?) or whether the
> recovery is borked and failed to do the right thing.

This is qemu/kvm, with block storage:
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/data/postgres'/>

And then more block devices for ZFS vdevs:
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/data/zfs2'/>
      ...

Those are LVM volumes (I know that ZFS/LVM is discouraged).

$ zpool list -v
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zfs          762G   577G   185G        -         -    71%    75%  1.00x    ONLINE  -
  vdj        127G  92.7G  34.3G        -         -    64%  73.0%      -  ONLINE  
  vdd        127G  95.6G  31.4G        -         -    74%  75.2%      -  ONLINE  
  vdf        127G  96.0G  31.0G        -         -    75%  75.6%      -  ONLINE  
  vdg        127G  95.8G  31.2G        -         -    74%  75.5%      -  ONLINE  
  vdh        127G  95.5G  31.5G        -         -    74%  75.2%      -  ONLINE  
  vdi        128G   102G  25.7G        -         -    71%  79.9%      -  ONLINE  

This was recently upgraded to ZFS 2.0.0, and then to 2.0.1:

Jan 21 09:33:26 Installed: zfs-dkms-2.0.1-1.el7.noarch
Dec 23 08:41:21 Installed: zfs-dkms-2.0.0-1.el7.noarch

The VM has gotten "wedged" and I've had to kill it a few times in the last 24h
(needless to say this is not normal).  That part seems like a kernel issue and
not a postgres problem.  It's unclear if that's due to me trying to tickle the
postgres ERROR.  It's the latest centos7 kernel: 3.10.0-1160.15.2.el7.x86_64

-- 
Justin



Re: pg13.2: invalid memory alloc request size NNNN

From
Greg Stark
Date:
I think to get a size of -4 you would be trying to read a varlena
pointer pointing to four nul bytes. I bet if you run dd on the
corresponding block you'll find a chunk of nuls in the page. That
perhaps makes sense with ZFS where if a new page was linked to the
tree but never written it would be an uninitialized page rather than
the old data.

I'm becoming increasingly convinced that there are a lot of storage
systems out there that just lose data whenever they crash or lose
power. Systems that are supposed to be better than that.