[sqlsmith] PANIC: failed to add BRIN tuple - Mailing list pgsql-hackers

From Andreas Seltenreich
Subject [sqlsmith] PANIC: failed to add BRIN tuple
Date
Msg-id 87mvni77jh.fsf@elite.ansel.ydns.eu
Responses Re: [sqlsmith] PANIC: failed to add BRIN tuple
List pgsql-hackers
There was one instance of this PANIC when testing with the regression
database of master at commit 50e5315.

,----
| WARNING:  specified item offset is too large
| PANIC:  failed to add BRIN tuple
| server closed the connection unexpectedly
`----

It is reproducible with the query below on this instance only.  I've put
the data directory (20MB) here:
   http://ansel.ydns.eu/~andreas/brincrash.tar.xz
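For anyone wanting to try it, reproduction might look like the sketch
below. The paths and file names are hypothetical (I don't know the name
of the data directory inside the tarball), and it assumes a server built
from master at the same commit, with the UPDATE below saved as crash.sql:

```shell
# Fetch and unpack the posted data directory (hypothetical local paths)
wget http://ansel.ydns.eu/~andreas/brincrash.tar.xz
tar -xJf brincrash.tar.xz

# Start a server on the unpacked data directory, then run the offending
# UPDATE against the regression database; this should trigger the PANIC
pg_ctl -D brincrash -l server.log start
psql regression -f crash.sql
```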

The instance was running on Debian Jessie amd64.  The query and backtrace
are below.

regards,
Andreas

--8<---------------cut here---------------start------------->8---
update public.brintest set byteacol = null, charcol =
public.brintest.charcol, int2col = null, int4col =
public.brintest.int4col, textcol = public.brintest.textcol, oidcol =
cast(coalesce(cast(coalesce(null, public.brintest.oidcol) as oid),
pg_catalog.pg_my_temp_schema()) as oid), tidcol =
public.brintest.tidcol, float8col = public.brintest.float8col,
macaddrcol = null, cidrcol = public.brintest.cidrcol, datecol =
public.brintest.datecol, timecol = public.brintest.timecol,
timestamptzcol = pg_catalog.clock_timestamp(), intervalcol =
public.brintest.intervalcol, timetzcol = public.brintest.timetzcol,
bitcol = public.brintest.bitcol, varbitcol =
public.brintest.varbitcol, uuidcol = null returning
public.brintest.byteacol as c0;
--8<---------------cut here---------------end--------------->8---

Core was generated by `postgres: smith regression [local] UPDATE                           '.
Program terminated with signal SIGABRT, Aborted.
#0  0x00007fd2cda67067 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56    ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x00007fd2cda67067 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007fd2cda68448 in __GI_abort () at abort.c:89
#2  0x00000000007ec969 in errfinish (dummy=dummy@entry=0) at elog.c:557
#3  0x00000000007f011c in elog_finish (elevel=elevel@entry=20, fmt=fmt@entry=0x82ca8f "failed to add BRIN tuple") at elog.c:1378
#4  0x0000000000470618 in brin_doupdate (idxrel=0x101f4c0, pagesPerRange=1, revmap=0x10d20e50, heapBlk=8, oldbuf=2878, oldoff=9, origtup=0x10d864a8, origsz=6144, newtup=0x5328a88, newsz=6144, samepage=1 '\001') at brin_pageops.c:184
#5  0x000000000046e5bb in brininsert (idxRel=0x101f4c0, values=0x211b, nulls=0x6 <error: Cannot access memory at address 0x6>, heaptid=0xffffffffffffffff, heapRel=0x7fd2ce6fd700, checkUnique=UNIQUE_CHECK_NO) at brin.c:244
#6  0x00000000005d887f in ExecInsertIndexTuples (slot=0xe92a560, tupleid=0x10d21084, estate=0x9ed8a68, noDupErr=0 '\000', specConflict=0x0, arbiterIndexes=0x0) at execIndexing.c:383
#7  0x00000000005f74d5 in ExecUpdate (tupleid=0x7ffe11ea74a0, oldtuple=0x211b, slot=0xe92a560, planSlot=0xffffffffffffffff, epqstate=0x7fd2ce6fd700, estate=0x9ed8a68, canSetTag=1 '\001') at nodeModifyTable.c:1015
#8  0x00000000005f7b6c in ExecModifyTable (node=0x9ed8d28) at nodeModifyTable.c:1501
#9  0x00000000005dd5d8 in ExecProcNode (node=node@entry=0x9ed8d28) at execProcnode.c:396
#10 0x00000000005d962f in ExecutePlan (dest=0xde86040, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_UPDATE, use_parallel_mode=<optimized out>, planstate=0x9ed8d28, estate=0x9ed8a68) at execMain.c:1567
#11 standard_ExecutorRun (queryDesc=0xde860d8, direction=<optimized out>, count=0) at execMain.c:338
#12 0x00000000006f74c9 in ProcessQuery (plan=<optimized out>, sourceText=0xd74e88 "update public.brintest[...]", params=0x0, dest=0xde86040, completionTag=0x7ffe11ea7670 "") at pquery.c:185
#13 0x00000000006f775f in PortalRunMulti (portal=portal@entry=0xde8abf0, isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0xde86040, altdest=0xc96680 <donothingDR>, completionTag=completionTag@entry=0x7ffe11ea7670 "") at pquery.c:1267
#14 0x00000000006f7a0c in FillPortalStore (portal=portal@entry=0xde8abf0, isTopLevel=isTopLevel@entry=1 '\001') at pquery.c:1044
#15 0x00000000006f845d in PortalRun (portal=0xde8abf0, count=9223372036854775807, isTopLevel=<optimized out>, dest=0x9ee76b8, altdest=0x9ee76b8, completionTag=0x7ffe11ea7a20 "") at pquery.c:782
#16 0x00000000006f5c63 in exec_simple_query (query_string=<optimized out>) at postgres.c:1094
#17 PostgresMain (argc=233352176, argv=0xe8ad358, dbname=0xcf7508 "regression", username=0xe8ad3b0 "Xӊ\016") at postgres.c:4059
#18 0x000000000046c8b2 in BackendRun (port=0xd1c580) at postmaster.c:4258
#19 BackendStartup (port=0xd1c580) at postmaster.c:3932
#20 ServerLoop () at postmaster.c:1690
#21 0x000000000069081e in PostmasterMain (argc=argc@entry=4, argv=argv@entry=0xcf64f0) at postmaster.c:1298
#22 0x000000000046d80d in main (argc=4, argv=0xcf64f0) at main.c:228


