Thread: BUG #18832: Segfault in GrantLockLocal
The following bug has been logged on the website:

Bug reference:      18832
Logged by:          Robins Tharakan
Email address:      tharakan@gmail.com
PostgreSQL version: Unsupported/Unknown
Operating system:   Ubuntu
Description:

While running some tests on a recent HEAD commit (d4a6c847ca), I got 3
segfaults today (2 for ALTER DEFAULT PRIVILEGES and 1 for SELECT
pg_drop_replication_slot()). I just happen to be testing this commit, so I am
not suspecting this commit to be the cause, but what I can add is that I've
run similar workloads last month, and these crashes are too close to be a
coincidence - in effect, I wouldn't be surprised if this is owing to a recent
change. The crashes are frequent but not easy to reproduce (even with the
exact same database), so the timing of concurrent queries is probably
relevant, although despite my best efforts I can't say what else runs
concurrently to cause this. Thought I'd share here in case the backtraces
help someone triage this faster.

SQL
===
The SQLs are entirely different, but they both crashed with almost the exact
same backtrace, thus clubbing them together in this bug report.

ALTER DEFAULT PRIVILEGES FOR ROLE regress_dep_user1 IN SCHEMA deptest GRANT ALL ON TABLES TO regress_dep_user2;

AND

SELECT pg_drop_replication_slot(...);

Commit
======
Not trivial to reproduce - so not much help here

Error Log - ALTER DEFAULT PRIVILEGES
=========
Occurrence 1:
2025-03-06 05:19:43.945 ACDT [241648] LOG: client backend (PID 245735) was terminated by signal 11: Segmentation fault
2025-03-06 05:19:43.945 ACDT [241648] DETAIL: Failed process was running: ALTER DEFAULT PRIVILEGES FOR ROLE regress_dep_user1 IN SCHEMA deptest GRANT ALL ON TABLES TO regress_dep_user2;
2025-03-06 05:19:43.945 ACDT [241648] LOG: terminating any other active server processes
2025-03-06 05:19:43.957 ACDT [241648] LOG: all server processes terminated; reinitializing
2025-03-06 05:19:44.126 ACDT [246034] FATAL: the database system is in recovery mode
2025-03-06 05:19:44.126 ACDT [246033] LOG: database system was interrupted; last known up at 2025-03-06 05:19:12 ACDT
2025-03-06 05:19:44.126 ACDT [246037] FATAL: the database system is in recovery mode

Occurrence 2:
2025-03-06 07:02:39.362 ACDT [441902] LOG: client backend (PID 446070) was terminated by signal 11: Segmentation fault
2025-03-06 07:02:39.362 ACDT [441902] DETAIL: Failed process was running: ALTER DEFAULT PRIVILEGES FOR ROLE regress_dep_user1 IN SCHEMA deptest GRANT ALL ON TABLES TO regress_dep_user2;
2025-03-06 07:02:39.362 ACDT [441902] LOG: terminating any other active server processes
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
2025-03-06 07:02:39.367 ACDT [446319] FATAL: the database system is in recovery mode

Backtrace - ALTER DEFAULT PRIVILEGES
=========
(Was able to capture a backtrace only for the second crash)

Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `postgres: d4a6c847ca@sqith: smith postgres 127.0.0.1(48740) ALTER DEFAU'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00005ac747716802 in GrantLockLocal (locallock=0x5ac749e58398, owner=0x5ac749df1718) at lock.c:1758
1758 lockOwners[i].owner = owner;
(gdb) bt
#0 0x00005ac747716802 in GrantLockLocal (locallock=0x5ac749e58398, owner=0x5ac749df1718) at lock.c:1758
#1 0x00005ac747716a72 in GrantAwaitedLock () at lock.c:1840
#2 0x00005ac74772aa40 in LockErrorCleanup () at proc.c:814
#3 0x00005ac7472324f8 in AbortTransaction () at xact.c:2846
#4 0x00005ac7472330e2 in AbortCurrentTransactionInternal () at xact.c:3495
#5 0x00005ac747233053 in AbortCurrentTransaction () at xact.c:3449
#6 0x00005ac74773fe0f in PostgresMain (dbname=0x5ac749df0e30 "postgres", username=0x5ac749df0e18 "smith") at postgres.c:4408
#7 0x00005ac7477365ad in BackendMain (startup_data=0x7fffa06267f0, startup_data_len=4) at backend_startup.c:107
#8 0x00005ac747640752 in postmaster_child_launch (child_type=B_BACKEND, child_slot=70, startup_data=0x7fffa06267f0, startup_data_len=4, client_sock=0x7fffa0626850) at launch_backend.c:274
#9 0x00005ac74764721a in BackendStartup (client_sock=0x7fffa0626850) at postmaster.c:3519
#10 0x00005ac7476446f3 in ServerLoop () at postmaster.c:1688
#11 0x00005ac747643fe9 in PostmasterMain (argc=3, argv=0x5ac749d6bc10) at postmaster.c:1386
#12 0x00005ac7474e3db5 in main (argc=3, argv=0x5ac749d6bc10) at main.c:230
(gdb)

Backtrace Full - ALTER DEFAULT PRIVILEGES
==============
#0 0x00005ac747716802 in GrantLockLocal (locallock=0x5ac749e58398, owner=0x5ac749df1718) at lock.c:1758
        lockOwners = 0x0
        i = 0
#1 0x00005ac747716a72 in GrantAwaitedLock () at lock.c:1840
No locals.
#2 0x00005ac74772aa40 in LockErrorCleanup () at proc.c:814
        lockAwaited = 0x5ac749e58398
        partitionLock = 0x797ddaff5480
        timeouts = {{id = DEADLOCK_TIMEOUT, keep_indicator = false}, {id = LOCK_TIMEOUT, keep_indicator = true}}
#3 0x00005ac7472324f8 in AbortTransaction () at xact.c:2846
        s = 0x5ac747dba440 <TopTransactionStateData>
        latestXid = 0
        is_parallel_worker = false
        __func__ = "AbortTransaction"
#4 0x00005ac7472330e2 in AbortCurrentTransactionInternal () at xact.c:3495
        s = 0x5ac747dba440 <TopTransactionStateData>
#5 0x00005ac747233053 in AbortCurrentTransaction () at xact.c:3449
No locals.
#6 0x00005ac74773fe0f in PostgresMain (dbname=0x5ac749df0e30 "postgres", username=0x5ac749df0e18 "smith") at postgres.c:4408
        local_sigjmp_buf = {{__jmpbuf = {140735884194072, 3455269497379230227, 3, 0, 99811950217592, 133581917298688, 3455269497461019155, 7314509273468768787}, __mask_was_saved = 1, __saved_mask = {__val = {4194304, 99811983801536, 133581900226632, 4869, 99811983801536, 133581900226632, 643328, 0, 99811983801536, 4294967297, 99811983801536, 140735884191552, 99811943579575, 0, 99811983801536, 140735884191584}}}}
        send_ready_for_query = false
        idle_in_transaction_timeout_enabled = false
        idle_session_timeout_enabled = false
        __func__ = "PostgresMain"
#7 0x00005ac7477365ad in BackendMain (startup_data=0x7fffa06267f0, startup_data_len=4) at backend_startup.c:107
        bsdata = 0x7fffa06267f0
#8 0x00005ac747640752 in postmaster_child_launch (child_type=B_BACKEND, child_slot=70, startup_data=0x7fffa06267f0, startup_data_len=4, client_sock=0x7fffa0626850) at launch_backend.c:274
        pid = 0
#9 0x00005ac74764721a in BackendStartup (client_sock=0x7fffa0626850) at postmaster.c:3519
        bn = 0x797df60a1d38
        pid = 1238813760
        startup_data = {canAcceptConnections = CAC_OK}
        cac = CAC_OK
        __func__ = "BackendStartup"
#10 0x00005ac7476446f3 in ServerLoop () at postmaster.c:1688
        s = {sock = 9, raddr = {addr = {ss_family = 2, __ss_padding = "\276d\177\000\000\001\000\000\000\000\000\000\000\000\230r\336I\307Z\000\000\001\000CM\000\000\000\000\230r\336I\307Z\000\000\020", '\000' <repeats 33 times>, "CM\000\000\000\000\250v\336I\307Z", '\000' <repeats 33 times>, __ss_align = 1296236544}, salen = 16}}
        i = 0
        now = 1741206717
        last_lockfile_recheck_time = 1741206707
        last_touch_time = 1741206707
        events = {{pos = 1, events = 2, fd = 6, user_data = 0x0}, {pos = 0, events = 0, fd = 6, user_data = 0x0}, {pos = 0, events = 0, fd = 1201168209, user_data = 0x0}, {pos = 7, events = 0, fd = 1024, user_data = 0x5ac749d73f50}, {pos = 1296236544, events = 0, fd = 0, user_data = 0x0}, {pos = 0, events = 0, fd = 0, user_data = 0x5ac749de7270}, { pos = 0, events = 0, fd = 2048, user_data = 0x8}, {pos = 1238843216, events = 23239, fd = 1024, user_data = 0x438}, {pos = 1239306792, events = 23239, fd = 1239306808, user_data = 0x4d430001}, {pos = 1239306792, events = 23239, fd = 16, user_data = 0x0}, {pos = 0, events = 0, fd = 0, user_data = 0x4d430002}, {pos = 1239306808, events = 23239, fd = 8, user_data = 0x0}, {pos = 0, events = 0, fd = 0, user_data = 0x4d430000}, {pos = 1239306808, events = 23239, fd = 8, user_data = 0x0}, {pos = 0, events = 0, fd = 1238844208, user_data = 0x7fffa0626b50}, {pos = 1201177455, events = 23239, fd = -1604162944, user_data = 0x5ac749d73f50}, {pos = 0, events = 0, fd = 1024, user_data = 0x0}, {pos = 0, events = 0, fd = 1238843416, user_data = 0x0}, {pos = 5120, events = 0, fd = 1238844240, user_data = 0x2d0}, {pos = 4, events = 0, fd = 16, user_data = 0x5ac747984418 <wipe_mem+162>}, {pos = 784, events = 0, fd = 1238843456, user_data = 0x0}, {pos = 0, events = 0, fd = 1296236545, user_data = 0x5ac749d74040}, {pos = 784, events = 0, fd = 0, user_data = 0x0}, {pos = 0, events = 0, fd = 1296236544, user_data = 0x5ac749d74040}, {pos = 784, events = 0, fd = 0, user_data = 0x0}, {pos = 0, events = 0, fd = 3, user_data = 0x81d41a496f152b00}, {pos = -1604162624, events = 32767, fd = 1201164451, user_data = 0x0}, { pos = 1238843216, events = 23239, fd = 1296236544, user_data = 0xde7298}, {pos = 0, events = 0, fd = 1238843392, user_data = 0x5ac749d73f50}, {pos = 1024, events = 0, fd = 88, user_data = 0x5ac749d74000},
        {pos = 0, events = 0, fd = 1238843456, user_data = 0x7fffa0626c60}, {pos = 1201233573, events = 23239, fd = 1203001705, user_data = 0x5ac749d73f50}, {pos = 0, events = 0, fd = 0, user_data = 0x1304}, {pos = 1238843216, events = 23239, fd = 0, user_data = 0x0}, {pos = 0, events = 0, fd = 0, user_data = 0x1303}, {pos = 1238843216, events = 23239, fd = 0, user_data = 0x0}, {pos = 0, events = 0, fd = 0, user_data = 0x7fffa0626c70}, {pos = 1863658240, events = 2178161225, fd = -1604162384, user_data = 0x5ac749d89b20}, {pos = -1604162352, events = 32767, fd = 0, user_data = 0x5ac749de7270}, {pos = 0, events = 0, fd = 0, user_data = 0x6be31}, {pos = -120, events = 4294967295, fd = 0, user_data = 0x7fffa0626dc0}, {pos = -179355285, events = 31101, fd = 0, user_data = 0x5ac749d89b20}, { pos = 1203001688, events = 23239, fd = 0, user_data = 0x7fffa0626d20}, {pos = 1863658240, events = 2178161225, fd = 0, user_data = 0xffffffffffffff88}, {pos = 0, events = 0, fd = 1239310928, user_data = 0x0}, {pos = -155541504, events = 31101, fd = -1604162208, user_data = 0x797df54addae <__GI___libc_free+126>}, {pos = 1203006005, events = 23239, fd = 1239097824, user_data = 0x1da00000003}, {pos = -179739816, events = 31101, fd = 0, user_data = 0x81d41a496f152b00}, {pos = -179747632, events = 31101, fd = 0, user_data = 0x0}, {pos = 0, events = 0, fd = 1238932456, user_data = 0x81d41a496f152b00}, {pos = 1024, events = 0, fd = -1604161256, user_data = 0x3}, { pos = 1863658240, events = 2178161225, fd = -1604162112, user_data = 0x797df544550d <__GI___sigprocmask+13>}, {pos = -1604161920, events = 32767, fd = 1197737208, user_data = 0x6be3147b46a35}, {pos = 0, events = 0, fd = 4868, user_data = 0x5ac749d89b20}, {pos = 4194304, events = 0, fd = 0, user_data = 0x0}, {pos = 0, events = 0,

Error Log - SELECT pg_drop_replication_slot(...);
=========
2025-03-06 11:07:05.977 ACDT [740840] LOG: duration: 30222.816 ms
2025-03-06 11:07:06.596 ACDT [736676] LOG: client backend (PID 740584) was terminated by signal 11: Segmentation fault
2025-03-06 11:07:06.596 ACDT [736676] DETAIL: Failed process was running: SELECT pg_drop_replication_slot('regress_pg_walinspect_slot');
2025-03-06 11:07:06.596 ACDT [736676] LOG: terminating any other active server processes
2025-03-06 11:07:06.606 ACDT [741206] FATAL: the database system is in recovery mode

Backtrace - SELECT pg_drop_replication_slot(...);
=========
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00005840026dc7e4 in GrantLockLocal (locallock=0x5840169d7bc8, owner=0x584016970b28) at lock.c:1758
1758 lockOwners[i].owner = owner;
(gdb) bt
#0 0x00005840026dc7e4 in GrantLockLocal (locallock=0x5840169d7bc8, owner=0x584016970b28) at lock.c:1758
#1 0x00005840026dca54 in GrantAwaitedLock () at lock.c:1840
#2 0x00005840026f0a22 in LockErrorCleanup () at proc.c:814
#3 0x00005840021f84f8 in AbortTransaction () at xact.c:2846
#4 0x00005840021f90e2 in AbortCurrentTransactionInternal () at xact.c:3495
#5 0x00005840021f9053 in AbortCurrentTransaction () at xact.c:3449
#6 0x0000584002705df1 in PostgresMain (dbname=0x584016970240 "postgres", username=0x584016970228 "smith") at postgres.c:4408
#7 0x00005840026fc58f in BackendMain (startup_data=0x7ffc96df68c0, startup_data_len=4) at backend_startup.c:107
#8 0x0000584002606752 in postmaster_child_launch (child_type=B_BACKEND, child_slot=88, startup_data=0x7ffc96df68c0, startup_data_len=4, client_sock=0x7ffc96df6920) at launch_backend.c:274
#9 0x000058400260d21a in BackendStartup (client_sock=0x7ffc96df6920) at postmaster.c:3519
#10 0x000058400260a6f3 in ServerLoop () at postmaster.c:1688
#11 0x0000584002609fe9 in PostmasterMain (argc=3, argv=0x5840168eac10) at postmaster.c:1386
#12 0x00005840024a9db5 in main (argc=3, argv=0x5840168eac10) at main.c:230
(gdb)

Backtrace Full - SELECT pg_drop_replication_slot(...);
==============
#0 0x00005840026dc7e4 in GrantLockLocal (locallock=0x5840169d7bc8, owner=0x584016970b28) at lock.c:1758
        lockOwners = 0x0
        i = 0
#1 0x00005840026dca54 in GrantAwaitedLock () at lock.c:1840
No locals.
#2 0x00005840026f0a22 in LockErrorCleanup () at proc.c:814
        lockAwaited = 0x5840169d7bc8
        partitionLock = 0x7272705f5600
        timeouts = {{id = DEADLOCK_TIMEOUT, keep_indicator = false}, {id = LOCK_TIMEOUT, keep_indicator = true}}
#3 0x00005840021f84f8 in AbortTransaction () at xact.c:2846
        s = 0x584002d80440 <TopTransactionStateData>
        latestXid = 0
        is_parallel_worker = false
        __func__ = "AbortTransaction"
#4 0x00005840021f90e2 in AbortCurrentTransactionInternal () at xact.c:3495
        s = 0x584002d80440 <TopTransactionStateData>
#5 0x00005840021f9053 in AbortCurrentTransaction () at xact.c:3449
No locals.
#6 0x0000584002705df1 in PostgresMain (dbname=0x584016970240 "postgres", username=0x584016970228 "smith") at postgres.c:4408
        local_sigjmp_buf = {{__jmpbuf = {140722839712232, -2448216454042316899, 3, 0, 97031948511608, 125836300689408, -2448216454115717219, -7962617830530183267}, __mask_was_saved = 1, __saved_mask = {__val = {4194304, 97032279600320, 125836284145736, 4869, 97032279600320, 125836284145736, 643328, 0, 97032279600320, 4294967297, 97032279600320, 140722839709712, 97031941873561, 0, 97032279600320, 140722839709744}}}}
        send_ready_for_query = false
        idle_in_transaction_timeout_enabled = false
        idle_session_timeout_enabled = false
        __func__ = "PostgresMain"
#7 0x00005840026fc58f in BackendMain (startup_data=0x7ffc96df68c0, startup_data_len=4) at backend_startup.c:107
        bsdata = 0x7ffc96df68c0
#8 0x0000584002606752 in postmaster_child_launch (child_type=B_BACKEND, child_slot=88, startup_data=0x7ffc96df68c0, startup_data_len=4, client_sock=0x7ffc96df6920) at launch_backend.c:274
        pid = 0
#9 0x000058400260d21a in BackendStartup (client_sock=0x7ffc96df6920) at postmaster.c:3519
        bn = 0x72728b550098
        pid = 378453056
        startup_data = {canAcceptConnections = CAC_OK}
        cac = CAC_OK
        __func__ = "BackendStartup"
#10 0x000058400260a6f3 in ServerLoop () at postmaster.c:1688
        s = {sock = 9, raddr = {addr = {ss_family = 2, __ss_padding = "\266 \177\000\000\001\000\000\000\000\000\000\000\000\030h\226\026@X\000\000\001\000CM\000\000\000\000\030h\226\026@X\000\000\020", '\000'
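
What stands out in both core dumps is frame #0's locals: lockOwners = 0x0 and
i = 0, while frame #2's lockAwaited is the same pointer as the locallock
argument in frame #0. In other words, the LOCALLOCK that the abort path hands
to GrantAwaitedLock() no longer has a per-owner array, so the assignment
"lockOwners[i].owner = owner;" is a plain NULL-pointer write. The standalone
toy program below is only a sketch of that state, not PostgreSQL source code:
the identifiers lockAwaited, lockOwners and numLockOwners mirror names visible
in the backtraces, while the *Model types and everything else are invented for
illustration.

/*
 * Toy model of the crash state seen in the core dumps -- NOT PostgreSQL code.
 * A stale "awaited lock" pointer still references a LOCALLOCK-like entry
 * whose owner array has already gone away (lockOwners = 0x0 in the dumps).
 */
#include <stdio.h>

typedef struct OwnerInfoModel
{
    void   *owner;              /* would be a ResourceOwner in the backend */
    int     nLocks;
} OwnerInfoModel;

typedef struct LocalLockModel
{
    int              numLockOwners;
    OwnerInfoModel  *lockOwners;    /* NULL (0x0) in the reported cores */
} LocalLockModel;

/* models the awaited-lock pointer that the abort path consults */
static LocalLockModel *lockAwaited = NULL;

/* models the grant-on-abort step that faults in GrantLockLocal() */
static void
grant_awaited_lock_model(void *owner)
{
    OwnerInfoModel *lockOwners = lockAwaited->lockOwners;
    int             i = lockAwaited->numLockOwners;

    if (lockOwners == NULL)
    {
        /*
         * This is the state the cores show; without this check the next
         * assignment would be the SIGSEGV at "lockOwners[i].owner = owner".
         */
        fprintf(stderr, "awaited lock has no owner array (i = %d)\n", i);
        return;
    }
    lockOwners[i].owner = owner;
    lockOwners[i].nLocks = 1;
    lockAwaited->numLockOwners++;
}

int
main(void)
{
    /* an entry whose owner array was released while still pointed at */
    LocalLockModel  stale = {0, NULL};
    int             dummy_owner = 0;

    lockAwaited = &stale;
    grant_awaited_lock_model(&dummy_owner);     /* reports the would-be crash */
    return 0;
}

Run on its own, the toy prints the diagnostic instead of crashing; the backend
has no such check, which is why this state surfaces as a segfault. How the
owner array comes to be gone while the awaited-lock pointer is still set is
the open question in this thread.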
On Thu, Mar 6, 2025 at 9:57 PM PG Bug reporting form <noreply@postgresql.org> wrote:
> While running some tests on a recent HEAD commit (d4a6c847ca), I got 3
> segfaults today (2 for ALTER DEFAULT PRIVILEGES and 1 FOR SELECT
> pg_drop_replication_slot()). I just happen to be testing this commit, so I
> am not suspecting this commit to be the cause, but what I can add is that
> I've run similar workloads last month and these crashes are too close to be
> a coincidence - in effect I wouldn't be surprised if this is owing to a
> recent change.

I encountered the same issue months ago and reported it here [1].
Heikki suspected that commit 3c0fd64fec might be the culprit.
Unfortunately, we haven't been able to track it down because we
couldn't find a reliable way to reproduce it.

Your report shows that this issue still exists on master. I think it
would be great if we could take this chance to trace it down to the
root cause.

[1] https://www.postgresql.org/message-id/flat/CAMbWs4_dNX1SzBmvFdoY-LxJh_4W_BjtVd5i008ihfU-wFF%3Deg%40mail.gmail.com

Thanks
Richard
On 2025-Mar-07, Richard Guo wrote:
> I encountered the same issue months ago and reported it here [1].
> Heikki suspected that commit 3c0fd64fec might be the culprit.
> Unfortunately, we haven't been able to track it down because we
> couldn't find a reliable way to reproduce it.

One way to capture this might be to run the problem workload under rr
enough times until it reproduces, and it can then be replayed under
the debugger.

https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Recording_Postgres_using_rr_Record_and_Replay_Framework

--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
On Fri, 7 Mar 2025 at 21:07, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>
>
> One way to capture this might be to run the problem workload under rr
> enough times until it reproduces, and then it can then be replayed under
> the debugger.
>
> https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Recording_Postgres_using_rr_Record_and_Replay_Framework
>
Thanks Álvaro / Richard for the pointers.
Initially that seemed like a lot of work for rare segfaults, and although
it's still hit or miss, I now see that when it rains it pours (all these
ROLLBACKs have the same backtrace), so it does appear worth the effort
to track further. Pasting what I already have.
I'll try rr, and update if I find something.
$ grep "Failed process was running" logfile | grep -v MERGE | grep -v select | grep -v SELECT
grep: logfile: binary file matches
2025-03-17 01:58:10.682 ACDT [190142] DETAIL: Failed process was running: ROLLBACK;
2025-03-17 06:36:52.796 ACDT [190142] DETAIL: Failed process was running: ROLLBACK;
2025-03-17 09:17:35.950 ACDT [190142] DETAIL: Failed process was running: insert into public.test_range_gist ( ir ) values (
2025-03-17 10:29:32.296 ACDT [190142] DETAIL: Failed process was running: ROLLBACK;
2025-03-17 10:36:30.187 ACDT [190142] DETAIL: Failed process was running: ROLLBACK;
2025-03-17 22:12:17.090 ACDT [190142] DETAIL: Failed process was running: ROLLBACK;
2025-03-17 22:23:26.155 ACDT [190142] DETAIL: Failed process was running: ROLLBACK;
2025-03-19 08:24:08.434 ACDT [2357560] DETAIL: Failed process was running: ROLLBACK;
2025-03-19 08:53:49.066 ACDT [2357560] DETAIL: Failed process was running: ROLLBACK;
2025-03-25 01:07:30.666 ACDT [4338] DETAIL: Failed process was running: ROLLBACK;
2025-03-25 06:27:50.560 ACDT [4338] DETAIL: Failed process was running: ROLLBACK;
Core was generated by `postgres: 44fe6ceb51f@sqith: u8 postgres 127.0.0.1(37802) ROLLBACK '.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00005fbb793422d6 in GrantLockLocal (locallock=0x5fbba9c35c38, owner=0x5fbba9b419d8) at lock.c:1805
1805 lockOwners[i].owner = owner;
(gdb) bt
#0 0x00005fbb793422d6 in GrantLockLocal (locallock=0x5fbba9c35c38, owner=0x5fbba9b419d8) at lock.c:1805
#1 0x00005fbb79342546 in GrantAwaitedLock () at lock.c:1887
#2 0x00005fbb7935654e in LockErrorCleanup () at proc.c:814
#3 0x00005fbb78e54fb2 in AbortTransaction () at xact.c:2853
#4 0x00005fbb78e55781 in CommitTransactionCommandInternal () at xact.c:3275
#5 0x00005fbb78e555f0 in CommitTransactionCommand () at xact.c:3163
#6 0x00005fbb7936a05c in finish_xact_command () at postgres.c:2834
#7 0x00005fbb7936744e in exec_simple_query (query_string=0x5fbba9b030b0 "ROLLBACK;") at postgres.c:1298
#8 0x00005fbb7936cbf3 in PostgresMain (dbname=0x5fbba9b44258 "postgres", username=0x5fbba9b44240 "u8") at postgres.c:4757
#9 0x00005fbb79362779 in BackendMain (startup_data=0x7ffc6f57c3e0, startup_data_len=24) at backend_startup.c:122
#10 0x00005fbb79265e5a in postmaster_child_launch (child_type=B_BACKEND, child_slot=299, startup_data=0x7ffc6f57c3e0, startup_data_len=24, client_sock=0x7ffc6f57c440)
at launch_backend.c:291
#11 0x00005fbb7926c9fa in BackendStartup (client_sock=0x7ffc6f57c440) at postmaster.c:3580
#12 0x00005fbb79269e14 in ServerLoop () at postmaster.c:1701
#13 0x00005fbb7926970a in PostmasterMain (argc=3, argv=0x5fbba9abcab0) at postmaster.c:1399
#14 0x00005fbb79108b07 in main (argc=3, argv=0x5fbba9abcab0) at main.c:230
-
robins
Hello Robins and Richard,

27.03.2025 11:42, Robins Tharakan wrote:
> Core was generated by `postgres: 44fe6ceb51f@sqith: u8 postgres 127.0.0.1(37802) ROLLBACK '.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0 0x00005fbb793422d6 in GrantLockLocal (locallock=0x5fbba9c35c38, owner=0x5fbba9b419d8) at lock.c:1805
> 1805 lockOwners[i].owner = owner;
> (gdb) bt
> #0 0x00005fbb793422d6 in GrantLockLocal (locallock=0x5fbba9c35c38, owner=0x5fbba9b419d8) at lock.c:1805
> #1 0x00005fbb79342546 in GrantAwaitedLock () at lock.c:1887
> #2 0x00005fbb7935654e in LockErrorCleanup () at proc.c:814
> #3 0x00005fbb78e54fb2 in AbortTransaction () at xact.c:2853
> #4 0x00005fbb78e55781 in CommitTransactionCommandInternal () at xact.c:3275
> #5 0x00005fbb78e555f0 in CommitTransactionCommand () at xact.c:3163
> #6 0x00005fbb7936a05c in finish_xact_command () at postgres.c:2834
> #7 0x00005fbb7936744e in exec_simple_query (query_string=0x5fbba9b030b0 "ROLLBACK;") at postgres.c:1298
> #8 0x00005fbb7936cbf3 in PostgresMain (dbname=0x5fbba9b44258 "postgres", username=0x5fbba9b44240 "u8") at postgres.c:4757
> #9 0x00005fbb79362779 in BackendMain (startup_data=0x7ffc6f57c3e0, startup_data_len=24) at backend_startup.c:122
> #10 0x00005fbb79265e5a in postmaster_child_launch (child_type=B_BACKEND, child_slot=299, startup_data=0x7ffc6f57c3e0,
> startup_data_len=24, client_sock=0x7ffc6f57c440)
> at launch_backend.c:291
> #11 0x00005fbb7926c9fa in BackendStartup (client_sock=0x7ffc6f57c440) at postmaster.c:3580
> #12 0x00005fbb79269e14 in ServerLoop () at postmaster.c:1701
> #13 0x00005fbb7926970a in PostmasterMain (argc=3, argv=0x5fbba9abcab0) at postmaster.c:1399
> #14 0x00005fbb79108b07 in main (argc=3, argv=0x5fbba9abcab0) at main.c:230

Perhaps I observed the same issue:
https://www.postgresql.org/message-id/e11a30e5-c0d8-491d-8546-3a1b50c10ad4%40gmail.com

Best regards,
Alexander Lakhin
Neon (https://neon.tech)