BUG #18692: Segmentation fault when extending a varchar column with a gist index with custom signal length - Mailing list pgsql-bugs
From | PG Bug reporting form
---|---
Subject | BUG #18692: Segmentation fault when extending a varchar column with a gist index with custom signal length
Date |
Msg-id | 18692-72ea398df3ec6712@postgresql.org
Responses | Re: BUG #18692: Segmentation fault when extending a varchar column with a gist index with custom signal length
List | pgsql-bugs
The following bug has been logged on the website:

Bug reference:      18692
Logged by:          Nicolas Maus
Email address:      nicolas.maus@bertelsmann.de
PostgreSQL version: 16.4
Operating system:   SLES 15.5

Description:

When extending a varchar column that has a GiST index with a custom signature length (siglen), the PostgreSQL server crashes with a segmentation fault. Tested on 16.4 and 14.13 on SLES 15.5.

How to reproduce:

create extension if not exists pg_trgm;

create table contains_trgm (trgm_column varchar(200));

-- important: this bug only occurs when a custom signature length (siglen=x) is provided
create index gist_index on contains_trgm using gist (trgm_column public.gist_trgm_ops (siglen=32));

-- triggers the segmentation fault (the new column size must be bigger than the old one):
alter table contains_trgm alter column trgm_column type varchar(768);

Log output:

2024-11-06 14:55:09.806 CET : LOG: 00000: server process (PID 130312) was terminated by signal 11: Segmentation fault
2024-11-06 14:55:09.806 CET : DETAIL: Failed process was running: alter table contains_trgm alter column trgm_column type varchar(768);
2024-11-06 14:55:09.806 CET : LOCATION: LogChildExit, postmaster.c:3689
2024-11-06 14:55:09.807 CET : LOG: 00000: terminating any other active server processes
2024-11-06 14:55:09.807 CET : LOCATION: HandleChildCrash, postmaster.c:3490
2024-11-06 14:55:09.823 CET 172.28.110.149(56320) trigram_test ggdbpadm: FATAL: 57P03: the database system is in recovery mode
2024-11-06 14:55:09.823 CET 172.28.110.149(56320) trigram_test ggdbpadm: LOCATION: ProcessStartupPacket, postmaster.c:2358
2024-11-06 14:55:09.875 CET : LOG: 00000: all server processes terminated; reinitializing
2024-11-06 14:55:09.875 CET : LOCATION: PostmasterStateMachine, postmaster.c:3950
2024-11-06 14:55:09.947 CET : LOG: 00000: database system was interrupted; last known up at 2024-11-06 14:52:20 CET
2024-11-06 14:55:09.947 CET : LOCATION: StartupXLOG, xlog.c:5113
2024-11-06 14:55:10.064 CET : LOG: 00000: database system was not properly shut down; automatic recovery in progress
2024-11-06 14:55:10.064 CET : LOCATION: InitWalRecovery, xlogrecovery.c:926
2024-11-06 14:55:10.072 CET : LOG: 00000: redo starts at 0/8C45FFF8
2024-11-06 14:55:10.072 CET : LOCATION: PerformWalRecovery, xlogrecovery.c:1689
2024-11-06 14:55:10.110 CET : LOG: 00000: invalid record length at 0/8C8BFA20: expected at least 24, got 0
2024-11-06 14:55:10.110 CET : LOCATION: ReadRecord, xlogrecovery.c:3137
2024-11-06 14:55:10.113 CET : LOG: 00000: redo done at 0/8C8BF908 system usage: CPU: user: 0.00 s, system: 0.01 s, elapsed: 0.04 s
2024-11-06 14:55:10.113 CET : LOCATION: PerformWalRecovery, xlogrecovery.c:1827
2024-11-06 14:55:10.119 CET : LOG: 00000: checkpoint starting: end-of-recovery immediate wait
2024-11-06 14:55:10.119 CET : LOCATION: LogCheckpointStart, xlog.c:6251
2024-11-06 14:55:10.181 CET : LOG: 00000: checkpoint complete: wrote 957 buffers (5.8%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.041 s, sync=0.011 s, total=0.063 s; sync files=303, longest=0.005 s, average=0.001 s; distance=4478 kB, estimate=4478 kB; lsn=0/8C8BFA20, redo lsn=0/8C8BFA20
2024-11-06 14:55:10.181 CET : LOCATION: LogCheckpointEnd, xlog.c:6351
2024-11-06 14:55:10.337 CET : LOG: 00000: database system is ready to accept connections
2024-11-06 14:55:10.337 CET : LOCATION: process_pm_child_exit, postmaster.c:3110
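To narrow the trigger conditions: the report names two prerequisites, a custom siglen on the gist_trgm_ops index and a widening of the column type. The following is a minimal sketch of the corresponding control cases, assuming both conditions are necessary; the table and index names (no_siglen, same_size, gist_index_default, gist_index_siglen) are hypothetical and the expected outcomes are inferred from the report, not verified here.

-- Control case A (assumption from the report): default signature length, column is widened.
create table no_siglen (trgm_column varchar(200));
create index gist_index_default on no_siglen using gist (trgm_column public.gist_trgm_ops);
alter table no_siglen alter column trgm_column type varchar(768);  -- expected not to crash

-- Control case B (assumption from the report): custom siglen, but the column is not widened.
create table same_size (trgm_column varchar(200));
create index gist_index_siglen on same_size using gist (trgm_column public.gist_trgm_ops (siglen=32));
alter table same_size alter column trgm_column type varchar(200);  -- expected not to crash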