Backend crash (long) - Mailing list pgsql-hackers

From Michael Paesold
Subject Backend crash (long)
Date
Msg-id 00dd01c25eea$b89e5e60$4201a8c0@beeblebrox
Responses Re: Backend crash (long)  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: Backend crash (long)  (Robert Treat <xzilla@users.sourceforge.net>)
List pgsql-hackers
Hi all,

I have written a test function that will create a sequence and a table,
then insert one million rows into the table, analyze the table and create an
index on one of the columns.
(so this will all happen inside one transaction)

After doing that, the backend will crash.
(but the data will be inserted)

If I comment out the ANALYZE and the CREATE INDEX (I have not tested
which one leads to the crash), everything works fine. I have included below a
copy of the error log, the psql session, the function and some parts of my
postgresql.conf file.

My system is RedHat 7.2, Kernel 2.4.9-34, glibc-2.2.4, gcc 2.96, PostgreSQL
7.2.2 built from source.

If you want, I could try other combinations of create/insert/analyze etc. to
test the exact steps needed to crash the backend.
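
For reference, the same sequence of steps boiled down to plain SQL would look
roughly like this (just a sketch of what the function does, not literally what
I ran; the real plpgsql definition is further down):

BEGIN;
CREATE SEQUENCE bench_seq;
CREATE TABLE bench_big (
    int_id    INTEGER NOT NULL DEFAULT nextval('bench_seq'),
    bigint_id BIGINT NOT NULL,
    -- ... the other columns from the function below ...
    PRIMARY KEY (int_id)
);
-- the loop in the function does one million of these
INSERT INTO bench_big (bigint_id) VALUES (CAST(RANDOM() * 10000000000 AS BIGINT));
ANALYZE bench_big;
CREATE INDEX bench_bigint_id_idx ON bench_big (bigint_id);
COMMIT;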

I know what I am doing is not really standard; this was more of a stability
test of Postgres :). What do you think about all this?

Best Regards,
Michael Paesold


--> logfile:
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
'bench_big_pkey' for table 'bench_big'
DEBUG:  recycled transaction log file 000000000000009F
[...skipping: recycled transaction log file 00000000000000A0 to
00000000000000AE]
DEBUG:  recycled transaction log file 00000000000000B0
DEBUG:  Analyzing bench_big
DEBUG:  server process (pid 13840) was terminated by signal 11
DEBUG:  terminating any other active server processes
DEBUG:  all server processes terminated; reinitializing shared memory and
semaphores
DEBUG:  database system was interrupted at 2002-09-17 11:45:56 CEST
DEBUG:  checkpoint record is at 0/B41170A4
DEBUG:  redo record is at 0/B400DF34; undo record is at 0/0; shutdown FALSE
DEBUG:  next transaction id: 96959; next oid: 6282462
DEBUG:  database system was not properly shut down; automatic recovery in
progress
DEBUG:  redo starts at 0/B400DF34
DEBUG:  ReadRecord: record with zero length at 0/B495F754
DEBUG:  redo done at 0/B495F730
DEBUG:  recycled transaction log file 00000000000000B2
DEBUG:  recycled transaction log file 00000000000000B1
DEBUG:  recycled transaction log file 00000000000000B3
DEBUG:  database system is ready

The first time I tried the insert, there was an additional notice from
another backend, just after the line "DEBUG:  terminating any other active
server processes":
NOTICE:  Message from PostgreSQL backend:
        The Postmaster has informed me that some other backend
        died abnormally and possibly corrupted shared memory.
        I have rolled back the current transaction and am
        going to terminate your database system connection and exit.
        Please reconnect to the database system and repeat your query.

--> in psql:
billing=# select create_benchmark ();
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
'bench_big_pkey' for table 'bench_big'
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
 
The connection to the server was lost. Attempting reset: Failed.
!# \c
Password:
You are now connected to database billing as user billing.
billing=# select real_time from bench_big where int_id in (1, 1000000);
          real_time
-------------------------------
 2002-09-17 11:32:22.63334+02
 2002-09-17 11:46:16.601282+02
(2 rows)

--> all rows have definitely been inserted!


--> the function:

CREATE OR REPLACE FUNCTION create_benchmark () RETURNS BOOLEAN AS '
DECLARE
  char100 VARCHAR := \'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZäöüÄÖÜß1234567890!"§$%&/()=?+*#<>|-_,;.:^°{}´`[]\';
  r1 INTEGER;
  r2 INTEGER;
  r3 INTEGER;
BEGIN
  CREATE SEQUENCE bench_seq;

  CREATE TABLE bench_big (
    int_id INTEGER NOT NULL default nextval(\'bench_seq\'),
    bigint_id BIGINT NOT NULL,
    sometext1 VARCHAR (50),
    sometext2 VARCHAR (50),
    sometext3 VARCHAR (50),
    trx_time TIME WITHOUT TIME ZONE NOT NULL default CURRENT_TIME,
    trx_timestamp TIMESTAMP WITHOUT TIME ZONE NOT NULL default CURRENT_TIMESTAMP,
    trx_date DATE NOT NULL default CURRENT_DATE,
    real_time TIMESTAMP NOT NULL default timeofday(),
    someboolean1 BOOLEAN NOT NULL,
    someboolean2 BOOLEAN NOT NULL,
    PRIMARY KEY (int_id)
  );

  FOR i IN 1..1000000 LOOP
    r1 = CAST( RANDOM() * 49 AS INTEGER );
    r2 = CAST( RANDOM() * 49 AS INTEGER );
    r3 = CAST( RANDOM() * 49 AS INTEGER );

    INSERT INTO bench_big
      (bigint_id, sometext1, sometext2, sometext3, someboolean1, someboolean2)
    VALUES (
      CAST(RANDOM() * 10000000000 AS BIGINT),
      SUBSTR(char100, 50, 49), -- this should be r1, r1 (but doesn't work!)
      SUBSTR(char100, 50, 49), -- this should be r2, r2 (but doesn't work!)
      SUBSTR(char100, 50, 49), -- this should be r3, r3 (but doesn't work!)
      CASE WHEN r1 > 25 THEN TRUE ELSE FALSE END,
      CASE WHEN r3 > 10 THEN TRUE ELSE FALSE END
    );
  END LOOP;

  -- WARNING: un-commenting these lines could crash your postgres
  -- CREATE INDEX bench_bigint_id_idx ON bench_big(bigint_id);
  -- ANALYZE bench_big;

  RETURN TRUE;
END;
' LANGUAGE 'plpgsql';
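
About the three "this should be ..." comments: inside the loop the INSERT was
meant to use the random values, roughly like below (only a sketch of the
intended version; with the variables in SUBSTR it did not work for me, which is
why the constants 50, 49 are in there):

    INSERT INTO bench_big
      (bigint_id, sometext1, sometext2, sometext3, someboolean1, someboolean2)
    VALUES (
      CAST(RANDOM() * 10000000000 AS BIGINT),
      SUBSTR(char100, r1, r1),  -- start/length from the random values above
      SUBSTR(char100, r2, r2),
      SUBSTR(char100, r3, r3),
      CASE WHEN r1 > 25 THEN TRUE ELSE FALSE END,
      CASE WHEN r3 > 10 THEN TRUE ELSE FALSE END
    );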


--> Possibly relevant parts of my postgresql.conf file:

#       Shared Memory Size
#
shared_buffers = 12288      # 2*max_connections, min 16 (one usually 8Kb)
max_fsm_relations = 100    # min 10, fsm is free space map (number of tables)
max_fsm_pages = 20000      # min 1000, fsm is free space map (one about 8Kb)
max_locks_per_transaction = 64 # min 10
wal_buffers = 8            # min 4

#       Non-shared Memory Sizes
#
sort_mem = 4096            # min 32 (in Kb)
vacuum_mem = 16384         # min 1024

#       Write-ahead log (WAL)
#
wal_files = 8               # range 0-64, default 0
wal_sync_method = fdatasync # the default varies across platforms:
#                          # fsync, fdatasync, open_sync, or open_datasync
#wal_debug = 0             # range 0-16
#commit_delay = 0          # range 0-100000
#commit_siblings = 5       # range 1-1000
#checkpoint_segments = 3   # in logfile segments (16MB each), min 1, default 3
#checkpoint_timeout = 300  # in seconds, range 30-3600
#fsync = true
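
(If it helps, the values actually in effect can also be checked from psql with
SHOW; on 7.2 the result comes back as a NOTICE, if I remember correctly:)

billing=# SHOW shared_buffers;
billing=# SHOW sort_mem;
billing=# SHOW wal_buffers;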





