BUG #1116: "insert into" from "select" crashes backends - Mailing list pgsql-bugs
From:      PostgreSQL Bugs List
Subject:   BUG #1116: "insert into" from "select" crashes backends
Date:
Msg-id:    20040326164529.A0D38CF52B5@www.postgresql.com
Responses: Re: BUG #1116: "insert into" from "select" crashes backends
List:      pgsql-bugs
The following bug has been logged online:

Bug reference:      1116
Logged by:          Vic Ricker
Email address:      vicricker@charter.net
PostgreSQL version: 7.4
Operating system:   Linux (Redhat 9)
Description:        "insert into" from "select" crashes backends

Details:

I apologize in advance for being an idiot. :-) That said, we're having a problem with the backend crashing when we execute this query using psql:

\set VERBOSITY verbose
insert into option_history (option_symbol, stock_symbol,
    expiration_year, end_of_day, last_sale, volume, strike, bid, ask,
    open_interest)
select option_symbol, stock_symbol, expiration_year, end_of_day,
    last_sale, volume, strike, bid, ask, open_interest
from option_history_temp;

The only output is:

server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

The logs indicate that it crashed with a signal 11:

Mar 25 21:08:41 kringle postgres[29741]: [9-1] LOG: recycled transaction log file "0000001D0000000A"
Mar 25 21:08:41 kringle postgres[29741]: [10-1] LOG: recycled transaction log file "0000001D0000000B"
Mar 25 21:09:03 kringle postgres[27537]: [8-1] LOG: server process (PID 29359) was terminated by signal 11
Mar 25 21:09:03 kringle postgres[27537]: [9-1] LOG: terminating any other active server processes
Mar 25 21:09:03 kringle postgres[29699]: [8-1] WARNING: terminating connection because of crash of another server process
Mar 25 21:09:03 kringle postgres[29699]: [8-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
Mar 25 21:09:03 kringle postgres[29699]: [8-3] process exited abnormally and possibly corrupted shared memory.
Mar 25 21:09:03 kringle postgres[29699]: [8-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
Mar 25 21:09:03 kringle postgres[29659]: [8-1] WARNING: terminating connection because of crash of another server process
Mar 25 21:09:03 kringle postgres[29659]: [8-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
Mar 25 21:09:03 kringle postgres[29659]: [8-3] process exited abnormally and possibly corrupted shared memory.
Mar 25 21:09:03 kringle postgres[29659]: [8-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
Mar 25 21:09:03 kringle postgres[29658]: [8-1] WARNING: terminating connection because of crash of another server process
Mar 25 21:09:03 kringle postgres[29658]: [8-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
Mar 25 21:09:03 kringle postgres[29658]: [8-3] process exited abnormally and possibly corrupted shared memory.
Mar 25 21:09:03 kringle postgres[29657]: [8-1] WARNING: terminating connection because of crash of another server process
Mar 25 21:09:03 kringle postgres[29656]: [8-1] WARNING: terminating connection because of crash of another server process
Mar 25 21:09:03 kringle postgres[29655]: [8-1] WARNING: terminating connection because of crash of another server process
Mar 25 21:09:03 kringle postgres[29743]: [10-1] FATAL: the database system is in recovery mode
Mar 25 21:09:03 kringle postgres[29658]: [8-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
Mar 25 21:09:04 kringle postgres[29657]: [8-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
Mar 25 21:09:04 kringle postgres[29656]: [8-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
Mar 25 21:09:04 kringle postgres[29655]: [8-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
Mar 25 21:09:04 kringle postgres[29657]: [8-3] process exited abnormally and possibly corrupted shared memory.
Mar 25 21:09:04 kringle postgres[29656]: [8-3] process exited abnormally and possibly corrupted shared memory.
Mar 25 21:09:04 kringle postgres[29655]: [8-3] process exited abnormally and possibly corrupted shared memory.
Mar 25 21:09:04 kringle postgres[29657]: [8-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
Mar 25 21:09:04 kringle postgres[29656]: [8-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
Mar 25 21:09:04 kringle postgres[29655]: [8-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
Mar 25 21:09:04 kringle postgres[27537]: [10-1] LOG: all server processes terminated; reinitializing
Mar 25 21:09:04 kringle postgres[29744]: [11-1] LOG: database system was interrupted at 2004-03-25 21:08:56 EST
Mar 25 21:09:04 kringle postgres[29744]: [12-1] LOG: checkpoint record is at 1D/F377E30
Mar 25 21:09:04 kringle postgres[29744]: [13-1] LOG: redo record is at 1D/F0034FC; undo record is at 0/0; shutdown FALSE
Mar 25 21:09:04 kringle postgres[29744]: [14-1] LOG: next transaction ID: 210524; next OID: 52161853
Mar 25 21:09:04 kringle postgres[29744]: [15-1] LOG: database system was not properly shut down; automatic recovery in progress
Mar 25 21:09:04 kringle postgres[29744]: [16-1] LOG: redo starts at 1D/F0034FC
Mar 25 21:09:09 kringle postgres[29744]: [17-1] LOG: unexpected pageaddr 1D/85A6000 in log file 29, segment 16, offset 5922816
Mar 25 21:09:09 kringle postgres[29744]: [18-1] LOG: redo done at 1D/105A58E4
Mar 25 21:09:17 kringle postgres[29744]: [19-1] LOG: recycled transaction log file "0000001D0000000C"
Mar 25 21:09:17 kringle postgres[29744]: [20-1] LOG: recycled transaction log file "0000001D0000000D"
Mar 25 21:09:17 kringle postgres[29744]: [21-1] LOG: recycled transaction log file "0000001D0000000E"
Mar 25 21:09:17 kringle postgres[29744]: [22-1] LOG: database system is ready

I would like to debug, but I can't get it to generate a core file. I removed the ulimit from Redhat's startup script, but that didn't seem to help. We're running PostgreSQL 7.4.2 from the RPMs for Redhat 9 from postgresql.org. We started experiencing the problem after replacing a failed server and upgrading to Redhat 9 from 8, using whatever version of PostgreSQL came with RH8. The architecture is Intel (actually AMD) with 512M RAM; the database is on a 56G RAID1.
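[Editor's aside, not part of the original report: the crashing statement is an ordinary INSERT ... SELECT with explicit column lists on both sides. A miniature sketch in Python's sqlite3, with hypothetical cut-down tables, shows why explicit lists make the staging table's different physical column order and extra dummy columns harmless, so the query itself is well-formed:]

```python
import sqlite3

# Hypothetical miniature of the two tables: same column names,
# different column order, plus a dummy column in the staging table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE option_history (option_symbol TEXT, strike REAL, volume INTEGER)")
con.execute("CREATE TABLE option_history_temp (dummy1 TEXT, volume INTEGER, option_symbol TEXT, strike REAL)")
con.execute("INSERT INTO option_history_temp VALUES ('x', 100, 'QQQ', 35.0)")

# Because both column lists are explicit, columns are matched by the
# listed names, not by each table's physical column order.
con.execute("""
    INSERT INTO option_history (option_symbol, strike, volume)
    SELECT option_symbol, strike, volume FROM option_history_temp
""")
print(con.execute("SELECT * FROM option_history").fetchall())
# → [('QQQ', 35.0, 100)]
```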
Kernel version is: 2.4.20-20.9

Here's a dump of the schema for the tables:

--
-- PostgreSQL database dump
--

SET client_encoding = 'SQL_ASCII';
SET check_function_bodies = false;
SET SESSION AUTHORIZATION 'omoneydb';

SET search_path = public, pg_catalog;

--
-- TOC entry 3 (OID 17178)
-- Name: option_history; Type: TABLE; Schema: public; Owner: omoneydb
--

CREATE TABLE option_history (
    option_symbol character varying(10),
    stock_symbol character varying(20),
    expiration_year integer,
    end_of_day date,
    last_sale double precision,
    volume integer,
    strike double precision,
    bid double precision,
    ask double precision,
    open_interest integer
);

--
-- TOC entry 4 (OID 17178)
-- Name: option_history; Type: ACL; Schema: public; Owner: omoneydb
--

REVOKE ALL ON TABLE option_history FROM PUBLIC;
GRANT ALL ON TABLE option_history TO omoneywww;

--
-- TOC entry 5 (OID 206422)
-- Name: option_history_idx1; Type: INDEX; Schema: public; Owner: omoneydb
--

CREATE UNIQUE INDEX option_history_idx1 ON option_history USING btree (option_symbol, stock_symbol, expiration_year, strike, end_of_day);

--
-- PostgreSQL database dump
--

SET client_encoding = 'SQL_ASCII';
SET check_function_bodies = false;
SET SESSION AUTHORIZATION 'omoneydb';

SET search_path = public, pg_catalog;

--
-- TOC entry 3 (OID 51842374)
-- Name: option_history_temp; Type: TABLE; Schema: public; Owner: omoneydb
--

CREATE TABLE option_history_temp (
    option_symbol character varying(10) NOT NULL,
    stock_symbol character varying(10) NOT NULL,
    dummy1 character(1),
    dummy2 character(1),
    dummy3 character(1),
    dummy4 character(1),
    last_sale double precision,
    volume integer,
    strike double precision NOT NULL,
    bid double precision,
    ask double precision,
    open_interest integer,
    expiration_year integer NOT NULL,
    end_of_day date
);

--
-- TOC entry 4 (OID 51842376)
-- Name: option_history_temp_pkey; Type: CONSTRAINT; Schema: public; Owner: omoneydb
--

ALTER TABLE ONLY option_history_temp
    ADD CONSTRAINT option_history_temp_pkey PRIMARY KEY (option_symbol, stock_symbol, expiration_year, strike);

I've dumped the two tables out; unfortunately, the big one is over 200 megs compressed. I'm going to load them into a test server so I can reproduce the problem without affecting our production environment.

The FAQ suggests using the debugger to connect to the postgres process. This should help with my problem of not being able to get a core file. I've installed the debug RPM. Is there a special procedure for using the debug libs and stuff? Any suggestions on how to proceed?
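[Editor's aside, not part of the original report: a sketch of the two usual ways to get a stack trace from a crashing backend. Paths and PIDs below are illustrative; the key point about core files is that the `ulimit` must be raised in the environment that launches the postmaster, not in an interactive login shell.]

```shell
# 1. Core files: lift the core-size limit in the shell that starts the
#    postmaster, then restart it from that same shell.
ulimit -c unlimited
ulimit -c                 # verify the new limit; prints "unlimited"
# pg_ctl start ...        # after a crash, look for a "core" file in the
#                         # database directory (e.g. /var/lib/pgsql/data)

# 2. Live debugging: attach gdb to the backend serving your session
#    before running the crashing statement, then capture the backtrace.
# ps aux | grep postgres          # find the backend's PID
# gdb /usr/bin/postmaster <pid>   # binary path is illustrative
# (gdb) continue
# -- run the INSERT ... SELECT from psql until it crashes --
# (gdb) bt
```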