Thread: BUG #6444: Postgresql crash
The following bug has been logged on the website:

Bug reference:      6444
Logged by:          Petr Jediný
Email address:      petr.jediny@gmail.com
PostgreSQL version: 8.4.10
Operating system:   Debian 6 Squeeze, 64bit
Description:

Postgresql-8.4.9 (from the Debian Squeeze repositories) works fine; after the latest Debian update to 8.4.10 it keeps crashing (OOM kill) while working with large tables and long-running queries. I can reliably reproduce the crash: I have an approximately 30M-row table dump (a 2 GiB SQL file) that I'm trying to import into a PostgreSQL test system, and the import causes the crash. 8.4.9 works with no problems. The system has 6 GiB of RAM available, with only postgres and ssh running.

--- dmesg ---
[ 2379.741939] postgres invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
[ 2379.741943] postgres cpuset=/ mems_allowed=0
[ 2379.741945] Pid: 1969, comm: postgres Not tainted 2.6.32-5-amd64 #1
[ 2379.741947] Call Trace:
[ 2379.741953] [<ffffffff810b6418>] ? oom_kill_process+0x7f/0x23f
[ 2379.741956] [<ffffffff810b693c>] ? __out_of_memory+0x12a/0x141
[ 2379.741958] [<ffffffff810b6a93>] ? out_of_memory+0x140/0x172
[ 2379.741961] [<ffffffff810ba7f8>] ? __alloc_pages_nodemask+0x4ec/0x5fc
[ 2379.741966] [<ffffffff812fb9ea>] ? io_schedule+0x93/0xb7
[ 2379.741969] [<ffffffff810bbd5d>] ? __do_page_cache_readahead+0x9b/0x1b4
[ 2379.741972] [<ffffffff81065058>] ? wake_bit_function+0x0/0x23
[ 2379.741975] [<ffffffff810bbe92>] ? ra_submit+0x1c/0x20
[ 2379.741978] [<ffffffff810b4b65>] ? filemap_fault+0x17d/0x2f6
[ 2379.741982] [<ffffffff810caada>] ? __do_fault+0x54/0x3c3
[ 2379.741985] [<ffffffff810cce2e>] ? handle_mm_fault+0x3b8/0x80f
[ 2379.741989] [<ffffffff812ff166>] ? do_page_fault+0x2e0/0x2fc
[ 2379.741992] [<ffffffff812fd005>] ? page_fault+0x25/0x30
[ 2379.741993] Mem-Info:
[ 2379.741994] Node 0 DMA per-cpu:
[ 2379.741996] CPU    0: hi:    0, btch:   1 usd:   0
[ 2379.741997] Node 0 DMA32 per-cpu:
[ 2379.741999] CPU    0: hi:  186, btch:  31 usd:  61
[ 2379.742000] Node 0 Normal per-cpu:
[ 2379.742002] CPU    0: hi:  186, btch:  31 usd:  57
[ 2379.742006] active_anon:1205594 inactive_anon:301380 isolated_anon:0
[ 2379.742007] active_file:6 inactive_file:0 isolated_file:0
[ 2379.742008] unevictable:0 dirty:0 writeback:823 unstable:0
[ 2379.742008] free:9451 slab_reclaimable:800 slab_unreclaimable:1607
[ 2379.742009] mapped:916 shmem:959 pagetables:6399 bounce:0
[ 2379.742011] Node 0 DMA free:15840kB min:24kB low:28kB high:36kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15280kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:4kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[ 2379.742019] lowmem_reserve[]: 0 3000 6030 6030
[ 2379.742021] Node 0 DMA32 free:16984kB min:4936kB low:6168kB high:7404kB active_anon:2355684kB inactive_anon:588744kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3072160kB mlocked:0kB dirty:0kB writeback:1400kB mapped:1476kB shmem:1548kB slab_reclaimable:256kB slab_unreclaimable:24kB kernel_stack:8kB pagetables:8356kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:50 all_unreclaimable? yes
[ 2379.742030] lowmem_reserve[]: 0 0 3030 3030
[ 2379.742032] Node 0 Normal free:4980kB min:4988kB low:6232kB high:7480kB active_anon:2466692kB inactive_anon:616776kB active_file:24kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3102720kB mlocked:0kB dirty:0kB writeback:1892kB mapped:2188kB shmem:2288kB slab_reclaimable:2944kB slab_unreclaimable:6400kB kernel_stack:584kB pagetables:17240kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:79 all_unreclaimable? yes
[ 2379.742041] lowmem_reserve[]: 0 0 0 0
[ 2379.742043] Node 0 DMA: 2*4kB 3*8kB 2*16kB 3*32kB 3*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15840kB
[ 2379.742049] Node 0 DMA32: 216*4kB 179*8kB 104*16kB 49*32kB 27*64kB 18*128kB 9*256kB 2*512kB 0*1024kB 0*2048kB 1*4096kB = 16984kB
[ 2379.742055] Node 0 Normal: 717*4kB 2*8kB 1*16kB 1*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 4980kB
[ 2379.742061] 4505 total pagecache pages
[ 2379.742062] 3518 pages in swap cache
[ 2379.742064] Swap cache stats: add 1372955, delete 1369437, find 1539/1943
[ 2379.742065] Free swap  = 0kB
[ 2379.742066] Total swap = 5476344kB
[ 2379.756825] 1572848 pages RAM
[ 2379.756826] 39908 pages reserved
[ 2379.756827] 1566 pages shared
[ 2379.756828] 1522287 pages non-shared
[ 2379.756831] Out of memory: kill process 1731 (postgres) score 1557674 or a child
[ 2379.756974] Killed process 1733 (postgres)
---
On Wed, Feb 8, 2012 at 2:01 PM, <petr.jediny@gmail.com> wrote:
> The following bug has been logged on the website:
>
> Bug reference:      6444
> Logged by:          Petr Jediný
> Email address:      petr.jediny@gmail.com
> PostgreSQL version: 8.4.10
> Operating system:   Debian 6 Squeeze, 64bit
> Description:
>
> Postgresql-8.4.9 (from debian squeeze repositories) works ok, after latest
> debian update to 8.4.10 keeps crashing (oom kill) while working with large
> tables/long running queries.

Are you by any chance using the inet or cidr datatypes extensively?

If not, can we see the table definitions and the queries?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hello,

On Thu, Feb 9, 2012 at 9:12 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Feb 8, 2012 at 2:01 PM, <petr.jediny@gmail.com> wrote:
>> The following bug has been logged on the website:
>>
>> Bug reference:      6444
>> Logged by:          Petr Jediný
>> Email address:      petr.jediny@gmail.com
>> PostgreSQL version: 8.4.10
>> Operating system:   Debian 6 Squeeze, 64bit
>> Description:
>>
>> Postgresql-8.4.9 (from debian squeeze repositories) works ok, after latest
>> debian update to 8.4.10 keeps crashing (oom kill) while working with large
>> tables/long running queries.
>
> Are you by any chance using the inet or cidr datatypes extensively?
>
Yes, you are right, we use the network datatypes very extensively. We are storing arp tables, dhcp acks, etc.

> If not, can we see the table definitions and the queries?
>
If it's still needed I can provide the table dump; it's about 543M gzipped.

Thank you,
Petr

> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
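[Editor's note: the actual table definitions were never posted to the thread. A hypothetical sketch of the kind of schema Petr describes — arp tables and dhcp acks keyed by network datatypes — might look roughly like this; all table and column names here are invented for illustration:]

```sql
-- Hypothetical illustration only; the real schema was not shared.
-- Bulk-loading millions of rows into inet/cidr/macaddr columns is the
-- kind of workload that would exercise the network-datatype code paths
-- implicated in the 8.4.10 regression.
CREATE TABLE arp_entries (
    observed_at timestamptz NOT NULL,
    switch_ip   inet        NOT NULL,
    mac         macaddr     NOT NULL,
    ip          inet        NOT NULL
);

CREATE TABLE dhcp_acks (
    leased_at   timestamptz NOT NULL,
    client_mac  macaddr     NOT NULL,
    leased_ip   inet        NOT NULL,
    subnet      cidr        NOT NULL
);
```

[Restoring a ~30M-row dump into tables like these performs millions of inet/cidr input conversions in a single long-running operation, which is how a small per-value leak could accumulate into an OOM kill.]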
On 09-02-2012 17:26, Petr Jediný wrote:
> If it's still needed I can provide the table dump; it's about 543M gzipped.
>
No, that bug was already fixed in our repository. I advise you to stay with 8.4.9 until 8.4.11 is released (in a month or so), unless you are suffering from another bug that was fixed in 8.4.10 (in that case, just get the patch [1] and build a custom version).

[1] http://git.postgresql.org/pg/commitdiff/81f4e6cd27d538bc27e9714a9173e4df353a02e5

--
Euler Taveira de Oliveira - Timbira       http://www.timbira.com.br/
PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
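[Editor's note: whichever route is taken — staying on 8.4.9 or building a patched 8.4.10 — the running minor version can be confirmed from any client session. Both forms below are standard PostgreSQL, not specific to this thread:]

```sql
-- Reports the full version string of the server you are connected to,
-- e.g. "PostgreSQL 8.4.9 on x86_64-pc-linux-gnu, compiled by GCC ..."
SELECT version();

-- Reports just the version number, e.g. "8.4.9"
SHOW server_version;
```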