On 4/2/15 2:18 PM, TonyS wrote:
> On Wed, April 1, 2015 5:50 pm, Tom Lane wrote:
> >
> > TonyS <[hidden email]> writes:
> >
> >> The analyze function has crashed again while the overcommit entries
> >> were as above. The last bit of the PostgreSQL log shows:
> >> MdSmgr: 41934848 total in 14 blocks; 639936 free (0 chunks); 41294912 used
> >> ident parser context: 0 total in 0 blocks; 0 free (0 chunks); 0 used
> >> hba parser context: 7168 total in 3 blocks; 2288 free (1 chunks); 4880 used
> >> LOCALLOCK hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
> >> Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
> >> ErrorContext: 8192 total in 1 blocks; 8160 free (6 chunks); 32 used
> >> 2015-04-01 14:23:27 EDT ERROR: out of memory
> >> 2015-04-01 14:23:27 EDT DETAIL: Failed on request of size 80.
> >> 2015-04-01 14:23:27 EDT STATEMENT: analyze verbose;
> >>
> >
> > We need to see all of that memory map, not just the last six lines of it.
> >
> >                         regards, tom lane
>
> I have used the procedures from this web page to try to get a core dump:
> https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
>
> If I follow the procedure and kill the postmaster pid while psql is
> connected to it, it does generate a core dump; however, no core dump is
> generated when the error I have been experiencing occurs.
>
> I guess at this point I am just going to rebuild from the Linux
> installation up. I also tried changing the work_mem to 16MB, but that
> didn't seem to make a difference.
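For what it's worth, an ERROR like this doesn't terminate the backend, so
it will never produce a core dump on its own; that's why you only get one
when you kill the process. If you do want a backtrace at the point of
failure, the usual trick from that wiki page is to attach gdb to the
backend beforehand and break on errfinish, which every ERROR report passes
through. A minimal sketch, assuming gdb and debug symbols are installed
(the PID below is a placeholder):

    # find the backend's PID with "SELECT pg_backend_pid();" in psql,
    # then attach to it (12345 is a placeholder)
    gdb -p 12345
    (gdb) break errfinish    # ERROR reports pass through errfinish
    (gdb) continue
    # now run "analyze verbose;" in psql; when the out-of-memory ERROR
    # is raised the breakpoint fires and you can capture the stack:
    (gdb) bt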
That said, I don't know that a core dump will be helpful here. What Tom
was asking for are all those lines in your log file that look like "blah
context: xxx total in xxx blocks; ...". That's the memory context dump
Postgres writes when it runs out of memory; it shows where all the memory
went, and the whole dump is what we need here, not just the last few lines.
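If digging that out of the log by hand is a pain, a rough sketch along
these lines should pull out all the context lines plus the error itself
(the log path is a placeholder; adjust it for your install):

    # pull the memory-context dump lines and the OOM error out of the
    # server log (the path below is a placeholder, not your actual log)
    grep -E 'total in [0-9]+ blocks|out of memory|Failed on request' \
        /path/to/postgresql.log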
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com