Re: Out of memory with ODBC - Mailing list pgsql-general

From Richard Huxton
Subject Re: Out of memory with ODBC
Msg-id 470F9DE1.20602@archonet.com
In response to Re: Out of memory with ODBC  ("Relyea, Mike" <Mike.Relyea@xerox.com>)
Responses Re: Out of memory with ODBC  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: Out of memory with ODBC  ("Relyea, Mike" <Mike.Relyea@xerox.com>)
List pgsql-general
Relyea, Mike wrote:
>> From: Richard Huxton [mailto:dev@archonet.com]
>>
>> Relyea, Mike wrote:
>>> If I execute this query in PGAdmin III it runs without any errors and
>>> returns no records.
>> Hmm. So what's different about the queries?
>
> Nothing.  The SQL is identical.  I copied out of the log file and pasted
> into PGAdmin.

Must be context then.

>> [145.188]ERROR from backend during send_query: 'SERROR'
>> [145.188]ERROR from backend during send_query: 'C53200'
>> [145.188]ERROR from backend during send_query: 'Mout of memory'
>> [145.188]ERROR from backend during send_query: 'DFailed on
>> request of size 16.'
>> [145.188]ERROR from backend during send_query: 'Faset.c'
>> [145.188]ERROR from backend during send_query: 'L712'
>> [145.188]ERROR from backend during send_query: 'RAllocSetAlloc'
>>
>> OK, so this seems to be a server-side error, which means
>> something should be in the server logs. Is there anything?
>
> I've pasted below what I found immediately before the error.

Thanks

>> Oh, and I'd probably split that query into about a dozen
>> smaller ones - one per statement.
>
> What do you mean one per statement?  One per transaction?

Send one query for each SQL statement. That way, if you get an error you
know which statement failed without having to work through the whole batch.
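A minimal sketch of the idea, using Python's stdlib sqlite3 purely as a
stand-in for the real connection (the statement list and table name are
made up for illustration):

```python
import sqlite3

def run_statements(conn, statements):
    """Execute statements one at a time so a failure points at the culprit."""
    for i, sql in enumerate(statements, 1):
        try:
            conn.execute(sql)
        except sqlite3.Error as e:
            # One statement per call means the error names a single statement,
            # not one opaque multi-statement batch.
            print(f"statement {i} failed: {e}")
            raise
    conn.commit()

conn = sqlite3.connect(":memory:")
run_statements(conn, [
    "CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)",
    "INSERT INTO t (val) VALUES ('a')",
    "INSERT INTO t (val) VALUES ('b')",
])
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # prints 2
```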

> TopMemoryContext: 475184 total in 11 blocks; 12016 free (27 chunks);
> 463168 used
> Local Buffer Lookup Table: 8192 total in 1 blocks; 1776 free (0 chunks);
> 6416 used
> TopTransactionContext: 122880 total in 4 blocks; 14064 free (5 chunks);
> 108816 used
> Type information cache: 8192 total in 1 blocks; 1800 free (0 chunks);
> 6392 used
> MessageContext: 1946198040 total in 258 blocks; 26624 free (43 chunks);
> 1946171416 used

Well, I don't have to be a developer to see that if there's a memory
problem, that big number starting 1946... is it. If that's bytes, it's
~1.9GB.
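Just to sanity-check the arithmetic, taking the figure from the dump
above as bytes:

```python
# "MessageContext: 1946198040 total" from the memory context dump
message_context_bytes = 1946198040

# Decimal gigabytes, as in "~1.9GB"
print(round(message_context_bytes / 10**9, 2))   # prints 1.95
# Binary gibibytes, for comparison
print(round(message_context_bytes / 2**30, 2))   # prints 1.81
```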

Do you see a backend process growing to 2GB+ before failure?
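One way to check, as an untested sketch (the `-C` process-name matching
and KB-based rss column are Linux `ps` behaviour; flags vary by platform):

```shell
# Snapshot resident set size of postgres backends, largest first.
# On Linux, ps reports rss in KB, hence the /1024 to get MB.
ps -C postgres -o pid=,rss=,args= 2>/dev/null \
  | sort -k2 -rn \
  | awk '{printf "%s %.0f MB %s\n", $1, $2/1024, $3}'
```

Run it periodically while the query executes and watch whether one
backend's RSS climbs toward 2GB before the error appears.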

A quick rummage through the source and I find this file,
backend/utils/mmgr/README containing:

MessageContext --- this context holds the current command message from
the frontend, as well as any derived storage that need only live as long
as the current message (for example, in simple-Query mode the parse and
plan trees can live here).  This context will be reset, and any children
deleted, at the top of each cycle of the outer loop of PostgresMain.
This is kept separate from per-transaction and per-portal contexts
because a query string might need to live either a longer or shorter
time than any single transaction or portal.

Hmm - I can't think how that could reach 1.9GB in size, especially since
it has to be something different between a "raw" connection and how ODBC
is doing things.

Can you reproduce this immediately (connect, query, crash), or does the
system have to run for a while first?

--
   Richard Huxton
   Archonet Ltd
