Re: Threading crash using ODBC - Mailing list pgsql-odbc

From markw
Subject Re: Threading crash using ODBC
Date
Msg-id 3DD90514.6060704@mohawksoft.com
Whole thread Raw
In response to Threading crash using ODBC  (markw <markw@mohawksoft.com>)
List pgsql-odbc

Hiroshi Inoue wrote:

>markw wrote:
>
>
>>Hiroshi Inoue wrote:
>>
>>
>>
>>>markw wrote:
>>>
>>>
>
>[snip]
>
>
>
>>>>Actually, I tracked it down. It may be something that you are
>>>>interested in, I had my stack set to 64K, when I bumped it up to 256K,
>>>>the problem went away.
>>>>
>>>>Both the unixODBC and PostgreSQL drivers are similar, or at least have
>>>>a similar origin. The stack utilization deserve at least a little
>>>>scrutiny.
>>>>
>>>>
>>>>
>>>>
>>>I've improved a little about the stack utilization of
>>>our ODBC driver and some applications could work with
>>>the improvement though I'm not sure if it can work with
>>>your application.
>>>
>>>
>>>
>>>
>>Well, I'm flexible, and I'm sure it would be helpful to many other
>>developers. What is a good estimate for thread stack size?
>>
>>
>
>Though the unixODBC and PostgreSQL drivers have a similar origin,
>they are pretty different now. As for stack utilization, I removed
>MAX_MESSAGE_LEN(65536) byte buffers completely and so most
>applications may be able to work with 64K stack size.
>
>
That's great news. Did you use malloc or alloca?

The reason I ask: I know alloca is "discouraged" and not POSIX, but it
is supported on a lot of platforms (including Windows as _alloca), and
if wrapped in a macro it should be extremely portable. Given the choice,
I'd rather use stack memory for short-term allocations and just beef up
the stack than use malloc.

For server programs that are expected to run long term, think months or
years, heap fragmentation is a serious issue. Stack-based allocations
are a good way to avoid that problem.
