Re: Re: [COMMITTERS] pgsql: Fix mapping of PostgreSQL encodings to Python encodings. - Mailing list pgsql-hackers

From Heikki Linnakangas
Subject Re: Re: [COMMITTERS] pgsql: Fix mapping of PostgreSQL encodings to Python encodings.
Msg-id 5006D380.1060202@enterprisedb.com
In response to Re: Re: [COMMITTERS] pgsql: Fix mapping of PostgreSQL encodings to Python encodings.  (Jan Urbański <wulczer@wulczer.org>)
Responses Re: Re: [COMMITTERS] pgsql: Fix mapping of PostgreSQL encodings to Python encodings.  (Jan Urbański <wulczer@wulczer.org>)
List pgsql-hackers
On 14.07.2012 17:50, Jan Urbański wrote:
> On 13/07/12 13:38, Jan Urbański wrote:
>> On 12/07/12 11:08, Heikki Linnakangas wrote:
>>> On 07.07.2012 00:12, Jan Urbański wrote:
>>>> So you're in favour of doing unicode -> bytes by encoding with UTF-8
>>>> and then using the server's encoding functions?
>>>
>>> Sounds reasonable to me. The extra conversion between UTF-8 and UCS-2
>>> should be quite fast, and it would be good to be consistent in the way
>>> we do conversions in both directions.
>>>
>>
>> I'll implement that then (sorry for not following up on that earlier).
>
> Here's a patch that always encodes Python unicode objects using UTF-8
> and then uses Postgres's internal functions to produce bytes in the
> server encoding.

Thanks.

If pg_do_encoding_conversion() throws an error, you don't get a chance 
to call Py_DECREF() to release the string. Is that a problem?

If an error occurs in PLy_traceback(), after incrementing 
recursion_depth, you don't get a chance to decrement it again. I'm not 
sure if the Py* function calls can fail, but at least seemingly trivial 
things like initStringInfo() can throw an out-of-memory error.
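A hedged sketch (not a tested patch) of one way to keep the counter balanced, using the existing PG_TRY/PG_CATCH machinery; apart from recursion_depth itself, the shape of the surrounding function body is assumed here:

/* inside PLy_traceback(): */
recursion_depth++;
PG_TRY();
{
    /* ... existing body: Py* calls, initStringInfo(), etc. ... */
}
PG_CATCH();
{
    recursion_depth--;
    PG_RE_THROW();
}
PG_END_TRY();
recursion_depth--;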

-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

