Sorry Marten, but I am using the ODBC interface at the Smalltalk level only, so I am not able to answer your questions. The only fact I can state is that the ODBC driver accepts strings as a one-byte array and answers strings as a two-byte array.
Sorry,
Josef Springer
Marten Feldtmann wrote:
Actually this is still a point I have to investigate. To make
all of this work correctly I need additional information
about how many bytes are actually used ... or an indication
that I have to deal with UNICODE data.
Within the ODBC driver interface I deal with cbLength and
cbPrecision (information delivered by the ODBC drivers). The
first one tells the interface how many bytes are reserved
for this column, and cbPrecision tells me what the length
was when the CREATE statement was executed.
Therefore under PostgreSQL it is:
for CHAR(15), cbLength is ALWAYS 30 and cbPrecision is always
15.
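(The factor of two is consistent with the driver reserving two bytes per declared character in a Unicode database. A minimal illustration of that arithmetic, assuming UCS-2/UTF-16-style storage; this is not ODBC code, just the byte counts:)

```python
# Assumption: in a Unicode database the driver reserves two bytes per
# declared character, so cbLength = 2 * cbPrecision for a CHAR column.
precision = 15                           # cbPrecision: length from CREATE TABLE
value = "A" * precision                  # a full 15-character value

single_byte = value.encode("latin-1")    # one byte per character
double_byte = value.encode("utf-16-le")  # two bytes per character

print(len(single_byte))   # 15 bytes -> would match cbLength = 15
print(len(double_byte))   # 30 bytes -> matches the observed cbLength = 30
```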
Both pieces of information are not enough - I need to know how much
of cbLength of this buffer I must use to create the string (and
of course how to interpret the byte stream: single-byte characters
or double-byte characters) - or whether cbLength changes its value
depending on the database encoding.
Perhaps it would be better to have:
CHAR(15) -> cbLength = 15, cbPrecision = 15 (if single-byte database)
CHAR(15) -> cbLength = 30, cbPrecision = 15 (if unicode database)
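(Once the interface knows which of the two cases applies, building the string is straightforward. A sketch of that step in Python, with a hypothetical helper name - this is not part of any ODBC API, and stripping trailing NUL padding stands in for the still-missing "how many bytes are actually used" information:)

```python
# Hypothetical helper: decode a raw column buffer of cbLength bytes,
# given whether the database delivers single- or double-byte characters.
def column_to_string(buffer: bytes, double_byte: bool) -> str:
    encoding = "utf-16-le" if double_byte else "latin-1"
    # Strip NUL padding, then the blank padding of a CHAR column.
    return buffer.decode(encoding).rstrip("\x00").rstrip()

# A cbLength = 30 buffer from a CHAR(15) column in a Unicode database:
raw = "Hello".ljust(15).encode("utf-16-le")
print(len(raw))                                  # 30
print(column_to_string(raw, double_byte=True))   # Hello
```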
but there also seem to be other parameters within the ODBC
specs to get this information - I just have not found the overall
picture yet.
Marten
Josef Springer schrieb:
Hi Marten,
I am using PostgreSQL 8.0.3 and ODBC 8.0.1.2 with VisualWorks 7. I am using a UNICODE database (with the same client connect datatype) and everything works fine with respect to the result data. It seems that anything other than UNICODE causes problems in any case.
Josef Springer
--
with kind regards,
Josef Springer
(Management)
-- the software company --
Orlando-di-Lasso Str. 2
D-85640 Putzbrunn