Hi all,
Having recently updated a number of components on my system (W2K,
Cygwin), I have today picked up a PostgreSQL project that I had lying
around. As a first step I also installed a recent ODBC driver. One of
the first things I noticed after that was a new bug, which I promptly
investigated.
The symptom is that the query "SELECT COUNT(*) FROM anytable" returns
an empty string. The same happens with every aggregate function I
have tried.
Delving into the C source of the ODBC wrapper I am using, I find that
this function call
rc = SQLColAttributes(stmt, col, SQL_COLUMN_DISPLAY_SIZE,
                      NULL, 0, NULL, &cbValueMax);
puts 0 into "cbValueMax".
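To spell out why that yields an empty string: as far as I can tell
from its source, my wrapper sizes the fetch buffer from exactly this
attribute, so a display size of 0 leaves room for nothing but the
terminating NUL. A stripped-down sketch of the pattern (the function
name and the omitted error handling are mine, not the wrapper's
literal code):

#include <stdio.h>
#include <stdlib.h>
#include <sql.h>
#include <sqlext.h>

/* Sketch of the wrapper's fetch path as I understand it. */
void fetch_first_column(SQLHSTMT stmt)
{
    SQLLEN cbValueMax = 0;
    SQLLEN ind = 0;
    char *buf;

    SQLExecDirect(stmt, (SQLCHAR *) "SELECT COUNT(*) FROM anytable",
                  SQL_NTS);
    SQLColAttributes(stmt, 1, SQL_COLUMN_DISPLAY_SIZE,
                     NULL, 0, NULL, &cbValueMax);

    /* cbValueMax comes back as 0, so the buffer has room for the
     * terminating NUL only ... */
    buf = malloc(cbValueMax + 1);
    SQLFetch(stmt);
    SQLGetData(stmt, 1, SQL_C_CHAR, buf, cbValueMax + 1, &ind);

    /* ... and the fetched "value" is the empty string. */
    printf("value = '%s'\n", buf);
    free(buf);
}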
The implementation of SQLColAttributes() in psqlodbc/results.c has
this handler (code re-wrapped to 80-character lines):

case SQL_COLUMN_DISPLAY_SIZE: /* == SQL_DESC_DISPLAY_SIZE */
    value = fi ? fi->display_size
               : pgtype_display_size(stmt, field_type, col_idx,
                                     unknown_sizes);
    mylog("PGAPI_ColAttributes: col %d, display_size= %d\n",
          col_idx, value);
    break;
And indeed I find that log entry in my log files:
[2124]PGAPI_ColAttributes: col 0, display_size= 0
The variable "fi" is only set, when "ci->drivers.parse" is true, so I
deactivate the checkbox "Parse Statements" on the data source and my
aggregates begin to work again. So I have a workaround.
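For anyone connecting with a connection string instead of the DSN
dialog: unless I misread psqlodbc's dlg_specific.c, the checkbox maps
to the "Parse" keyword, so the same workaround should look roughly
like this (DSN and credentials are placeholders):

#include <sql.h>
#include <sqlext.h>

/* Workaround without the DSN dialog: turn statement parsing off in
 * the connection string. "Parse" as a keyword is my reading of
 * dlg_specific.c; DSN/UID/PWD are placeholders. */
SQLRETURN connect_without_parse(SQLHDBC dbc)
{
    SQLCHAR conn_str[] = "DSN=mydsn;UID=myuser;PWD=secret;Parse=0;";
    SQLCHAR out[1024];
    SQLSMALLINT out_len;

    return SQLDriverConnect(dbc, NULL, conn_str, SQL_NTS,
                            out, (SQLSMALLINT) sizeof(out), &out_len,
                            SQL_DRIVER_NOPROMPT);
}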
Questions:
- Why is "fi->display_size" not set for aggregate function results?
- If that is intentional or unavoidable, shouldn't the code in
psqlodbc/results.c read
value = fi && 0 != fi->display_size
? fi->display_size
: pgtype_display_size(...
or something similar?
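Written out in full, with the arguments copied from the existing
handler, I mean something like:

case SQL_COLUMN_DISPLAY_SIZE: /* == SQL_DESC_DISPLAY_SIZE */
    /* Fall back to pgtype_display_size() not only when fi is
     * missing, but also when it carries no display size, as it
     * apparently does for aggregate columns. */
    value = (fi && 0 != fi->display_size)
        ? fi->display_size
        : pgtype_display_size(stmt, field_type, col_idx,
                              unknown_sizes);
    mylog("PGAPI_ColAttributes: col %d, display_size= %d\n",
          col_idx, value);
    break;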
benny