If a PostgreSQL timestamp column is fetched with SQLGetData into a
string buffer (SQL_C_CHAR), the output buffer's length is exactly 20
bytes, and the timestamp value has a year outside the range 0-9999
(a five-digit year, or a BC date), the output buffer is overrun.
The core of the problem is this, in copy_and_convert_field():
>	case PG_TYPE_ABSTIME:
>	case PG_TYPE_DATETIME:
>	case PG_TYPE_TIMESTAMP_NO_TMZONE:
>	case PG_TYPE_TIMESTAMP:
>		len = 19;
>		if (cbValueMax > len)
>		{
>			/* sprintf(rgbValueBindRow, "%.4d-%.2d-%.2d %.2d:%.2d:%.2d",
>			   std_time.y, std_time.m, std_time.d, std_time.hh, std_time.mm, std_time.ss); */
>			stime2timestamp(&std_time, rgbValueBindRow, FALSE,
>				PG_VERSION_GE(conn, 7.2) ? (int) cbValueMax - len - 2 : 0);
>			len = strlen(rgbValueBindRow);
>		}
>		break;
It checks that the output buffer is at least 20 bytes wide, and bails
out if it isn't. But 20 bytes isn't enough for all timestamp values
that might come from a PostgreSQL server, e.g.:
postgres=# select length('1011-02-15 15:49:18 BC'::timestamp::text);
length
--------
22
(1 row)
A better approach is to pass the max length to stime2timestamp and let
it truncate the output. stime2timestamp uses sprintf, which is easy to
change to snprintf. As a bonus, the SQL-standard behavior is to
truncate the string anyway, rather than refuse to return anything when
the whole value doesn't fit.
While looking at this, I noticed that the SQL_C_WCHAR conversion code
doesn't add a NULL terminator to the string if the output buffer's size
is not divisible by two. That's an even more obscure corner case, but I
think we should make sure that the returned string is always
null-terminated, even if the buffer length is odd.
I've pushed fixes for these bugs to the git repository.
- Heikki