On Sun, Sep 29, 2019 at 3:38 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
>
> The UTF8 bits looks reasonable to me. I guess the other part of that
> question is whether we support any other multibyte encoding that
> supports combining characters. Maybe for cases other than UTF8 we can
> test for 0-width chars (using pg_encoding_dsplen() perhaps?) and drive
> the upper/lower decision off that? (For the UTF8 case, I don't know if
> Juanjo's proposal is better than pg_encoding_dsplen. Both seem to boil
> down to a bsearch, though unicode_norm.c's table seems much larger than
> wchar.c's).
>
Using pg_encoding_dsplen() looks like the way to go. The normalization
logic included in ucs_wcwidth() already does what is needed to avoid the
issue, so there is no need to use unicode_norm_table.h. UTF8 is the
only multibyte encoding that can return a 0-width dsplen, so this
approach would also work for all the other encodings, which do not use
combining characters.
Please find attached a patch with this approach.
Regards,
Juan José Santamaría Flecha