On Thu, 2021-08-19 at 13:49 -0400, John Naylor wrote:
> I had a couple further thoughts:
>
> 1. The coding historically used normal comparison and branching for
> everything, but recently it only does that to detect control
> characters, and then goes through a binary search (and with this
> patch, two binary searches) for everything else. Although the
> performance regression of the current patch seems negligible,
If I'm reading the code correctly, ASCII characters don't go through
the binary searches; they're already short-circuited at the beginning
of mbbisearch(). On my machine that's enough for the patch to be a
performance _improvement_ for ASCII, not a regression.
Does adding another short-circuit at the top improve the
microbenchmarks noticeably? I assumed the compiler had pretty well
optimized all that already.
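For reference, the early exit I mean is the range check at the top of
mbbisearch() (paraphrasing from wchar.c; pg_wchar is effectively a
uint32):

    static int
    mbbisearch(pg_wchar ucs, const struct mbinterval *table, int max)
    {
        int     min = 0;
        int     mid;

        /* ASCII sits below table[0].first, so it bails out here
         * without ever entering the search loop */
        if (ucs < table[0].first || ucs > table[max].last)
            return 0;

        while (max >= min)
        {
            mid = (min + max) / 2;
            if (ucs > table[mid].last)
                min = mid + 1;
            else if (ucs < table[mid].first)
                max = mid - 1;
            else
                return 1;
        }

        return 0;
    }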
> we could use almost the same branches to fast-path printable ascii
> text, like this:
>
> + /* fast path for printable ASCII characters */
> + if (ucs >= 0x20 || ucs < 0x7f)
> + return 1;
Should be && instead of ||, I think; as written, the condition is true
for every character, so everything would take the fast path.
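That is:

    /* fast path for printable ASCII characters */
    if (ucs >= 0x20 && ucs < 0x7f)
        return 1;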
> +
> /* test for 8-bit control characters */
> if (ucs == 0)
> return 0;
>
> - if (ucs < 0x20 || (ucs >= 0x7f && ucs < 0xa0) || ucs > 0x0010ffff)
> + if (ucs < 0xa0 || ucs > 0x0010ffff)
> return -1;
>
> 2. As written, the patch adds a script that's very close to an
> existing one, and emits a new file that has the same type of contents
> as an existing one, both of which are #included in one place. I
> wonder if we should consider having just one script that ingests both
> files and emits one file. All we need is for mbinterval to encode the
> character width, but we can probably do that with a bitfield like the
> normprops table to save space. Then, we only do one binary search.
> Opinions?
I guess it just depends on what the end result looks like and how it
performs. We'd save seven or so search steps in the worst case?
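To make the idea concrete, here's a rough sketch of a merged table with
the width packed into spare bits (the field names, bit layout, and
function name are hypothetical, not from the patch):

    #include <stdint.h>

    /*
     * One interval table covering both the combining and wide ranges.
     * Unicode code points need only 21 bits, so the width fits easily
     * in the leftover bits, much as the normprops table packs its
     * payload.
     */
    struct mbinterval
    {
        uint32_t    first;
        uint32_t    last : 24;
        uint32_t    width : 8;  /* 0 for combining, 2 for wide */
    };

    /*
     * Single binary search over the merged table.  Returns the stored
     * width, or -1 when ucs falls inside no interval (the caller would
     * then use the default width of 1).
     */
    static int
    mbwidthsearch(uint32_t ucs, const struct mbinterval *table, int max)
    {
        int     min = 0;

        if (ucs < table[0].first || ucs > table[max].last)
            return -1;

        while (max >= min)
        {
            int     mid = (min + max) / 2;

            if (ucs > table[mid].last)
                min = mid + 1;
            else if (ucs < table[mid].first)
                max = mid - 1;
            else
                return table[mid].width;
        }
        return -1;
    }

(Merging two similarly sized tables only adds about one level to the
search tree while dropping the second search entirely, so the worst-case
savings should be close to a full log2(n).)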
--Jacob