Yes, you are right. The index usage shown in that data comes from recursive queries that check the integrity of hierarchical structures from the bottom up. Your explanation clarified what is going on. Thank you.
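For reference, the checks are roughly of this shape (the table and column names below are made up, not the real schema); each recursion step is a single-row probe on the primary key, which is presumably why those index usage counters grow with the depth of the hierarchy:

    CREATE TABLE node (
        id        bigint PRIMARY KEY,
        parent_id bigint REFERENCES node (id)
    );

    -- Walk from a given row up to the root; every recursion step is
    -- one single-row lookup through node_pkey.
    WITH RECURSIVE ancestors AS (
        SELECT id, parent_id, 1 AS depth
        FROM node
        WHERE id = 12345
      UNION ALL
        SELECT n.id, n.parent_id, a.depth + 1
        FROM node n
        JOIN ancestors a ON n.id = a.parent_id
    )
    SELECT max(depth) FROM ancestors;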
My experiments with indexes were prompted by significant variance (1-180 ms) in these operations that appeared after I moved from version 12 to 14. The upgrade increased the checking time by ~250% with the existing implementation, and by ~40% after I rewrote the functions to run their queries as dynamic SQL. I chose that rewrite because of the obvious correlation between the amount of variance and the dataset the non-dynamic function happened to see when it was first called. I don't think my communication skills are good enough to describe the problem properly, but the information I received was useful. Thank you.
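In case it helps to picture the rewrite: it was essentially a switch from a statically planned query inside PL/pgSQL to the same query issued through EXECUTE, which is planned on every call and so cannot keep reusing a plan shaped by whatever data the function saw first. The function and table names below are only illustrative, not the real code:

    -- Original style: the embedded query can end up with a cached
    -- generic plan inside the session after the first few calls.
    CREATE OR REPLACE FUNCTION check_parent_static(p_id bigint)
    RETURNS bigint
    LANGUAGE plpgsql AS $$
    DECLARE
        v_parent bigint;
    BEGIN
        SELECT parent_id INTO v_parent FROM node WHERE id = p_id;
        RETURN v_parent;
    END;
    $$;

    -- Rewritten style: EXECUTE re-plans the statement on each call.
    CREATE OR REPLACE FUNCTION check_parent_dynamic(p_id bigint)
    RETURNS bigint
    LANGUAGE plpgsql AS $$
    DECLARE
        v_parent bigint;
    BEGIN
        EXECUTE 'SELECT parent_id FROM node WHERE id = $1'
            INTO v_parent
            USING p_id;
        RETURN v_parent;
    END;
    $$;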
Your statement seems obvious to me, but what I observe does not look like a deliberate choice by the planner. It turns out it is better to have a lighter general-purpose index than to try to build a targeted covering index for one particular kind of operation.
If both indexes are expected to be hit only once in the query and return only one row, their expected costs will be the same. In this case, the tie is broken arbitrarily, and that often means the most-recently created index will get chosen.
As the expected number of leaf page accesses in a given query goes up, the smaller index will start to look less expensive.
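A rough way to see both effects (made-up table, data, and index names, purely for illustration): with a single-value probe the two indexes get the same estimate and the choice between them is arbitrary, while a wide range scan makes the narrow index the clear winner.

    CREATE TABLE item (id bigint, payload text, extra int);
    INSERT INTO item
        SELECT g, repeat('x', 100), g % 10
        FROM generate_series(1, 100000) AS g;

    CREATE INDEX item_id_idx       ON item (id);                    -- small
    CREATE INDEX item_id_cover_idx ON item (id) INCLUDE (payload);  -- much larger
    ANALYZE item;

    -- One expected row, one leaf page from either index: the cost
    -- estimates come out the same and the tie is broken arbitrarily.
    EXPLAIN SELECT * FROM item WHERE id = 42;

    -- Thousands of expected rows: the smaller index needs fewer leaf
    -- pages to cover the same range, so it should now look cheaper.
    EXPLAIN SELECT * FROM item WHERE id BETWEEN 1 AND 5000;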
Cheers,
Jeff