Nathan Bossart <nathandbossart@gmail.com> writes:
> On Wed, Sep 28, 2022 at 06:56:20PM -0400, Tom Lane wrote:
>> A point that still bothers me a bit about pg_stat_get_backend_idset is
>> that it could miss or duplicate some backend IDs if the user calls
>> pg_stat_clear_snapshot() partway through the SRF's run, and we reload
>> a different set of backend entries than we had before. I added a comment
>> about that, with an argument why it's not worth working harder, but
>> is the argument convincing? If not, what should we do?
> Isn't this an existing problem? Granted, it'd manifest differently with
> this patch, but ISTM we could already end up with too many or too few
> backend IDs if there's a refresh partway through.
Right. I'd been thinking the current code wouldn't generate duplicate IDs
--- but it might still produce duplicate query output, if a given
backend's entry ends up later in the array than it was before. So really
there aren't many guarantees here in any case.
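
To make that concrete, here's a standalone sketch (invented purely for
illustration; none of this is code from the patch) of how an index-based
scan over a snapshot array that is reloaded mid-scan can both skip an
entry and emit a duplicate:

#include <stdio.h>

static int  snapshot[8];
static int  snapshot_len;

/* Hypothetical stand-in for (re)loading the local backend-entry array. */
static void
load_snapshot(const int *ids, int n)
{
    int         i;

    snapshot_len = n;
    for (i = 0; i < n; i++)
        snapshot[i] = ids[i];
}

int
main(void)
{
    int         before[] = {101, 102, 103};     /* first snapshot */
    int         after[] = {103, 101, 102, 104}; /* after a refresh */
    int         i;

    load_snapshot(before, 3);

    for (i = 0; i < snapshot_len; i++)
    {
        printf("emit backend ID %d\n", snapshot[i]);

        /* Simulate pg_stat_clear_snapshot() arriving mid-scan. */
        if (i == 1)
            load_snapshot(after, 4);
    }

    /*
     * Prints 101, 102, 102, 104: backend 102 moved later in the array
     * and gets emitted twice, while 103 is never emitted at all.
     */
    return 0;
}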
>> - if (beid < 1 || beid > localNumBackends)
>> - return NULL;
> The reason I'd kept this in was that I was worried about overflow in
> the comparator function. Upon further inspection, I don't think there's
> actually any way that will produce incorrect results. And I'm not sure
> we should worry too much about such cases, anyway.
Ah, I see: if the user passes in a "backend ID" that is close to INT_MIN,
then the comparator's subtraction could overflow. We could fix that by
writing out the comparator code longhand ("if (a < b) return -1;" etc),
but I don't really think it's necessary. bsearch is guaranteed to
correctly report that such a key is not present, even if it takes a
strange search path through the array due to inconsistent comparator
results: even with wraparound, the subtraction yields zero only when the
two values are actually equal, so a bogus key can never falsely compare
as a match. So the test quoted above just serves to fail a bit more
quickly, but we certainly shouldn't be optimizing for the case of a bad
ID.
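
For reference, a longhand comparator of that sort would look about like
this (a generic sketch, not code from the patch; the names are invented):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Overflow-prone version: if the key is near INT_MIN and the array
 * element is positive (or vice versa), the subtraction wraps around and
 * the sign of the result is meaningless.  Even then, though, it can only
 * return zero when the two values are actually equal.
 */
static int
cmp_subtract(const void *key, const void *member)
{
    return *(const int *) key - *(const int *) member;
}

/*
 * Longhand version: explicit comparisons cannot overflow, so the result
 * is well-defined for any pair of ints.
 */
static int
cmp_longhand(const void *key, const void *member)
{
    int         a = *(const int *) key;
    int         b = *(const int *) member;

    if (a < b)
        return -1;
    if (a > b)
        return 1;
    return 0;
}

int
main(void)
{
    int         ids[] = {3, 7, 42, 99};     /* sorted "backend IDs" */
    int         bogus = INT_MIN + 1;        /* overflows cmp_subtract */

    /*
     * On typical two's-complement hardware both searches print (nil):
     * the bogus key is reported as absent either way, only the search
     * path differs.
     */
    printf("%p\n", bsearch(&bogus, ids, 4, sizeof(int), cmp_subtract));
    printf("%p\n", bsearch(&bogus, ids, 4, sizeof(int), cmp_longhand));
    return 0;
}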
> Overall, LGTM.
OK. Will push shortly.
regards, tom lane