On Tue, Feb 14, 2023 at 10:44 AM Andres Freund <andres@anarazel.de> wrote:
>
> Hi,
>
> On 2023-02-14 09:26:47 +1100, Peter Smith wrote:
> > I've observed suggested test cases get rejected as being overkill, or
> > because they would add precious seconds to the test execution. OTOH, I
> > felt such tests would still help gain some additional percentages from
> > the "code coverage" stats. The kind of tests I am thinking of don't
> > necessarily need a huge disk/CPU - but they just take longer to run
> > than anyone has wanted to burden the build-farm with.
>
> I'd say it depends on the test whether it's worth adding. Code coverage for
> its own sake isn't that useful; tests have to actually verify something
> useful. And tests have costs beyond runtime, e.g. new tests tend to fail in
> some edge cases.
>
> E.g. just having tests hit more lines, without verifying that the behaviour is
> actually correct, only provides limited additional assurance. It's also not
> very useful to add a very expensive test that provides only a very small
> additional amount of coverage.
>
> IOW, even if we add more test categories, it'll still be a tradeoff.
>
>
> > Sorry for the thread interruption -- but I thought this might be the
> > right place to ask: What is the recommended way to deal with such
> > tests intended primarily for better code coverage?
>
> I don't think that exists today.
>
> Do you have an example of the kind of test you're thinking of?

No, I have nothing specific in mind, but maybe tests like these:
- tests that provoke obscure errors which would never be reached
without something deliberately designed to fail in a certain way
- tests for trivial user errors apparently deemed not worth bloating
the regression tests with -- e.g. many errorConflictingDefElem calls
are never reached [1] (see the first sketch below)
- timing-related or error tests where some long (multi-second) delay
is a necessary part of the setup (also sketched below)
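
To make the last two bullets concrete, here are two rough sketches in
SQL. The object names are invented and the error texts are from
memory, so treat these as illustrations rather than ready-made tests:

    -- (2) a duplicated option should reach one of the
    -- errorConflictingDefElem paths in subscriptioncmds.c, with
    -- something like: ERROR:  conflicting or redundant options
    CREATE SUBSCRIPTION regress_sub CONNECTION 'dbname=test'
        PUBLICATION regress_pub WITH (enabled = true, enabled = false);

    -- (3) a test whose setup inherently needs a multi-second wait,
    -- e.g. exercising the statement_timeout error path:
    SET statement_timeout = '2s';
    SELECT pg_sleep(5);
    -- expected: ERROR:  canceling statement due to statement timeout

The second sketch is exactly the sort of thing that works but costs
several wall-clock seconds for a single extra line of coverage.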
------
[1] https://coverage.postgresql.org/src/backend/commands/subscriptioncmds.c.gcov.html
Kind Regards,
Peter Smith.
Fujitsu Australia