On Sat, Jun 13, 2020 at 2:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> The performance can vary based on the qualification, where some workers
> discard more rows than others. With the current system, where the step
> size is one, the probability of unequal work among the workers is quite
> low compared to larger step sizes.
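
To put toy numbers on that concern, here's a quick simulation (my own
sketch: made-up costs and a simplified grab-the-next-chunk scheduler,
not the actual parallel scan code) comparing step size 1 against step
size 64 when one region of the heap is expensive:

    #include <stdio.h>

    #define NBLOCKS  1024
    #define NWORKERS 4

    static double
    simulate(int step_size, const double *cost)
    {
        double  finish[NWORKERS] = {0};
        int     next = 0;

        while (next < NBLOCKS)
        {
            int     w, j, best = 0;

            /* The worker that frees up first grabs the next step_size blocks. */
            for (w = 1; w < NWORKERS; w++)
                if (finish[w] < finish[best])
                    best = w;
            for (j = 0; j < step_size && next < NBLOCKS; j++, next++)
                finish[best] += cost[next];
        }

        /* Elapsed time is when the slowest worker finishes. */
        double  max = 0;
        for (int w = 0; w < NWORKERS; w++)
            if (finish[w] > max)
                max = finish[w];
        return max;
    }

    int
    main(void)
    {
        double  cost[NBLOCKS];

        /* Uniform cost, except one expensive region at the end of the heap. */
        for (int i = 0; i < NBLOCKS; i++)
            cost[i] = (i >= NBLOCKS - 64) ? 10.0 : 1.0;

        printf("step size  1: elapsed %.0f\n", simulate(1, cost));
        printf("step size 64: elapsed %.0f\n", simulate(64, cost));
        return 0;
    }

In that contrived setup, step size 1 spreads the expensive region
across all four workers, while with step size 64 one worker can end up
holding the whole hot region near the end of the scan and finish
roughly twice as late.
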
It seems like this would require incredibly bad luck, though. If the
step size is less than 1/1024 of the relation size, and we ramp down
for, say, the last 5% of the relation, then the worst case is that
chunk 972 of 1024 is super-slow compared to all the other chunks, so
that it takes longer to process chunk 972 alone than it does to process
chunks 973-1024 combined. Since those amount to about 50 chunks' worth
of work, chunk 972 has to be something like 50x worse than all the
others. That's not impossible, but it doesn't seem like something that
is going to happen often enough to be worth worrying about very much.
I'm not saying it will never happen. I'm just
skeptical about the benefit of adding a GUC or reloption for a corner
case like this. I think people will fiddle with it when it isn't
really needed, and won't realize it exists in the scenarios where it
would have helped. And then, because we have the setting, we'll have
to keep it around forever, even as we improve the algorithm in other
ways, which could become a maintenance burden. I think it's better to
treat stuff like this as an implementation detail rather than
something we expect users to adjust.
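
For anyone who wants to check the arithmetic above, here's the
back-of-the-envelope version (1024 chunks and a 5% ramp-down are just
the numbers from this thread, not anything the code hard-wires):

    #include <stdio.h>

    int
    main(void)
    {
        int     total_chunks = 1024;    /* step size ~1/1024 of the relation */
        double  rampdown_frac = 0.05;   /* ramp down over the last 5% */
        int     rampdown_chunks = (int) (total_chunks * rampdown_frac);
        int     last_full_chunk = total_chunks - rampdown_chunks;

        /*
         * For the last full-size chunk to stall the scan, it must take
         * longer than all of the ramped-down chunks after it combined,
         * i.e. be roughly rampdown_chunks times slower than average.
         */
        printf("last full-size chunk: %d of %d\n", last_full_chunk, total_chunks);
        printf("required slowdown: ~%dx\n", rampdown_chunks);
        return 0;
    }

That prints chunk 973 of 1024 and a ~51x required slowdown, which is
where the "chunk 972" and "50x" figures above come from, give or take
rounding.
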
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company