On 2016-05-10 13:36:32 -0400, Robert Haas wrote:
> On Tue, May 10, 2016 at 12:31 PM, Tomas Vondra
> <tomas.vondra@2ndquadrant.com> wrote:
> > The following table shows the differences between the disabled and reverted
> > cases like this:
> >
> > sum('reverted' results with N clients)
> > ---------------------------------------- - 1.0
> > sum('disabled' results with N clients)
> >
> > for each scale/client count combination. So for example 4.83% means that with
> > a single client on the smallest data set, the sum of the 5 runs for 'reverted'
> > was about 1.0483x the sum for 'disabled'.
> >
> >   scale \ clients      1        16       32       64      128
> >    100               4.83%    2.84%    1.21%    1.16%    3.85%
> >    3000              1.97%    0.83%    1.78%    0.09%    7.70%
> >    10000            -6.94%   -5.24%  -12.98%   -3.02%   -8.78%
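To make the arithmetic behind each cell concrete, here is a minimal Python
sketch; the per-run tps values are invented placeholders, picked only so the
result reproduces the 4.83% in the scale-100/single-client cell:

    # Each cell is sum(reverted) / sum(disabled) - 1.0 over the 5 runs,
    # expressed as a percentage.
    def pct_diff(reverted_tps, disabled_tps):
        return (sum(reverted_tps) / sum(disabled_tps) - 1.0) * 100.0

    # Hypothetical per-run tps numbers, not the actual results:
    reverted = [10480, 10490, 10475, 10500, 10470]   # sums to 52415
    disabled = [10000, 10010,  9995, 10005,  9990]   # sums to 50000
    print("%+.2f%%" % pct_diff(reverted, disabled))  # prints +4.83%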
>
> /me scratches head.
>
> That doesn't seem like noise, but I don't understand the
> scale-factor-10000 results either.
Hm. Could you change max_connections by 1 and 2 and run the 10k tests
again for each value? I wonder whether we're seeing the effect of changed
shared memory alignment.
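
The idea being that bumping max_connections shifts the shared memory layout,
so if alignment is what's moving the 10k numbers, the results should move with
it. Roughly something like this, driven from Python; the data directory, base
max_connections, database name, run length, and the -S (read-only) assumption
are all placeholders for whatever the original 10k runs used:

    import subprocess

    DATADIR = "/path/to/pgdata"         # placeholder
    CLIENTS = [1, 16, 32, 64, 128]
    BASE_MAX_CONNECTIONS = 100          # whatever the original runs used

    for delta in (0, 1, 2):
        max_conn = BASE_MAX_CONNECTIONS + delta
        # start the server with the shifted max_connections
        subprocess.run(["pg_ctl", "-D", DATADIR, "-w",
                        "-o", "-c max_connections=%d" % max_conn, "start"],
                       check=True)
        try:
            for c in CLIENTS:
                print("max_connections=%d clients=%d" % (max_conn, c))
                subprocess.run(["pgbench", "-S", "-c", str(c), "-j", str(c),
                                "-T", "300", "pgbench"], check=True)
        finally:
            subprocess.run(["pg_ctl", "-D", DATADIR, "-w", "stop"],
                           check=True)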
Greetings,
Andres Freund