* Claudio Freire (klaussfreire@gmail.com) wrote:
> On Wed, Jan 16, 2013 at 12:13 AM, Stephen Frost <sfrost@snowman.net> wrote:
> > Sequentially scanning the *same* data over and over is certainly
> > counterproductive. Synchroscans fixed that, yes. That's not what we're
> > talking about though- we're talking about scanning and processing
> > independent sets of data using multiple processes.
>
> I don't see the difference. Blocks are blocks (unless they're cached).
Not quite. Having to go out to the kernel isn't free. Additionally,
prior to the synchronized-scan work, large seq scans would also pollute
our shared buffers, which didn't help things.
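To be precise about the fix: the pollution part was addressed by the
small ring of buffers that large seq scans now cycle through, so a big
scan can't evict the rest of shared buffers. A toy sketch of the idea,
with made-up names rather than the real bufmgr API:

#define RING_SIZE 32            /* arbitrary here; the real ring is sized in kB */

typedef struct ScanRing
{
    int     next;
    int     buf_ids[RING_SIZE]; /* the only shared buffers this scan will use */
} ScanRing;

/*
 * Hand back the same few buffers round-robin, so a multi-gigabyte scan
 * recycles RING_SIZE buffers instead of flushing the whole buffer cache.
 */
static int
ring_next_buffer(ScanRing *ring)
{
    ring->next = (ring->next + 1) % RING_SIZE;
    return ring->buf_ids[ring->next];
}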
> > It's certainly
> > possible that in some cases that won't be as good
>
> If memory serves me correctly (and it does, I suffered it a lot), the
> performance hit is quite considerable. Enough to make it "a lot worse"
> rather than "not as good".
I feel like we must not be communicating very well.
If the CPU is pegged at 100% and the I/O system is at 20%, adding a
second CPU at 100% will bring the I/O load up to 40% and you're now
processing data twice as fast overall. If you're running a single CPU
at 20% and your I/O system is at 100%, then adding another CPU isn't
going to help and may even degrade performance by turning what was
sequential I/O into competing, seek-heavy I/O. The goal for the
optimizer will be to model the plan's costs to account for exactly that
trade-off, as best it can.
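To put numbers on it, here's a toy sketch in C of the heuristic I have
in mind -- entirely made up, nothing here comes from the actual planner.
It adds workers only while the scan is CPU-bound and the projected
aggregate I/O load still fits within the I/O system:

/*
 * Illustrative only. worker_cpu is the fraction of one CPU a single
 * worker uses; worker_io is the fraction of the I/O system's bandwidth
 * it consumes. Both names are hypothetical.
 */
static int
choose_workers(double worker_cpu, double worker_io, int max_workers)
{
    int     n = 1;

    /* Stop once I/O, not CPU, would become the bottleneck. */
    while (n < max_workers &&
           worker_cpu >= worker_io &&
           (n + 1) * worker_io <= 1.0)
        n++;

    return n;
}

With the numbers above, choose_workers(1.0, 0.2, 8) gives 5 (a
CPU-pegged scan keeps gaining workers until the disks saturate), while
choose_workers(0.2, 1.0, 8) gives 1 (an I/O-bound scan gets nothing
from more CPUs).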
> > but there will be
> > quite a few cases where it's much, much better.
>
> Just cached segments.
No, certainly not just cached segments: any situation where the CPU,
not the I/O system, is the bottleneck -- expensive quals, aggregation,
anything where processing the tuples costs more than reading them.
Thanks,
Stephen