On Fri, 2013-04-05 at 21:39 +0300, Ants Aasma wrote:
> Yes, I just managed to get myself some time so I can look at it some
> more. I was hoping that someone would weigh in on what their
> preferences are on the performance/effectiveness trade-off and the
> fact that we need to use assembler to make it fly, so I'd know how to
> go forward.
My opinion is that we don't need to be perfect as long as we catch 99%
of random errors and we don't have any major blind spots. Also, the
first version doesn't necessarily need to perform well; we can leave
optimization as future work. Requiring assembly to achieve those
optimizations is a drawback for maintainability, but it seems isolated,
so I don't think it's a major problem.
Ideally, the algorithm would also be suitable for WAL checksums, so we
could eventually adopt it there too.
> The worst blind spot that I could come up with was an even number of
> single-bit errors that all fall on the least significant bit of a
> 16-bit word. This type of error can occur in memory chips when row
> lines go bad, usually stuck at zero or one.
We're not really trying to catch memory errors anyway. Of course it
would be nice, but I would rather have more people using a slightly
flawed algorithm than fewer using it because it has too great a
performance impact.
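
To illustrate the kind of blind spot in question, here's a minimal C
sketch (a generic XOR-folding checksum, not the algorithm being
proposed) showing how two single-bit errors in the same bit position
cancel each other out and go undetected:

/* Generic illustration: a checksum that XOR-folds 16-bit words is
 * blind to any even number of flips in the same bit position, such
 * as the stuck-LSB pattern described above. */
#include <stdint.h>
#include <stdio.h>

static uint16_t xor_fold16(const uint16_t *words, size_t n)
{
    uint16_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum ^= words[i];
    return sum;
}

int main(void)
{
    uint16_t page[4] = { 0x1234, 0x5678, 0x9abc, 0xdef0 };
    uint16_t before = xor_fold16(page, 4);

    page[0] ^= 1;   /* flip LSB of one word */
    page[2] ^= 1;   /* ...and of another: two single-bit errors */

    uint16_t after = xor_fold16(page, 4);
    printf("%04x vs %04x: %s\n", before, after,
           before == after ? "collision, error undetected" : "detected");
    return 0;
}

Stronger mixing (multiplication, rotation) makes such same-column
cancellations far less likely, which is where the performance/
effectiveness trade-off above comes in.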
> Unless somebody tells me not to waste my time, I'll go ahead and come
> up with a workable patch by Monday.
Sounds great to me, thank you.
Regards,
	Jeff Davis