Thomas:
On Fri, Mar 22, 2019 at 2:03 PM Thomas Güttler
<guettliml@thomas-guettler.de> wrote:
> > I'm not too sure, but I read ( in the code ) you are measuring
> > nearly incompressible urandom data against a highly compressible (
...
> for this case, toast-tables/wal is a detail of the implementation.
> This test does not care about the "why it takes longer". It just generates
> a performance chart.
> Yes, it does exactly what you say: it compares
> nearly incompressible urandom data against highly compressible data.
> In my case, I will get nearly random data (binary PDF, JPG, ...). And that's why
> I wanted to benchmark it.
Well, if all you wanted was a benchmark, a performance chart, then you
have it. I assumed you wanted to know more, such as where the bottleneck
is and how to avoid it. My fault.
I was specifically confused because, IIRC, you said ASCII data took
much longer than binary, which is a completely different test. You can
test ASCII vs. binary by generating chunks of random bytes in, say, the
0-127 range and sending that data once as-is and once more with every
byte shifted left by one bit; that should expose ASCII/binary
differences. If instead you test random vs. uniform data, I think the
difference is in the randomness itself, and you could just test sending
'\x92'*i or something similar (a rough sketch of both follows below).
But if you are only interested in the benchmark, you already have it.
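To make that concrete, here is a rough Python sketch of the payload
generation I have in mind; the sizes and function name are my own
illustration, not taken from your benchmark:

    import os

    def make_payloads(size):
        # Random bytes masked to the 0-127 range: random content, ASCII byte range.
        ascii_like = bytes(b & 0x7F for b in os.urandom(size))
        # The same data with every byte shifted left by one bit, pushing it
        # out of the ASCII range while keeping the same randomness.
        binary_like = bytes((b << 1) & 0xFF for b in ascii_like)
        # A constant, highly compressible payload, as in the '\x92'*i suggestion.
        constant = b'\x92' * size
        return ascii_like, binary_like, constant

Timing an INSERT of each of those three through psycopg would separate
the ASCII/binary effect from the compressible/incompressible effect.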
Just one thing: a single graph with the two labels "ascii" and "random"
is misleading, as constant/random is orthogonal to ascii/binary (see the
second sketch below). But, as I said, my fault; all the text about
psycopg and other details led me to think you wanted some kind of
diagnosis and improvements.
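For what it is worth, the two axes give four payload classes rather
than two; something like the following (illustrative only, with an
arbitrary size) would make the chart labels unambiguous:

    import os

    size = 1024 * 1024  # arbitrary example size

    payloads = {
        "constant-ascii":  b'a' * size,                                # compressible, ASCII
        "constant-binary": b'\x92' * size,                             # compressible, non-ASCII
        "random-ascii":    bytes(b & 0x7F for b in os.urandom(size)),  # high-entropy, ASCII range
        "random-binary":   os.urandom(size),                           # high-entropy, full byte range
    }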
Regards.