Hi Michael,
thanks for taking the time to read it and for your feedback.
On 17/06/2019 04:01, Michael Paquier wrote:
>
> pgbench data is rather compressible per the format of its attributes,
> hence I am ready to bet that the compressibility would much much less
> if you use random text data for example.
Having compression enabled in production on several DBs, I can tell you that WAL production goes down by around 50% in my case.
In my post, compression brings the generated WAL down to about 1/2 when wal_log_hints is not enabled, and to about 1/3 when it is.
So, about compressibility, I think that pgbench data behaves similarly to production data, or at least to the production data I have in my databases.
I am curious to hear other people's experience on this ML.
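For anyone who wants to reproduce the comparison, the knobs involved are just these two GUCs (a minimal postgresql.conf sketch of the two settings being compared, not a copy of my actual config):

   # postgresql.conf
   wal_compression = on    # compress full-page images written to WAL
   wal_log_hints = on      # also write full pages for hint-bit-only changes
                           # (set to off for the other set of runs)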
>
> The amount of WAL generated also depends on the time it takes to run a
> transactions, and on the checkpoint timing, so I think that would you
> get better results by using a fixed number of transactions if you use
> pgbench, but that won't compare as much as a given workload in one
> session so as you can make sure that the same amount of WAL and full
> pages get generated.
That's a good remark, thanks. I did not think about it and I will keep it in mind next time. I instead averaged the results over multiple runs, but setting an explicit number of transactions is the way to go.
Results, by the way, were quite stable over all the runs (in terms of generated WAL files and TPS).
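For the record, this is roughly what Michael is suggesting, sketched as pgbench invocations (client and transaction counts are illustrative, not the ones I used):

   # what I did: fixed duration, so the amount of work depends on TPS
   pgbench -c 8 -j 8 -T 600 bench_db

   # fixed amount of work: each client runs exactly 100000 transactions,
   # so every run executes the same number of transactions
   # (WAL volume can still vary a bit with checkpoint timing)
   pgbench -c 8 -j 8 -t 100000 bench_db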
regards,
fabio pardi