Re: POC: make mxidoff 64 bits

From: wenhui qiu
Subject: Re: POC: make mxidoff 64 bits
Msg-id: CAGjGUALW4f=r-NJyXqaSbw-HR+=v60Un=89fEttQOwm5Vy0sgQ@mail.gmail.com
In response to: Re: POC: make mxidoff 64 bits (Maxim Orlov <orlovmg@gmail.com>)
List: pgsql-hackers
Hi,
> As a software developer, I definitely want to implement compression and
> save a few gigabytes. However, given my previous experience using
> Postgres in real-world applications, reliability at the cost of several
> gigabytes would not have caused me any trouble. Just saying.
Agreed, +1. If this had been done twenty years ago, the cost might have been unacceptable. But on today's hardware, with disk random and sequential I/O performance and memory capacity both improved by orders of magnitude (single 256 GB DIMMs are now available), this kind of overhead is negligible.


Thanks


On Wed, 3 Dec 2025 at 17:54, Maxim Orlov <orlovmg@gmail.com> wrote:
The biggest problem with compression, in my opinion, is that losing
even one byte can, in the worst case, cause the loss of the entire
compressed block. After all, we still don't have checksums for the
SLRUs, which is a shame in itself.
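
For what it's worth, here is a minimal standalone sketch of that failure
mode, using plain zlib's compress()/uncompress() rather than anything in
the Postgres tree (so it illustrates the risk, not how SLRU compression
would actually be implemented):

/*
 * Flip one byte in the middle of a compressed block and the whole
 * block fails to decompress: every record stored in it is lost,
 * not just the byte that was damaged. Build with: cc demo.c -lz
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int
main(void)
{
    unsigned char src[8192];    /* a page-sized, highly compressible buffer */
    unsigned char comp[16384];
    unsigned char back[8192];
    uLongf comp_len = sizeof(comp);
    uLongf back_len = sizeof(back);

    memset(src, 'x', sizeof(src));

    if (compress(comp, &comp_len, src, sizeof(src)) != Z_OK)
        return 1;
    printf("compressed %zu bytes down to %lu\n",
           (size_t) sizeof(src), (unsigned long) comp_len);

    /* Corrupt a single byte inside the compressed stream. */
    comp[comp_len / 2] ^= 0xFF;

    /* zlib now rejects the entire block (typically Z_DATA_ERROR). */
    int rc = uncompress(back, &back_len, comp, comp_len);
    printf("uncompress after 1-byte corruption: %s\n",
           rc == Z_OK ? "Z_OK" : zError(rc));
    return 0;
}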

Again, I'm not against the idea of compression, but the risks need to
be considered.

As a software developer, I definitely want to implement compression and
save a few gigabytes. However, given my previous experience using
Postgres in real-world applications, reliability at the cost of several
gigabytes would not have caused me any trouble. Just saying.

--
Best regards,
Maxim Orlov.
