Re: [REVIEW] Re: Compression of full-page-writes - Mailing list pgsql-hackers

From Robert Haas
Subject Re: [REVIEW] Re: Compression of full-page-writes
Date
Msg-id CA+TgmoYSJhgBa4CeXVW0JJjodsNgti-8HmC83E6vKzDvTm_qTA@mail.gmail.com
In response to Re: [REVIEW] Re: Compression of full-page-writes  (Rahila Syed <rahilasyed90@gmail.com>)
Responses Re: [REVIEW] Re: Compression of full-page-writes
List pgsql-hackers
On Thu, Jul 3, 2014 at 3:58 PM, Rahila Syed <rahilasyed90@gmail.com> wrote:
> Updated versions of the patches are attached.
> Changes are as follows:
> 1. Improved readability of the code as per the review comments.
> 2. Addition of a block_compression field in the BkpBlock structure to store
> information about compression of the block. This provides for switching
> compression on/off and changing the compression algorithm as required.
> 3. Handling of OOM in the critical section by checking the return value of
> malloc and proceeding without compression of the FPW if the return value is NULL.

So, it seems like you're basically using malloc to work around the
fact that a palloc failure is an error, and we can't throw an error in
a critical section.  I don't think that's good; we want all of our
allocations, as far as possible, to be tracked via palloc.  It might
be a good idea to add a new variant of palloc or MemoryContextAlloc
that returns NULL on failure instead of throwing an error; I've wanted
that once or twice.  But in this particular case, I'm not quite seeing
why it should be necessary - the number of backup blocks per record is
limited to some pretty small number, so it ought to be possible to
preallocate enough memory to compress them all, perhaps just by
declaring a global variable like char wal_compression_space[8192]; or
whatever.
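
To illustrate the preallocation idea, here is a minimal sketch (not actual
backend code; compress_fpw, my_compress, MAX_BKP_BLOCKS, and BLOCK_SIZE are
hypothetical stand-ins).  The scratch space is reserved statically, so
nothing has to be allocated inside the critical section and there is no
failure path to worry about:

    /*
     * Sketch only: the scratch buffers are reserved up front, one slot
     * per possible backup block, so no allocation happens while we hold
     * the critical section.
     */
    #define MAX_BKP_BLOCKS  4       /* backup blocks per WAL record (assumed) */
    #define BLOCK_SIZE      8192    /* matches the default BLCKSZ */

    static char wal_compression_space[MAX_BKP_BLOCKS][BLOCK_SIZE];

    /*
     * Placeholder compressor: returns the compressed length, or -1 if the
     * data did not shrink.  A real implementation would call pglz or
     * similar; this stub never compresses, so callers always fall back.
     */
    static int
    my_compress(const char *src, int srclen, char *dst, int dstcap)
    {
        (void) src;
        (void) srclen;
        (void) dst;
        (void) dstcap;
        return -1;
    }

    /*
     * Try to compress one full-page image into its preallocated slot.
     * Returns the data to emit and sets *len; falls back to the original
     * page when compression does not help.
     */
    static const char *
    compress_fpw(int blockno, const char *page, int pagelen, int *len)
    {
        char   *dst = wal_compression_space[blockno];
        int     clen = my_compress(page, pagelen, dst, BLOCK_SIZE);

        if (clen > 0 && clen < pagelen)
        {
            *len = clen;
            return dst;         /* compressed copy */
        }
        *len = pagelen;
        return page;            /* store the page uncompressed */
    }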

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


