<p><font size="2">> > I definitely think it's worth it, even if it doesn't handle an<br /> > >
inline-compressed datum.<br /> ><br /> > Yeah. I'm not certain how much benefit we could get there anyway.<br />
>If the datum isn't out-of-line then there's a small upper limit on how<br /> > big it can be and hence a small
upper limit on how long it takes to<br /> > decompress. It's not clear that a complicated caching scheme would<br />
>pay for itself.<br /><br /> Well there's a small upper limit per-instance but the aggregate could still be
significant if you have a situation like btree scans which are repeatedly detoasting the same datum. Note that the "inline compressed" case includes packed varlenas which are being copied just to get their alignment right. It would be nice to get rid of that palloc/pfree bandwidth.<br /><br /> I don't really see a way to do this though. If we hook into
the original datum's mcxt we could use the pointer itself as a key. But if the original datum comes from a buffer, that doesn't work.<br /><br /> One thought I had -- which doesn't seem to go anywhere, but I thought was worth mentioning in
case you see a way to leverage it that I don't -- is that if the toast key is already in the cache then deform_tuple could substitute the cached value directly instead of waiting for someone to detoast it. That means we can save all the subsequent trips to the toast cache manager. I'm not sure that would give us a convenient way to know when to unpin the toast cache entry though. It's possible that some code is aware that deform_tuple doesn't allocate anything currently and therefore doesn't set the memory context to anything that will live as long as the data it returns.<br /><br /><br
/>Incidentally, I'm on vacation and reading this via an awful webmail interface. So I'm likely to miss some interesting
stuff for a couple weeks. I suppose the S/N ratio of the list is likely to move but I'm not sure which
direction...</font>
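<p><font size="2">[Editor's note: to make the keying problem above concrete, here is a minimal standalone sketch of a detoast cache keyed by the toast value's identifiers (toast relation OID plus value OID) rather than by the datum pointer, since pointers into shared buffers can't serve as stable keys. All names, the pin/unpin protocol, and the structure are hypothetical illustrations, not PostgreSQL's actual API; eviction is deliberately left unspecified.]</font></p>

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned int Oid;   /* stand-in for PostgreSQL's Oid */

/* One cached, fully-detoasted value. Hypothetical layout. */
typedef struct ToastCacheEntry {
    Oid relid;                    /* toast table the value lives in */
    Oid valueid;                  /* identifier of the toasted value */
    char *data;                   /* private detoasted copy */
    size_t len;
    int pincount;                 /* entry must not be evicted while pinned */
    struct ToastCacheEntry *next; /* hash-bucket chain */
} ToastCacheEntry;

#define TOAST_CACHE_NBUCKETS 64
static ToastCacheEntry *toast_cache[TOAST_CACHE_NBUCKETS];

static unsigned int
toast_cache_hash(Oid relid, Oid valueid)
{
    return (relid * 31u + valueid) % TOAST_CACHE_NBUCKETS;
}

/* Look up a value by its stable identifiers; pins the entry on hit.
 * The caller is responsible for a matching toast_cache_unpin(). */
ToastCacheEntry *
toast_cache_lookup(Oid relid, Oid valueid)
{
    ToastCacheEntry *e = toast_cache[toast_cache_hash(relid, valueid)];
    for (; e != NULL; e = e->next)
    {
        if (e->relid == relid && e->valueid == valueid)
        {
            e->pincount++;
            return e;
        }
    }
    return NULL;
}

/* Insert a freshly detoasted value; returns the entry already pinned. */
ToastCacheEntry *
toast_cache_insert(Oid relid, Oid valueid, const char *data, size_t len)
{
    unsigned int h = toast_cache_hash(relid, valueid);
    ToastCacheEntry *e = malloc(sizeof *e);
    e->relid = relid;
    e->valueid = valueid;
    e->data = malloc(len);
    memcpy(e->data, data, len);
    e->len = len;
    e->pincount = 1;
    e->next = toast_cache[h];
    toast_cache[h] = e;
    return e;
}

/* Drop a pin. When pincount reaches zero the entry becomes evictable;
 * an actual eviction policy (LRU, memory budget) is out of scope here. */
void
toast_cache_unpin(ToastCacheEntry *e)
{
    assert(e->pincount > 0);
    e->pincount--;
}
```

<p><font size="2">[The unpin question raised in the message shows up directly here: every lookup pins, so some caller must know when the returned data is no longer referenced -- exactly what deform_tuple substitution would make hard to track.]</font></p>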