Re: [HACKERS] compression in LO and other fields - Mailing list pgsql-hackers

From Tom Lane
Subject Re: [HACKERS] compression in LO and other fields
Date
Msg-id 25684.942387283@sss.pgh.pa.us
In response to Re: [HACKERS] compression in LO and other fields  (Tatsuo Ishii <t-ishii@sra.co.jp>)
Responses Re: [HACKERS] compression in LO and other fields  (Karel Zak - Zakkr <zakkr@zf.jcu.cz>)
Re: [HACKERS] compression in LO and other fields  (Tatsuo Ishii <t-ishii@sra.co.jp>)
List pgsql-hackers
Tatsuo Ishii <t-ishii@sra.co.jp> writes:
>> LO is a dead end.  What we really want to do is eliminate tuple-size
>> restrictions and then have large ordinary fields (probably of type
>> bytea) in regular tuples.  I'd suggest working on compression in that
>> context, say as a new data type called "bytez" or something like that.

> It sounds ideal, but I remember that Vadim said inserting a 2GB record
> is not a good idea since it will be written into the log too. If it's a
> necessary limitation from the point of view of WAL, we have to accept
> it, I think.

LO won't make that any better: the data still goes into a table.
You'd have 2GB worth of WAL entries either way.

The only thing LO would do for you is divide the data into block-sized
tuples, so there would be a bunch of little WAL entries instead of one
big one.  But that'd probably be easy to duplicate too.  If we implement
big tuples by chaining together disk-block-sized segments, which seems
like the most likely approach, couldn't WAL log each segment as a
separate log entry?  If so, there's almost no difference between LO and
an inline field for logging purposes.
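For what it's worth, here is a minimal standalone C sketch of that
segment-chaining idea: a large value is split into disk-block-sized
chunks, each of which would be stored as its own tuple and hence show
up as its own small log entry rather than one huge one.  SEGMENT_SIZE,
the Segment struct, and store_segment()/store_large_value() are
made-up names for illustration, not anything in the backend.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define SEGMENT_SIZE 8192       /* hypothetical disk-block-sized chunk */

    /* One chained segment of a large field.  In a real backend each
     * segment would be inserted as a separate tuple, so WAL would see
     * one small record per segment instead of one record for the
     * entire value. */
    typedef struct
    {
        unsigned int value_id;      /* identifies the original large value */
        unsigned int seq_no;        /* position of this chunk in the chain */
        size_t       len;           /* bytes actually used in data[] */
        char         data[SEGMENT_SIZE];
    } Segment;

    /* Stand-in for "insert tuple and emit its log entry". */
    static void
    store_segment(const Segment *seg)
    {
        printf("value %u, segment %u: %zu bytes -> one log entry\n",
               seg->value_id, seg->seq_no, seg->len);
    }

    /* Split a large value into block-sized segments and store each one. */
    static void
    store_large_value(unsigned int value_id, const char *value, size_t total)
    {
        size_t       offset = 0;
        unsigned int seq = 0;

        while (offset < total)
        {
            Segment seg;
            size_t  chunk = total - offset;

            if (chunk > SEGMENT_SIZE)
                chunk = SEGMENT_SIZE;

            seg.value_id = value_id;
            seg.seq_no = seq++;
            seg.len = chunk;
            memcpy(seg.data, value + offset, chunk);

            store_segment(&seg);
            offset += chunk;
        }
    }

    int
    main(void)
    {
        size_t total = 3 * SEGMENT_SIZE + 100; /* spans several blocks */
        char  *value = malloc(total);

        if (value == NULL)
            return 1;
        memset(value, 'x', total);
        store_large_value(1, value, total);
        free(value);
        return 0;
    }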
        regards, tom lane

