Tom Lane wrote:
> Tatsuo Ishii <t-ishii@sra.co.jp> writes:
> >> LO is a dead end. What we really want to do is eliminate tuple-size
> >> restrictions and then have large ordinary fields (probably of type
> >> bytea) in regular tuples. I'd suggest working on compression in that
> >> context, say as a new data type called "bytez" or something like that.
>
> > It sounds ideal but I remember that Vadim said inserting a 2GB record
> > is not good idea since it will be written into the log too. If it's a
> > necessary limitation from the point of view of WAL, we have to accept
> > it, I think.
>
> LO won't make that any better: the data still goes into a table.
> You'd have 2GB worth of WAL entries either way.
>
> The only thing LO would do for you is divide the data into block-sized
> tuples, so there would be a bunch of little WAL entries instead of one
> big one. But that'd probably be easy to duplicate too. If we implement
> big tuples by chaining together disk-block-sized segments, which seems
> like the most likely approach, couldn't WAL log each segment as a
> separate log entry? If so, there's almost no difference between LO and
> inline field for logging purposes.
>
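[For illustration only, a minimal standalone sketch of the segment-per-entry idea described above: one large value is chained into disk-block-sized pieces and each piece is logged separately. wal_log_segment() and store_big_value() are made-up stand-ins, not real backend routines.]

    /* Sketch: split one large field value into block-sized segments,
     * each of which would become its own (small) WAL entry.
     * wal_log_segment() is a hypothetical stand-in for the real
     * WAL-insert routine. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLCKSZ 8192            /* PostgreSQL's default disk block size */

    /* Hypothetical: pretend to emit one WAL record for one segment. */
    static void wal_log_segment(int seqno, const char *data, size_t len)
    {
        printf("WAL entry: segment %d, %zu bytes\n", seqno, len);
        (void) data;
    }

    /* Chain a large value into block-sized segments, logging each one. */
    static void store_big_value(const char *value, size_t total)
    {
        size_t offset = 0;
        int    seqno = 0;

        while (offset < total)
        {
            size_t chunk = total - offset;

            if (chunk > BLCKSZ)
                chunk = BLCKSZ;
            wal_log_segment(seqno++, value + offset, chunk);
            offset += chunk;
        }
    }

    int main(void)
    {
        size_t  total = 3 * BLCKSZ + 1000;   /* a value larger than one block */
        char   *value = malloc(total);

        memset(value, 'x', total);
        store_big_value(value, total);       /* several small entries, not one big one */
        free(value);
        return 0;
    }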
I don't know LO well, but it seems LO allows partial updates, whereas
big inline tuples would not. If so, isn't that a significant difference?
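[A rough libpq sketch of the kind of partial update LO supports: only a slice of the stored object is rewritten in place via lo_lseek()/lo_write(). The connection string and the large-object OID are placeholders, and error checking is trimmed.]

    /* Sketch: overwrite a few bytes in the middle of an existing large
     * object without rewriting the rest of it. */
    #include <stdio.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>     /* INV_READ / INV_WRITE */

    int main(void)
    {
        PGconn    *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */
        Oid        lobj = 12345;                        /* placeholder OID of an existing LO */
        int        fd;
        const char newbytes[] = "patched";

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* Large-object operations must run inside a transaction. */
        PQclear(PQexec(conn, "BEGIN"));

        fd = lo_open(conn, lobj, INV_WRITE);
        lo_lseek(conn, fd, 1024 * 1024, SEEK_SET);          /* seek 1MB into the object */
        lo_write(conn, fd, newbytes, sizeof(newbytes) - 1); /* overwrite a few bytes in place */
        lo_close(conn, fd);

        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }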
Regards.
Hiroshi Inoue
Inoue@tpf.co.jp