Re: Avoiding io penalty when updating large objects - Mailing list pgsql-hackers

From Alvaro Herrera
Subject Re: Avoiding io penalty when updating large objects
Date
Msg-id 20050629025317.GC7196@surnet.cl
In response to Avoiding io penalty when updating large objects  (Mark Dilger <pgsql@markdilger.com>)
Responses Re: [GENERAL] Avoiding io penalty when updating large objects
List pgsql-hackers
On Tue, Jun 28, 2005 at 07:38:43PM -0700, Mark Dilger wrote:
> I would like to write a postgres extension type which represents a btree of
> data and allows me to access and modify elements within that logical btree.
> Assume the type is named btree_extension, and I have the table:
>
> CREATE TABLE example (
>     a   TEXT,
>     b   TEXT,
>     c   BTREE_EXTENSION,
>     UNIQUE(a,b)
> );
>
> If, for a given row, the value of c is, say, approximately 2^30 bytes
> large, then I would expect it to be divided up into 8K chunks in an
> external table, and I should be able to fetch individual chunks of that
> object (by offset) rather than having to detoast the whole thing.

I don't think you can do this with the TOAST mechanism.  The problem is
that there's no API that lets you operate on only certain chunks of a
toasted value.  You can do it with large objects, though -- the ones you
create with lo_creat().  You can do lo_lseek(), lo_read() and lo_write()
as you see fit, which lets you modify the LO in chunks rather than
rewriting the whole thing.
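For reference, a minimal sketch of what that chunk-wise access looks like
through the libpq client interface (not from the original message; it
assumes a database reachable via the usual libpq environment variables,
and the offsets and chunk size are illustrative):

    /*
     * Sketch: create a large object and read/write an 8 kB chunk at a
     * given byte offset, without touching the rest of the object.
     * Large-object calls must run inside a transaction.
     */
    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>     /* INV_READ, INV_WRITE */

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("");   /* settings from environment */
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        PQclear(PQexec(conn, "BEGIN"));

        /* Create a new large object and open it for read/write. */
        Oid lobj = lo_creat(conn, INV_READ | INV_WRITE);
        int fd = lo_open(conn, lobj, INV_READ | INV_WRITE);

        /* Overwrite one 8 kB chunk at offset 5 * 8192. */
        char chunk[8192];
        memset(chunk, 'x', sizeof(chunk));
        lo_lseek(conn, fd, 5 * 8192, SEEK_SET);
        lo_write(conn, fd, chunk, sizeof(chunk));

        /* Read the same chunk back. */
        lo_lseek(conn, fd, 5 * 8192, SEEK_SET);
        char buf[8192];
        int nread = lo_read(conn, fd, buf, sizeof(buf));
        printf("read %d bytes back\n", nread);

        lo_close(conn, fd);
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }

The large object's Oid would then be stored in a regular table column in
place of the BTREE_EXTENSION value sketched in the original question.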

--
Alvaro Herrera (<alvherre[a]surnet.cl>)
"No hay hombre que no aspire a la plenitud, es decir,
la suma de experiencias de que un hombre es capaz"
