"Michael A. Olson" wrote:
> You get another benefit from Berkeley DB -- we eliminate the 8K limit
> on tuple size. For large records, we break them into page-sized
> chunks for you, and we reassemble them on demand. Neither PostgreSQL
> nor the user needs to worry about this, it's a service that just works.
>
> A single record or a single key may be up to 4GB in size.
That's certainly nice. But if you don't access a BIG column, do you still have to
retrieve the whole record? A very nice property of the Postgres TOAST design
is that you don't. You can have...
CREATE TABLE image (name TEXT, size INTEGER, giganticTenMegImage GIF);
As long as you don't select the huge column you don't lift it off disk.
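For example (a rough sketch against the table above; the GIF column type is just illustrative, not a real Postgres type):

```sql
-- Reads only the main tuple; the TOASTed image value stays out-of-line on disk:
SELECT name, size FROM image;

-- Only a query that touches the big column actually fetches and detoasts it:
SELECT giganticTenMegImage FROM image WHERE name = 'logo';
```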
That's pretty nice. In other databases I've had to do some annoying
refactoring of data models to avoid dragging large values off disk unnecessarily.