Re: TOAST usage setting - Mailing list pgsql-hackers

From Gregory Stark
Subject Re: TOAST usage setting
Date
Msg-id 874plvpsat.fsf@oxford.xeocode.com
Whole thread Raw
In response to Re: TOAST usage setting  (Bruce Momjian <bruce@momjian.us>)
Responses Re: TOAST usage setting
Re: TOAST usage setting
List pgsql-hackers
"Bruce Momjian" <bruce@momjian.us> writes:

> Gregory Stark wrote:
>> "Bruce Momjian" <bruce@momjian.us> writes:
>> 
>> > I tested TOAST using a method similar to the above method against CVS
>> > HEAD, with default shared_buffers = 32MB and no assert()s.  I created
>> > backends with power-of-2 settings for TOAST_TUPLES_PER_PAGE (4(default),
>> > 8, 16, 32, 64) which gives TOAST/non-TOAST breakpoints of 2k(default),
>> > 1k, 512, 256, and 128, roughly.
>> >
>> > The results are here:
>> >
>> >     http://momjian.us/expire/TOAST/
>> >
>> > Strangely, 128 bytes seems to be the break-even point for TOAST and
>> > non-TOAST, even for sequential scans of the entire heap touching all
>> > long row values.  I am somewhat confused why TOAST has faster access
>> > than inline heap data.

Is your database initialized with C locale? If so then length(text) is
optimized to not have to detoast:

    if (pg_database_encoding_max_length() == 1)
        PG_RETURN_INT32(toast_raw_datum_size(str) - VARHDRSZ);

Also, I think you have to run this for small datasets like you have, as well
as for large data sets where the random access seek time of TOAST will really hurt.

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com


