Hi hackers,
Recently, in a discussion, a user complained [1] about the
counterintuitive behavior of toast_tuple_target. Here is a quote:
"""
Table size 177.74 GB
Toast table size 12 GB
Indexes size 33.49 GB
This table is composed of small columns "id", "hash", "size", and a
mid~big (2~512kb) jsonb.
I don't want to be forced to read the big column when doing seq scans,
so I tried to set toast_tuple_target = 128, to exclude the big column,
but even after a VACUUM FULL i couldn't get pg to toast the big
column. Am I doing something wrong?
"""
Arguably, in this case the user may actually want to move the JSONB
values into a separate table and reference them by a foreign key.
However, the user has a good point: setting toast_tuple_target
< TOAST_TUPLE_THRESHOLD effectively does nothing. This happens because
[2]:
"""
The TOAST management code is triggered only when a row value to be
stored in a table is wider than TOAST_TUPLE_THRESHOLD bytes (normally
2 kB). The TOAST code will compress and/or move field values
out-of-line until the row value is shorter than toast_tuple_target
bytes (also normally 2 kB, adjustable) or no more gains can be had.
"""
... i.e. TOAST is _triggered_ only when the tuple exceeds
TOAST_TUPLE_THRESHOLD, and only then shrinks it down to
toast_tuple_target bytes. A tuple that never exceeds the threshold is
never toasted, no matter how small toast_tuple_target is. This is
indeed somewhat confusing.
I see several ways of solving this.
1. Forbid setting toast_tuple_target < TOAST_TUPLE_THRESHOLD
2. Use something like RelationGetToastTupleTarget(rel,
TOAST_TUPLE_THRESHOLD) as the trigger condition in heapam.c:2250,
heapam.c:3625 and rewriteheap.c:636, and update the documentation
accordingly.
3. Add a separate per-table setting, toast_tuple_threshold, similar
to toast_tuple_target.
Thoughts?
[1]: https://t.me/pg_sql/62265
[2]: https://www.postgresql.org/docs/current/storage-toast.html
--
Best regards,
Aleksander Alekseev