As I'm sure many people know, check_postgres.pl has a wonderful (if rather arcane) query to check table bloat, which has been copied all over the intarwebz. When I run this query on one of my databases, it tells me that a table which has had no deletes is wasting a whole lot of bytes, yet no amount of vacuuming or even clustering makes it any less bloated. I suspect this comes down to how the table bloat query calculates things; would a schema like this confuse it? (Two cross-checks I'd like a sanity check on are sketched after the schema.)
    Column    |            Type             |                            Modifiers
--------------+-----------------------------+------------------------------------------------------------------
 id           | bigint                      | not null default nextval('canvas.failed_jobs_id_seq'::regclass)
 priority     | integer                     | default 0
 attempts     | integer                     | default 0
 handler      | character varying(512000)   |
 original_id  | bigint                      |
 last_error   | text                        |
 queue        | character varying(255)      |
 run_at       | timestamp without time zone |
 locked_at    | timestamp without time zone |
 failed_at    | timestamp without time zone |
 locked_by    | character varying(255)      |
 created_at   | timestamp without time zone |
 updated_at   | timestamp without time zone |
 tag          | character varying(255)      |
 max_attempts | integer                     |
 strand       | character varying(255)      |
I'm wondering mostly about that handler column.
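
As I understand it, the bloat query derives its expected row width from pg_stats (null_frac and avg_width per column), so the first thing I'd look at is what the statistics claim the wide columns average. A minimal sketch, assuming the table is canvas.failed_jobs (I'm inferring the name from the id default):

-- what the planner statistics say about the wide columns;
-- the bloat query works from these numbers, not the actual heap
SELECT attname, null_frac, avg_width
  FROM pg_stats
 WHERE schemaname = 'canvas'
   AND tablename  = 'failed_jobs'
   AND attname IN ('handler', 'last_error');

If handler's values are mostly compressed or pushed out to TOAST, I could imagine a width-based estimate of what the heap "should" occupy drifting a long way from reality.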
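
And for a ground truth that doesn't depend on statistics at all, the pgstattuple contrib module walks the table and reports dead space and free space directly. Again a sketch, assuming the same table name and a Postgres new enough for CREATE EXTENSION (9.1+):

-- exact, statistics-free accounting of the heap (scans the whole table)
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('canvas.failed_jobs');

If dead_tuple_percent and free_percent come back small there while the bloat check still reports a pile of wasted bytes, that would pretty much confirm the check is misestimating rather than the table actually being bloated.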