Re: json datatype and table bloat? - Mailing list pgsql-general

From Chris Travers
Subject Re: json datatype and table bloat?
Date
Msg-id CAKt_Zfu5WF+ByzvEuuDPB7LJ_qRrfp2jJz8BBh6UpX7as=KxtQ@mail.gmail.com
In response to json datatype and table bloat?  (Gregory Haase <haaseg@onefreevoice.com>)
Responses Re: json datatype and table bloat?  (Merlin Moncure <mmoncure@gmail.com>)
List pgsql-general



On Mon, Oct 28, 2013 at 4:17 PM, Gregory Haase <haaseg@onefreevoice.com> wrote:
I have a table that is triggering my nagios database bloat alert regularly. Usually I have to run VACUUM FULL more than once to get it under the threshold. Today I tried repeatedly and cannot get the alert to resolve.

I had a discussion with one of the primary developers about how the table is utilized, and it turns out they are basically only ever inserting into it. This sort of flies in the face of conventional wisdom about bloat being caused by frequent updates and deletes.

As I understand it, vacuuming only reclaims tuples made dead by DELETE and UPDATE operations.

The question is how the data is being inserted and whether anything ever updates the rows in any way.  Maybe this is a manual process?  But for 5x bloat, it has to be a repeated process.

Maybe there was bad data that had to be corrected? 
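One quick way to settle whether the workload is really insert-only is to look at the cumulative tuple counters Postgres keeps per table. A minimal sketch, assuming the table is named event_log (a hypothetical name):

```sql
-- Cumulative activity since the last statistics reset.  Nonzero
-- n_tup_upd or n_tup_del would explain dead-tuple bloat despite a
-- supposedly insert-only workload; n_dead_tup shows what VACUUM
-- currently has to reclaim.
SELECT relname, n_tup_ins, n_tup_upd, n_tup_hot_upd, n_tup_del, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'event_log';
```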

We were looking at it, and one of the things that struck me is that this table has a column with a json datatype. I looked through information_schema.columns and there is only one other table with a json datatype, and I recall having bloat issues with this table in the past as well.

I'm wondering if the json datatype is just naturally more bloated than other types, or if the query in the check_postgres.pl nagios script is not accurate, or if maybe my thresholds are simply too low?
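Worth noting that the check_postgres bloat check estimates bloat from pg_class and column statistics rather than measuring it, so wide json columns with unusual statistics can skew it. If the pgstattuple contrib module is available, it scans the table and reports exact figures. A sketch, again with a hypothetical table name:

```sql
-- pgstattuple (contrib) walks the whole table and returns exact
-- live/dead tuple counts and free space, unlike the estimate the
-- nagios check derives from pg_class and pg_stats.
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('event_log');
-- Output columns include table_len, tuple_count, dead_tuple_count,
-- dead_tuple_percent, and free_percent.
```

Comparing dead_tuple_percent and free_percent from pgstattuple against the nagios estimate would show whether the check itself is the problem.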

Next time this happens it would be worth seeing what the VACUUM FULL VERBOSE output is for that table.
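For example (table name hypothetical):

```sql
-- VERBOSE reports, per table, how many removable and nonremovable
-- row versions were found, which shows whether dead tuples are
-- actually accumulating or the space is going somewhere else.
VACUUM FULL VERBOSE event_log;
```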
 
--
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor lock-in.
