"Logan Bowers" <logan@zillow.com> writes:
> In my case, the "raw" data is on the order of hundreds of gigabytes and
> the increased write activity is a HUGE penalty.

And you think the extra activity from repeated clog tests would not be a
huge penalty?

AFAICS this would only be likely to be a win if you were sure that no
row would be visited more than once before you drop (or truncate) the
containing table. Which leads me to wonder why you inserted the row
into the database in the first place, instead of doing the data
aggregation on the client side.
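
A minimal sketch of the client-side aggregation idea, assuming the raw
rows reduce to per-key totals (the data, key names, and reduction are
hypothetical, not from this thread): accumulate in client memory and
INSERT only the handful of aggregated rows, so the hundreds of gigabytes
of raw data never hit the table at all.

```python
from collections import defaultdict

def aggregate(events):
    """Sum values per key in client memory instead of
    inserting one raw row per event into the database."""
    totals = defaultdict(int)
    for key, value in events:
        totals[key] += value
    return dict(totals)

# Hypothetical raw event stream; only the aggregated
# result would be INSERTed.
raw = [("page_a", 1), ("page_b", 3), ("page_a", 2)]
print(aggregate(raw))  # {'page_a': 3, 'page_b': 3}
```

The write amplification then scales with the number of distinct keys
rather than the number of raw events.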
regards, tom lane