> "Pavel Rotek" <pavel.rotek@gmail.com> writes:
>> No, I do not mean long-running transactions... The update of a log entry
>> (an update of a row in dataaction) is performed as a series of short
>> transactions, but within each short transaction there are many repeated
>> change log value, flush, change log value, flush ... change log value,
>> flush steps (where flush means a flush operation via the JDBC driver).
>> I'm not sure whether this flushing affects toast space... Maybe this is
>> the reason.
On Fri, 7 Mar 2008, Tom Lane wrote:
> You mean that you build up the 5MB log entry by adding a few lines at a
> time? That's going to consume horrid amounts of toast space, because
> each time you add a few lines, an entire new toasted field value is
> created.
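
If I've understood that right, the current code is doing something roughly
like this hypothetical JDBC sketch (the table name dataaction is from your
message; the log and id columns, actionId and chunks are just my guesses):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    // Sketch of the pattern described above: every flush rewrites the whole
    // toasted log value, leaving another dead multi-megabyte copy behind.
    static void appendWithFlushes(Connection conn, long actionId,
                                  List<String> chunks) throws SQLException {
        StringBuilder log = new StringBuilder();
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE dataaction SET log = ? WHERE id = ?")) {
            for (String chunk : chunks) {
                log.append(chunk);
                ps.setString(1, log.toString());
                ps.setLong(2, actionId);
                ps.executeUpdate();   // a complete new toast value each time
            }
        }
        conn.commit();
    }
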
Moreover, what is the point of flushing data to Postgres without
committing the transaction if you're only going to overwrite the data
later? You don't get any level of protection for your data just by
flushing it to Postgres - you have to commit the transaction for that to
happen. In my opinion, you should generate the entire log entry in
memory and then write it out in a single committed transaction once it's
finished, since you're obviously holding it all in memory the whole time
anyway.
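
Something along these lines, say (again only a sketch, with the same
assumed names and imports as above):

    // Build the whole entry in memory, then write it and commit once.
    static void writeWholeEntry(Connection conn, long actionId,
                                List<String> chunks) throws SQLException {
        StringBuilder log = new StringBuilder();
        for (String chunk : chunks) {
            log.append(chunk);              // accumulate in memory only
        }
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE dataaction SET log = ? WHERE id = ?")) {
            ps.setString(1, log.toString());
            ps.setLong(2, actionId);
            ps.executeUpdate();             // one update, one new toast value
        }
        conn.commit();                      // only now is the data durable
    }

That creates one new toast value per log entry instead of one per flush.
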
> If you have to do it that way, you'll need very frequent vacuums on this
> table (not vacuum full, as noted already) to keep the toast space from
> bloating too much. And make sure you've got max_fsm_pages set high
> enough.
Agreed, this is kind of the worst-case scenario for table bloat.
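
If you do have to keep the incremental updates, the plain VACUUM can even
be kicked off from the application - hypothetical sketch again (needs
java.sql.Statement as well; note that VACUUM cannot run inside a
transaction block, so autocommit has to be on for that statement):

    // Reclaim the dead heap and toast space left behind by the rewrites.
    // Plain VACUUM only - not VACUUM FULL, as noted above.
    static void vacuumDataaction(Connection conn) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(true);     // VACUUM must run outside a transaction
        try (Statement st = conn.createStatement()) {
            st.execute("VACUUM dataaction");
        }
        conn.setAutoCommit(oldAutoCommit);
    }

max_fsm_pages itself is set in postgresql.conf, so that part has to happen
on the server side rather than from JDBC.
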
Matthew
--
Now the reason people powdered their faces back then was to change the values
"s" and "n" in this equation here. - Computer science lecturer