Re: BUG #5196: Excessive memory consumption when using csvlog - Mailing list pgsql-bugs

From Thomas Poindessous
Subject Re: BUG #5196: Excessive memory consumption when using csvlog
Date
Msg-id 1e0e09af0911182159g5d87b27rcd3197cd49980beb@mail.gmail.com
In response to Re: BUG #5196: Excessive memory consumption when using csvlog  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-bugs
Hi,

For the CSV output, we have a 750 MB logfile. But on another site, we
have a 1.6 GB logfile, and the logger process was using more than 3 GB
of RAM.

Even with our configuration (log collector, silent mode and
csv/stderr), we launch the postgresql daemon like this:
pg_ctl -l ${HOME}/pgsql/logs/postgres.log start

so we have three logfiles:

postgresql.log (always empty)
postgresql-YYYY-MM-DD.csv (big file if set to csvlog)
postgresql-YYYY-MM-DD.log (always empty if set to csvlog)

Thanks.

2009/11/19 Tom Lane <tgl@sss.pgh.pa.us>:
> "Poindessous Thomas" <thomas@poindessous.com> writes:
>> we have a weird bug. When using csvlog instead of stderr, the postgres
>> logger process uses a lot of memory. We even had an OOM error with kernel.
>
> I poked at this a bit and noted that if only one of the two possible
> output files is rotated, logfile_rotate() leaks a copy of the other
> file's name.  At the default settings this would only amount to one
> filename string for every 10MB of output ... how much log output
> does your test scenario generate?
>
>                        regards, tom lane
>
