On Thursday 30 October 2008, Tomáš Szépe wrote:
> > A pg_dump run is comparatively short-lived, so if Zdenek is right then
> > there's no important leak here -- we're counting on program exit to
> > release the memory. There's probably little point in releasing things
> > earlier than that.
>
> Well, I'd tend to consider any logical part of a program that fails to
> release the memory it uses to be bad coding practice. You never know
> when you're going to need to shuffle things around, change the context
> of the code in a way that makes it long-lived, in turn causing the leak
> to become a real problem. Also, don't you like seeing the free()s paired
> to their malloc()s in a way that makes the allocations intuitively
> correct? :)
Unfreed memory is not the same as leaked memory. Leaked memory is unfreed
memory which should have been freed, and sometimes it's best not to free at
all. I've done it purposefully at times. Once I had to build a HUGE tree of
data just to print some reports. I alloced the nodes incrementally (by
carving small chunks from big malloced blocks, but that was an
optimization), finished input, wrote the output and just called exit(),
because I knew the OS would free the memory. Keeping track of all the
malloced blocks just to free them before exiting to the OS wastes memory,
and freeing them wastes time. The OS guarantees on freeing memory, closing
files and other automatic resource releasing are there to be used when
needed. Similarly, nearly nobody bothers to fclose() stdin/out/err or
close 0/1/2; the OS will do it if needed. And many malloc implementations
grow the data segment using sbrk() but don't reduce it to the minimum when
exiting to the OS.
Francisco Olarte.