Julien Rouhaud <rjuju123@gmail.com> writes:
> Here the amount of leaked memory is likely to be very small (I've never heard
> of people having thousands of text search templates or parsers), and pg_dump
> isn't a long-lived process, so probably no one thought it was worth the extra
> code to free that memory, which I agree with.
Yeah. For context,
$ valgrind pg_dump regression >/dev/null
reports
==59330== HEAP SUMMARY:
==59330==     in use at exit: 3,909,248 bytes in 70,831 blocks
==59330==   total heap usage: 546,364 allocs, 475,533 frees, 34,377,280 bytes allocated
==59330==
==59330== LEAK SUMMARY:
==59330==    definitely lost: 6 bytes in 6 blocks
==59330==    indirectly lost: 0 bytes in 0 blocks
==59330==      possibly lost: 0 bytes in 0 blocks
==59330==    still reachable: 3,909,242 bytes in 70,825 blocks
==59330==         suppressed: 0 bytes in 0 blocks
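To make those categories concrete: "definitely lost" means no pointer to
the block exists anymore at exit, while "still reachable" blocks are still
pointed to, typically allocations that are deliberately never freed. A
minimal standalone sketch (not pg_dump code; the strings are made up) that
produces one block of each kind:

    #include <stdlib.h>
    #include <string.h>

    static char *keep;          /* still pointed-to at exit */

    int
    main(void)
    {
        /*
         * Discard the only pointer to this block without freeing it;
         * valgrind classifies it as "definitely lost".
         */
        (void) strdup("oops");

        /*
         * This block remains referenced by a global at exit, so valgrind
         * files it under "still reachable" rather than as a true leak.
         */
        keep = strdup("fine");

        return 0;
    }

$ gcc -g leakdemo.c -o leakdemo
$ valgrind --leak-check=full ./leakdemo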
So as long as you're willing to take the regression database as
typical, pg_dump does not have any interesting leak problem.
There are certainly things that this simple test doesn't reach
--- publications/subscriptions are another area besides custom
text search configurations --- but it seems unlikely that a
database would have enough of any of those to justify much worry
about whether those code paths leak anything.
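If someone does want to probe one of those paths, it's easy enough to seed
a scratch database first, e.g. for publications (a sketch; the database and
object names are invented):

$ createdb pubtest
$ psql -d pubtest -c "CREATE TABLE t (a int); CREATE PUBLICATION p1 FOR TABLE t;"
$ valgrind --leak-check=full pg_dump pubtest >/dev/null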
regards, tom lane