On Fri, Feb 10, 2023 at 09:23:11PM +0100, Pavel Stehule wrote:
> On Fri, Feb 10, 2023 at 9:18 PM, Andres Freund <andres@anarazel.de> wrote:
> >
> > On 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:
> > > Just a small note - I executed VACUUM ANALYZE on one customer's database,
> > > and I had to cancel it after a few hours because the process had grown to
> > > more than 20GB of RAM (almost all physical RAM).
> >
> > Just to make sure: You're certain this was an actual memory leak, not just
> > vacuum ending up having referenced all of shared_buffers? Unless you use huge
> > pages, RSS increases over time as a process touches more and more pages in
> > shared memory. Of course that couldn't explain rising above
> > shared_buffers + overhead.
> >
> > > The memory leak is probably not too big. This database is a little bit
> > > unusual: it has more than 1 800 000 tables and the same number of
> > > indexes.
> >
> > If you have 1.8 million tables in a single database, what you saw might just
> > have been the size of the relation and catalog caches.
>
> That could be it.

Well, how big was shared_buffers on that instance?
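
In case it helps to tell those two cases apart before re-running it, here is a
rough sketch (assuming Linux with /proc/<pid>/smaps_rollup available, i.e.
kernel 4.14 or later; pass the VACUUM backend's pid on the command line) that
splits the backend's RSS into shared and private parts.  If it's mostly the
shared part that grows, that's just shared_buffers pages being touched; if the
private part is what climbs toward 20GB, it's backend-local memory (caches, or
a real leak).

#!/usr/bin/env python3
# Split a backend's RSS into private vs. shared parts, so that growth from
# touching shared_buffers pages can be told apart from backend-local memory.
import sys

def smaps_rollup(pid):
    """Return the /proc/<pid>/smaps_rollup counters, in kB."""
    counters = {}
    with open("/proc/%d/smaps_rollup" % pid) as f:
        for line in f:
            # Counter lines look like "Rss:   123456 kB"; skip the header line.
            if line.rstrip().endswith("kB"):
                key, value, _unit = line.split()
                counters[key.rstrip(":")] = int(value)
    return counters

if __name__ == "__main__":
    c = smaps_rollup(int(sys.argv[1]))
    shared = c.get("Shared_Clean", 0) + c.get("Shared_Dirty", 0)
    private = c.get("Private_Clean", 0) + c.get("Private_Dirty", 0)
    print("RSS total: %8d MB" % (c.get("Rss", 0) // 1024))
    print("  shared : %8d MB  (shared memory pages this backend touched)" % (shared // 1024))
    print("  private: %8d MB  (backend-local allocations)" % (private // 1024))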
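
And if it does turn out to be backend-local memory and the server is v14 or
newer, pg_log_backend_memory_contexts() can show whether CacheMemoryContext
(relcache/catcache) is really what's eating it.  A rough sketch, assuming
psycopg2 and sufficient privileges; the conninfo and pid arguments are
placeholders:

#!/usr/bin/env python3
# Ask a running backend to dump its memory contexts to the server log; with
# ~1.8M relations, the relcache/catcache entries show up under
# CacheMemoryContext in that output.
import sys
import psycopg2

def dump_memory_contexts(conninfo, pid):
    with psycopg2.connect(conninfo) as conn:
        with conn.cursor() as cur:
            # Available since PostgreSQL 14; writes one LOG line per context.
            cur.execute("SELECT pg_log_backend_memory_contexts(%s)", (pid,))
            print(cur.fetchone()[0])

if __name__ == "__main__":
    # e.g.: ./dump_contexts.py "dbname=postgres" 12345
    dump_memory_contexts(sys.argv[1], int(sys.argv[2]))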
--
Justin