Thread: shared buffers
hello,
I've run into something strange:
Two instances of the same database, i.e. the same data, around 1.2 TB: one on pg13, one on pg16.
Both have 16 GB of shared_buffers.
I am the only user.
Both have track_io_timing on.
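(For completeness, the two settings as a plain SHOW reports them on both instances:)

SHOW shared_buffers;     -- 16GB on both
SHOW track_io_timing;    -- on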
On pg13, if I run a big query with EXPLAIN (ANALYZE, BUFFERS),
I see around 6 GB read.
If I rerun the very same query, there are no more reads; all the data is in the shared buffers cache. Fine.
If I check what's in the cache with pg_buffercache, I see the biggest tables of my query among the biggest users (in number of blocks used). All of this is fine.
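In case it helps, the pg_buffercache check is roughly of this shape (not the exact query, just the usual per-relation aggregation):

-- how many 8 kB buffers each relation currently occupies in shared_buffers
SELECT c.relname,
       count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 20;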
Next, if I do the very same thing on the pg16 machine, no matter how many times I rerun the EXPLAIN (ANALYZE, BUFFERS) of the same query, the explain shows the same volume of reads each time, again and again.
If I check with pg_buffercache, the set of objects stays the same, WITHOUT the objects of my query, just as if those objects were sticky.
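Concretely, what I'm comparing is the Buffers line that EXPLAIN prints for the plan, along these lines (numbers illustrative only; 6 GB is roughly 786,000 8 kB buffers):

EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;   -- placeholder for the big query
-- pg13, second run:  Buffers: shared hit=786432
-- pg16, every run:   Buffers: shared hit=12345 read=774000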
Any idea?
Thanks
Sorry,
'someone' has been launching batch jobs without telling me.
On Sat, Apr 26, 2025 at 12:46 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:
On Fri, 2025-04-25 at 15:42 +0200, Marc Millas wrote:
> I've run into something strange:
> Two instances of the same database, i.e. the same data, around 1.2 TB: one on pg13, one on pg16.
> Both have 16 GB of shared_buffers.
> I am the only user.
> Both have track_io_timing on.
>
> On pg13, if I run a big query with EXPLAIN (ANALYZE, BUFFERS),
> I see around 6 GB read.
> If I rerun the very same query, there are no more reads; all the data is in the shared buffers cache. Fine.
> If I check what's in the cache with pg_buffercache, I see the biggest tables of my query among
> the biggest users (in number of blocks used). All of this is fine.
>
> Next, if I do the very same thing on the pg16 machine, no matter how many times I rerun the
> EXPLAIN (ANALYZE, BUFFERS) of the same query, the explain shows the same volume of reads each
> time, again and again.
> If I check with pg_buffercache, the set of objects stays the same, WITHOUT the objects of my
> query, just as if those objects were sticky.
I can't see the plans, so I can only guess.
Perhaps the v16 plan uses a sequential scan on a table that is more than a quarter of
shared_buffers in size, so that PostgreSQL uses a ring buffer to read it instead of
blowing out more than a quarter of its buffer cache.
Yours,
Laurenz Albe
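For the record, a quick way to see which tables in the current database exceed that quarter-of-shared_buffers threshold Laurenz mentions (a rough sketch, assuming the default 8 kB block size):

-- shared_buffers is reported in pg_settings as a number of 8 kB blocks,
-- so setting * 8192 is its size in bytes; past a quarter of that,
-- a sequential scan goes through a small ring buffer rather than
-- filling up the main cache.
SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
WHERE c.relkind = 'r'   -- ordinary tables
  AND pg_relation_size(c.oid) >
      (SELECT setting::bigint * 8192 / 4
       FROM pg_settings
       WHERE name = 'shared_buffers')
ORDER BY pg_relation_size(c.oid) DESC;

With 16 GB of shared_buffers, that threshold is 4 GB.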