Re: Possible explanations for catastrophic performance deterioration? - Mailing list pgsql-performance

From Gregory Stark
Subject Re: Possible explanations for catastrophic performance deterioration?
Date
Msg-id 874phlugn8.fsf@oxford.xeocode.com
In response to Re: Possible explanations for catastrophic performance deterioration?  (Carlos Moreno <moreno_pg@mochima.com>)
List pgsql-performance
"Carlos Moreno" <moreno_pg@mochima.com> writes:

> I'm now thinking that the problem with my logic is that the system does
> not keep anything in memory (or not all tuples, in any case), since it
> is only counting, so it does not *have to* keep them

That's really not how it works. When Postgres talks to the OS, it's just
bits: there is no cache of rows or values or anything at a higher level than
raw pages. Neither the OS's filesystem cache nor Postgres's shared buffers
knows the difference between live rows, dead rows, or even pages that contain
no rows at all.
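You can see the page-level nature of the cache directly, assuming the
pg_buffercache contrib module is installed (the table names in the output are
whatever happens to be cached on your system, of course):

```sql
-- Count how many 8kB shared-buffer pages each relation occupies.
-- pg_buffercache tracks buffers by relation and block number only --
-- it has no idea how many of the tuples on those pages are live.
CREATE EXTENSION pg_buffercache;

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```

A badly bloated table shows up here with just as many buffers as a healthy
one of the same on-disk size, dead tuples and all.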

>  and since the total amount of reading from the disk exceeds the amount of
> physical memory, then the valid tuples are "pushed out" of memory.

That's right.
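The eviction effect is easy to see with a toy model. The sketch below is
plain Python with made-up sizes, and it uses simple LRU rather than
Postgres's actual clock-sweep replacement, but the mechanism is the same:
a count(*) over a bloated table touches every page on disk, dead tuples
included, and the cache, which only sees pages, throws out the hot working
set to make room.

```python
from collections import OrderedDict

class LRUCache:
    """Toy page cache: holds at most `capacity` pages, evicts the
    least recently used page on overflow."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)        # hit: mark most recently used
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # miss: evict the LRU page
            self.pages[page] = True

cache = LRUCache(capacity=100)

# Warm the cache with the 100 "hot" pages of some frequently used table.
for p in range(100):
    cache.access(("hot", p))

# Now scan a bloated table: 1000 pages on disk, even though most of them
# hold nothing but dead tuples.  The cache sees only pages, so every one
# of them is pulled in, and the hot pages are all pushed out.
for p in range(1000):
    cache.access(("bloated", p))

hot_left = sum(1 for page in cache.pages if page[0] == "hot")
print(hot_left)  # 0 -- the working set was completely evicted
```

This is exactly why the next access to the hot table after such a scan has
to go back to disk for everything.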

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
