On 24 Sep, 23:59, si...@2ndquadrant.com (Simon Riggs) wrote:
> On Thu, 2007-09-20 at 19:07 +0400, wrote:
> > I am newbie in Postgres, but previously worked with Oracle. Now I am
> > puzzled with heap_blks_hit/heap_blks_read ratio. I make select from
> > table (whose size is larger than shared_buffer) and saw that it was
> > wholly read from disk after second, third and so on runs. Isn't it
> > wrong that RDBMS tries to cache table data blocks at least PARTIALLY
> > in the case of insufficient cache buffer?
>
> It does cache partially, but it's always the wrong part of the table.
>
Let's say you have a very big table. Even if you query it through an
index, the heap pages still have to be read, and if your queries hit
table pages fairly randomly you *will* see that effect. You can think
about partitioning - the idea looks good, since you can save some I/O
bandwidth by paying with processor time - and partitions can help if
you do seq scans, but with index scans they do not improve
heap_blks_hit.
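
For reference, the ratio the original poster is looking at can be
checked from pg_statio_user_tables; the table name below is only an
example:

  -- rough buffer hit ratio for one table
  SELECT relname, heap_blks_read, heap_blks_hit,
         round(heap_blks_hit::numeric
               / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS hit_ratio
  FROM pg_statio_user_tables
  WHERE relname = 'my_table';

Keep in mind this only counts hits in shared_buffers; reads served
from the OS cache still show up as heap_blks_read.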
Instead you can CLUSTER the table on an index ('index cluster'),
especially when there are not too many writes...
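
A minimal sketch of that, assuming an index my_table_idx already
exists on my_table (names are only examples):

  CLUSTER my_table_idx ON my_table;
  -- newer releases also accept: CLUSTER my_table USING my_table_idx;
  ANALYZE my_table;

Note that CLUSTER is a one-time reordering - rows inserted afterwards
are not kept in index order, so it has to be repeated now and then.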
--
Regards,
Michał Zaborowski (TeXXaS)