On 20 Jan 2003 at 2:14, Tom Lane wrote:
> "Shridhar Daithankar" <shridhar_daithankar@persistent.co.in> writes:
> > Assuming that one knows what he/she is doing, would it help in such cases, i.e.
> > the linear search thing, to bump up the page size to, say, 16K/32K?
>
> You mean increase page size and decrease the number of buffers
> proportionately? It'd save on buffer-management overhead, but
> I wouldn't assume there'd be an overall performance gain. The
> system would have to do more I/O per page read or written; which
> might be a wash for sequential scans, but I bet it would hurt for
> random access.
Right. But it has its applications. If I am storing huge data blocks, say gene
sequence data, I might be better off living with the relatively larger page
fragmentation.
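For anyone who wants to try it, here is a rough sketch of what that involves.
BLCKSZ is a compile-time constant, not a runtime setting; the exact header it
lives in differs between versions, so treat the location as an assumption:

    /*
     * Hedged sketch: BLCKSZ is a compile-time constant in the PostgreSQL
     * source tree (which header depends on the version).  It must be a
     * power of two, at most 32K, and changing it means a full rebuild
     * plus a fresh initdb, since the on-disk page size changes.
     */
    #define BLCKSZ 32768    /* default is 8192 */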
> > and that is also the only way to make PostgreSQL use more than a couple of gigs
> > of RAM, isn't it?
>
> It seems quite unrelated. The size of our shared memory segment is
> limited by INT_MAX --- chopping it up differently won't change that.
Well, if my page size is doubled, I get double the amount of shared buffer memory
for the same number of buffers. That was the logic, nothing else.
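To make the arithmetic explicit (the numbers below are hypothetical, not from
this thread): the buffer pool is shared_buffers * BLCKSZ bytes, and as you say
the whole segment is capped at INT_MAX bytes, so doubling BLCKSZ doubles the
memory per buffer count but does not move the ~2 GB ceiling:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical settings, just to show the arithmetic. */
        long long shared_buffers = 262144;   /* number of buffers */
        long long blcksz         = 8192;     /* bytes per page    */
        long long pool           = shared_buffers * blcksz;

        /* 262144 * 8192 = 2^31 bytes, which already exceeds INT_MAX, */
        /* so this configuration would not fit in one segment anyway. */
        printf("buffer pool = %lld bytes, segment cap = %d bytes\n",
               pool, INT_MAX);
        return 0;
    }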
> In any case, I think worrying because you can't push shared buffers
> above two gigs is completely wrongheaded, for the reasons already
> discussed in this thread. The notion that Postgres can't use more
> than two gig because its shared memory is limited to that is
> *definitely* wrongheaded. We can exploit however much memory your
> kernel can manage for kernel disk cache.
Well, I agree completely. However, there are folks and situations that demand
things simply because they can be done. This was just to find out the absolute
limit of what it can manage.
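For the record, one way to act on that advice is a postgresql.conf along these
lines (values are made up for illustration; both parameters are counted in 8K
pages in the 7.3-era defaults, so check your version's docs): keep shared
buffers modest and tell the planner how much kernel cache it can expect.

    # Hypothetical numbers, assuming the default 8K page size:
    shared_buffers = 32768          # ~256 MB of PostgreSQL buffer cache
    effective_cache_size = 262144   # ~2 GB of kernel disk cache (planner hint)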
Bye
Shridhar
--
Bagdikian's Observation: Trying to be a first-rate reporter on the average
American newspaper is like trying to play Bach's "St. Matthew Passion" on a
ukelele.