James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
>> We start by creating a chunk of shared memory that all processes
>> (we do not use threads) will have mapped at a common address,
>> and we read() and write() into that chunk.
>
> Yes, that's what I was thinking: it's a cache. About how many
> files comprise this cache? Are you thinking it's too difficult
> for every process to map the files?
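
(To illustrate the general mechanism being described -- this is only a
minimal sketch, not PostgreSQL's actual code, and the segment size and
fork()/strcpy() details are placeholders: a MAP_SHARED | MAP_ANONYMOUS
mapping created before fork() is inherited by every child at the same
virtual address, so all processes can read and write the same chunk
directly.)

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define SHMEM_SIZE (64 * 1024 * 1024)   /* placeholder segment size */

int
main(void)
{
    /* Created before fork(), so every child inherits the mapping
     * at the same virtual address. */
    char *shmem = mmap(NULL, SHMEM_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    if (shmem == MAP_FAILED)
    {
        perror("mmap");
        return 1;
    }

    if (fork() == 0)
    {
        /* child writes into the shared chunk */
        strcpy(shmem, "written by child");
        _exit(0);
    }

    wait(NULL);                 /* parent sees the child's write */
    printf("%s\n", shmem);
    munmap(shmem, SHMEM_SIZE);
    return 0;
}
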
It occurred to me that I don't remember seeing any indication of
how many processes we're talking about. There is one process per
database connection, plus some administrative processes, like the
checkpoint process and the background writer. At the low end,
about 10 processes would be connected to the shared memory. The
highest I've personally seen is about 3000; I don't know how far
above that people might try to push it. I always recommend a
connection pool to limit the number of database connections to
something near ((2 * core count) + effective spindle count), since
that's where I typically see best performance; but people don't
always do that.
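
(As a worked example with hypothetical numbers: on a 16-core machine
with 4 effective spindles, that heuristic suggests a pool of about
(2 * 16) + 4 = 36 connections.)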
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company