Re: pg_stat_*_columns? - Mailing list pgsql-hackers

From Robert Haas
Subject Re: pg_stat_*_columns?
Msg-id CA+TgmobpzaO2rgCeu-Uhcwie59ToJQSFsPaj_CPPHWX9A=p1uQ@mail.gmail.com
In response to Re: pg_stat_*_columns?  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: pg_stat_*_columns?  (Magnus Hagander <magnus@hagander.net>)
List pgsql-hackers
On Sat, Jun 20, 2015 at 7:01 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> If we arranged things so that the processes could use the data in the
>> DSM directly rather than having to copy it out, we'd presumably save
>> quite a bit of memory, since the whole structure would be shared
>> rather than each backend having its own copy.
>
> That is not going to happen, because it would imply locking out the stats
> collector from doing any more updates for the entire time that any backend
> is looking at the results.  We *do* need to copy.

No, it doesn't mean that at all.  As often as needed, the stats
collector would build and publish a new copy of the data in a new DSM
segment.  The old one would stick around until the last reader drops
off.  So it would be almost totally lock-free.  That's a very
appealing design in my book.
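The publish-and-retire scheme above can be sketched as follows.  This is a single-process illustration with hypothetical names (StatsSnapshot, snapshot_publish, etc. are not real PostgreSQL symbols); a real implementation would live in DSM segments, use dsm_attach/dsm_detach rather than malloc/free, and protect the refcount with atomics or a spinlock:

```c
#include <stdlib.h>

/* Sketch: the collector swaps in a new snapshot; an old snapshot is
 * destroyed only when it has been superseded AND its last reader has
 * dropped off, so readers never block the collector. */
typedef struct StatsSnapshot
{
    int     refcount;       /* readers currently attached */
    int     retired;        /* superseded by a newer snapshot? */
    /* ... stats payload would follow here in the DSM segment ... */
} StatsSnapshot;

static StatsSnapshot *current_snapshot = NULL;

/* Reader side: pin the latest snapshot before reading it. */
StatsSnapshot *
snapshot_acquire(void)
{
    StatsSnapshot *snap = current_snapshot;

    snap->refcount++;
    return snap;
}

/* Reader side: unpin; destroy if we were the last reader of a
 * snapshot that has already been retired. */
void
snapshot_release(StatsSnapshot *snap)
{
    if (--snap->refcount == 0 && snap->retired)
        free(snap);         /* stand-in for destroying the DSM segment */
}

/* Collector side: publish a new snapshot, retire the old one.  The
 * old segment lingers only as long as someone is still reading it. */
void
snapshot_publish(StatsSnapshot *newsnap)
{
    StatsSnapshot *old = current_snapshot;

    newsnap->refcount = 0;
    newsnap->retired = 0;
    current_snapshot = newsnap;

    if (old != NULL)
    {
        old->retired = 1;
        if (old->refcount == 0)
            free(old);
    }
}
```

The key property is in snapshot_release: a reader holding a pin keeps its (possibly stale) snapshot alive, but never stops the collector from publishing a fresh one.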

Stuffing all of the data through a message queue will be very
inefficient by comparison.  If the queue is big enough to fit all the
info in a single chunk, it will be nice and fast.  If not, the process
receiving the data will have to read a chunk and then go to sleep
while it waits for the next chunk to be sent.  Even if you make the
message queue pretty big, like 1MB, a large stats file, say 80MB, will
require ~160 context switches to transfer.  That's probably going to
suck, especially if the stats collector does anything other than
trying to service writes to the queues.
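The back-of-the-envelope arithmetic above (80MB file, 1MB queue, roughly two context switches per chunk as receiver and sender alternately sleep and wake) can be checked with a couple of helper functions; these are illustrative only, not part of any real shm_mq API:

```c
/* Ceiling division: how many queue-sized chunks a file needs. */
long
chunks_needed(long file_bytes, long queue_bytes)
{
    return (file_bytes + queue_bytes - 1) / queue_bytes;
}

/* Roughly two context switches per chunk: the receiver wakes to
 * drain the queue, then the sender wakes to refill it. */
long
context_switches(long file_bytes, long queue_bytes)
{
    return 2 * chunks_needed(file_bytes, queue_bytes);
}
```

With an 80MB stats file and a 1MB queue this yields 80 chunks and ~160 context switches, matching the figure in the mail.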

>> But if the structure
>> got too big to map (on a 32-bit system), then you'd be sort of hosed,
>> because there's no way to attach just part of it.  That might not be
>> worth worrying about, but it depends on how big it's likely to get - a
>> 32-bit system is very likely to choke on a 1GB mapping, and maybe even
>> on a much smaller one.
>
> Yeah, I'm quite worried about assuming that we can map a data structure
> that might be of very significant size into shared memory on 32-bit
> machines.  The address space just isn't there.

Considering the advantages of avoiding message queues, I think we
should think a little bit harder about whether we can't find some way
to skin this cat.  As I think about this a little more, I'm not sure
there's really a problem with one stats DSM per database.  Sure, the
system might have 100,000 databases in some crazy pathological case,
but the maximum number of those that can be in use is bounded by
max_connections, which means the maximum number of stats file DSMs we
could ever need at one time is also bounded by max_connections.  There
are a few corner cases to think about, like if the user writes a
client that connects to all 100,000 databases in very quick
succession, we've got to jettison the old DSMs fast enough to make
room for the new DSMs before we run out of slots, but that doesn't
seem like a particularly tough nut to crack.  If the stats collector
ensures that it never attaches to more than MaxBackends stats DSMs at
a time, and each backend ensures that it never attaches to more than
one stats DSM at a time, then 2 * MaxBackends stats DSMs is always
enough.  And that's just a matter of bumping
PG_DYNSHMEM_SLOTS_PER_BACKEND from 2 to 4.
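The slot accounting above can be written out explicitly.  The function names below are hypothetical (only PG_DYNSHMEM_SLOTS_PER_BACKEND and MaxBackends are real PostgreSQL identifiers), but the arithmetic is the one the argument relies on:

```c
/* Worst-case simultaneous stats DSMs: the collector pins at most
 * MaxBackends of them, and each backend pins at most one. */
int
max_stats_dsms(int max_backends)
{
    return max_backends + max_backends;     /* 2 * MaxBackends */
}

/* Total DSM control slots available system-wide. */
int
total_dsm_slots(int slots_per_backend, int max_backends)
{
    return slots_per_backend * max_backends;
}
```

Bumping PG_DYNSHMEM_SLOTS_PER_BACKEND from 2 to 4 adds exactly 2 * MaxBackends slots, which is precisely the worst-case number of stats DSMs, so the existing uses of the other 2 * MaxBackends slots are unaffected.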

In more realistic cases, it will probably be normal for many or all
backends to be connected to the same database, and the number of stats
DSMs required will be far smaller.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


