> ... and a lot more load on the CPU. Same-machine "network" connections
> are much cheaper (on most kernels, anyway) than real network
> connections.
>
> I think all of this discussion is vast overkill. No one has yet
> demonstrated that it's not sufficient to have *one* collector process
> and a lossy transmission method. Let's try that first, and if it really
> proves to be unworkable then we can get out the lily-gilding equipment.
> But there is tons more stuff to do before we have useful stats at all,
> and I don't think that this aspect is the most critical part of the
> problem.
Agreed. Sounds like overkill.
How about a per-backend shared memory area for stats, plus a global
shared memory area that each backend can add to when it exits? That
meets most of our needs.
The only open issue is per-table stats, and I would like to see a
circular buffer implemented to handle that, with a collection process
that has access to shared memory. Even better, have an SQL table
updated with the per-table stats periodically: a collector process
that periodically reads through the shared memory and UPDATEs SQL
tables with the information.
--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026