Re: shared-memory based stats collector - Mailing list pgsql-hackers

From Robert Haas
Subject Re: shared-memory based stats collector
Date
Msg-id CA+TgmoYQhr30eAcgJCi1v0FhA+3RP1FZVnXqSTLe=6fHy9e5oA@mail.gmail.com
In response to shared-memory based stats collector  (Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>)
Responses Re: shared-memory based stats collector  (Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>)
List pgsql-hackers
On Fri, Jun 29, 2018 at 4:34 AM, Kyotaro HORIGUCHI
<horiguchi.kyotaro@lab.ntt.co.jp> wrote:
> Nowadays PostgreSQL has a dynamic shared hash table (dshash), so we
> can use it as the main storage for statistics. This lets us share the
> data without stress.
>
> A previously posted PoC tried to use a "locally copied" dshash, but
> that didn't look right, so I steered in a different direction.
>
> With this patch, dshash can create a local copy based on dynahash.

Copying the whole hash table kind of sucks, partly because of the
time it will take to copy it, but also because it means that memory
usage is still O(nbackends * ntables).  Without looking at the patch,
I'm guessing that you're doing that because we need a way to show each
transaction a consistent snapshot of the data, and I admit that I
don't see another obvious way to tackle that problem.  Still, it would
be nice if we had a better idea.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

