Re: Hash tables in dynamic shared memory - Mailing list pgsql-hackers

From: Magnus Hagander
Subject: Re: Hash tables in dynamic shared memory
Date:
Msg-id: CABUevEz=fR9__vE0GMVsrEftmMCqHrne62KQh23vLx-k2_5hAQ@mail.gmail.com
In response to: Re: Hash tables in dynamic shared memory (Thomas Munro <thomas.munro@enterprisedb.com>)
Responses: Re: Hash tables in dynamic shared memory (Thomas Munro <thomas.munro@enterprisedb.com>)
           Re: Hash tables in dynamic shared memory (Andres Freund <andres@anarazel.de>)
List: pgsql-hackers
On Oct 5, 2016 1:23 AM, "Thomas Munro" <thomas.munro@enterprisedb.com> wrote:
>
> On Wed, Oct 5, 2016 at 12:11 PM, Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
> > On Wed, Oct 5, 2016 at 11:22 AM, Andres Freund <andres@anarazel.de> wrote:
> >>> Potential use cases for DHT include caches, in-memory database objects
> >>> and working state for parallel execution.
> >>
> >> Is there a more concrete example, i.e. a user we'd convert to this at
> >> the same time as introducing this hashtable?
> >
> > A colleague of mine will shortly post a concrete patch to teach an
> > existing executor node how to be parallel aware, using DHT.  I'll let
> > him explain.
> >
> > I haven't looked into whether it would make sense to convert any
> > existing shmem dynahash hash table to use DHT.  The reason for doing
> > so would be to move it out to DSM segments and enable dynamically
> > growing.  I suspect that the bounded size of things like the hash
> > tables involved in (for example) predicate locking is considered a
> > feature, not a bug, so any such cluster-lifetime core-infrastructure
> > hash table would not be a candidate.  More likely candidates would be
> > ephemeral data used by the executor, as in the above-mentioned patch,
> > and long lived caches of dynamic size owned by core code or
> > extensions.  Like a shared query plan cache, if anyone can figure out
> > the invalidation magic required.
>
> Another thought: it could be used to make things like
> pg_stat_statements not have to be in shared_preload_libraries.

That would indeed be a great improvement. And possibly also allow changing the maximum number of statements it can track without a restart?

I was also wondering if it might be useful as a replacement for some of the pgstats stuff, to get rid of the cost of spooling to file and then rebuilding the hash tables on the receiving end. I've been waiting for this patch to figure out if that's useful. I mean, keep the stats collector doing what it does now over UDP, but present the results in shared hash tables instead of files.

/Magnus
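
To give a rough idea of what "shared hash tables instead of files" could look like from a backend's point of view, here is a minimal sketch written against the dshash interface that later landed in PostgreSQL (lib/dshash.h); the DHT API discussed in this thread differed in details, and the entry layout, function names, and tranche id below are illustrative assumptions rather than anything from the patch itself.

#include "postgres.h"

#include "lib/dshash.h"
#include "storage/lwlock.h"
#include "utils/dsa.h"

/* Illustrative entry type: per-query counters keyed by query id. */
typedef struct QueryStatsEntry
{
	uint64		queryid;		/* hash key; must be the first field */
	uint64		calls;
	double		total_time;		/* milliseconds */
} QueryStatsEntry;

/* Illustrative parameters: byte-wise key hashing and comparison. */
static const dshash_parameters stats_params = {
	.key_size = sizeof(uint64),
	.entry_size = sizeof(QueryStatsEntry),
	.compare_function = dshash_memcmp,
	.hash_function = dshash_memhash,
	.tranche_id = LWTRANCHE_FIRST_USER_DEFINED	/* assumed tranche */
};

/* Create the table in a DSA area, e.g. at cluster or extension startup. */
static dshash_table *
create_stats_table(dsa_area *area)
{
	return dshash_create(area, &stats_params, NULL);
}

/*
 * Record one execution of a query.  "stats" is a dshash_table this backend
 * created or attached to earlier; the table lives in DSA-backed shared
 * memory and can grow as entries are added, unlike a fixed-size shmem
 * dynahash table.
 */
static void
record_query(dshash_table *stats, uint64 queryid, double elapsed_ms)
{
	bool		found;
	QueryStatsEntry *entry;

	/* Find or insert; the entry comes back exclusively locked. */
	entry = dshash_find_or_insert(stats, &queryid, &found);
	if (!found)
	{
		entry->calls = 0;
		entry->total_time = 0.0;
	}
	entry->calls++;
	entry->total_time += elapsed_ms;

	dshash_release_lock(stats, entry);
}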
