Re: [HACKERS] [BUGS] BUG #14825: enum type: unsafe use? - Mailing list pgsql-bugs
From: Thomas Munro
Subject: Re: [HACKERS] [BUGS] BUG #14825: enum type: unsafe use?
Date:
Msg-id: CAEepm=2fzMEzjeJNdaADx501G9QtF2UfZ=Hi50cDb7daE31W0Q@mail.gmail.com
In response to: Re: [HACKERS] [BUGS] BUG #14825: enum type: unsafe use? (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-bugs
On Fri, Jan 12, 2018 at 4:19 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Jan 11, 2018 at 6:01 PM, Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
>> [ the data isn't session lifetime ]
>>
>> So I agree with Tom's suggestion:
>>
>> On Wed, Oct 4, 2017 at 2:29 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Perhaps serialize the contents into an array in DSM, then rebuild a hash
>>> table from that in the worker.  Robert might have a better idea though.
>>
>> I'd happily volunteer to write or review a patch to do that.  Is there
>> a rebase of the stuff that got reverted, to build on?
>
> Those seem like reasons not to use Session, but not necessarily
> reasons not to have the leader directly build the dshash that the
> workers access rather than building a separate hash table in every
> worker.

Are you saying we should do the work now to create a per-transaction
DSM segment + DSA area + thing that every backend attaches to?  I
guess that would be much like the per-session one, except that we'd
reset it at end of xact (a DSA facility we don't have yet, but which
would amount to zapping all superblocks, freeing all but one segment
or something).  Like the per-session one, I suppose it would be
created on demand when you run your first parallel query, and then
survive as long as the leader backend.  I assumed that we'd do that
work when we really need it for writable parallel query, but that
it'd be overkill for this.

Note that the shared record thing still has a backend-local cache in
front of the shared registry, to avoid acquiring locks, so it doesn't
actually avoid creating a per-backend hash table, though its entries
point directly to TupleDesc objects in shm.

> Maybe having every worker build a separate hash table is a good idea
> for some reason, but it's not clear to me that you've stated such a
> reason.
I didn't think creating backend-local hash tables would be a problem,
because it's a vanishingly rare occurrence for the hash table to be
created at all (ie when you've altered an enum), and, if created, for
it to have more than a couple of entries in it.

But here's another idea, at the small end of the how-much-work
spectrum:

1.  Put the OIDs into a sorted array in the DSM segment.

2.  Arrange for EnumBlacklisted() (assuming we're talking about the
    infrastructure from commit 1635e80d that was later reverted) to
    get its hands on a pointer to that + size and binary search it,
    instead of looking in the hash table (enum_blacklist), when
    running in a worker.

--
Thomas Munro
http://www.enterprisedb.com