Re: COPY FROM WHEN condition - Mailing list pgsql-hackers

From: Tomas Vondra
Subject: Re: COPY FROM WHEN condition
Msg-id: eb58de96-0722-bbff-c08a-758dc818473b@2ndquadrant.com
In response to: Re: COPY FROM WHEN condition (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers

On 1/21/19 11:15 PM, Tomas Vondra wrote:
> 
> 
> On 1/21/19 7:51 PM, Andres Freund wrote:
>> Hi,
>>
>> On 2019-01-21 16:22:11 +0100, Tomas Vondra wrote:
>>>
>>>
>>> On 1/21/19 4:33 AM, Tomas Vondra wrote:
>>>>
>>>>
>>>> On 1/21/19 3:12 AM, Andres Freund wrote:
>>>>> On 2019-01-20 18:08:05 -0800, Andres Freund wrote:
>>>>>> On 2019-01-20 21:00:21 -0500, Tomas Vondra wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 1/20/19 8:24 PM, Andres Freund wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> On 2019-01-20 00:24:05 +0100, Tomas Vondra wrote:
>>>>>>>>> On 1/14/19 10:25 PM, Tomas Vondra wrote:
>>>>>>>>>> On 12/13/18 8:09 AM, Surafel Temesgen wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Dec 12, 2018 at 9:28 PM Tomas Vondra
>>>>>>>>>>> <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>      Can you also update the docs to mention that the functions called from
>>>>>>>>>>>      the WHERE clause does not see effects of the COPY itself?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Of course, I also added the same comment to the insertion
>>>>>>>>>>> method selection.
>>>>>>>>>>
>>>>>>>>>> FWIW I've marked this as RFC and plan to get it committed this week.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Pushed, thanks for the patch.
>>>>>>>>
>>>>>>>> While rebasing the pluggable storage patch ontop of this I noticed that
>>>>>>>> the qual appears to be evaluated in query context. Isn't that a bad
>>>>>>>> idea? ISTM it should have been evaluated a few lines above, before the:
>>>>>>>>
>>>>>>>>         /* Triggers and stuff need to be invoked in query context. */
>>>>>>>>         MemoryContextSwitchTo(oldcontext);
>>>>>>>>
>>>>>>>> Yes, that'd require moving the ExecStoreHeapTuple(), but that seems ok?
>>>>>>>>
>>>>>>>
>>>>>>> Yes, I agree. It's a bit too late for me to hack and push stuff, but I'll
>>>>>>> fix that tomorrow.
>>>>>>
>>>>>> NP. On second thought, the problem is probably smaller than I thought at
>>>>>> first, because ExecQual() switches to the econtext's per-tuple memory
>>>>>> context. But it's only reset once for each batch, so there's some
>>>>>> wastage. At least worth a comment.
>>>>>
>>>>> I'm tired, but perhaps its actually worse - what's being reset currently
>>>>> is the ESTate's per-tuple context:
>>>>>
>>>>>         if (nBufferedTuples == 0)
>>>>>         {
>>>>>             /*
>>>>>              * Reset the per-tuple exprcontext. We can only do this if the
>>>>>              * tuple buffer is empty. (Calling the context the per-tuple
>>>>>              * memory context is a bit of a misnomer now.)
>>>>>              */
>>>>>             ResetPerTupleExprContext(estate);
>>>>>         }
>>>>>
>>>>> but the quals are evaluated in the ExprContext's:
>>>>>
>>>>> ExecQual(ExprState *state, ExprContext *econtext)
>>>>> ...
>>>>>     ret = ExecEvalExprSwitchContext(state, econtext, &isnull);
>>>>>
>>>>>
>>>>> which is created with:
>>>>>
>>>>> /* Get an EState's per-output-tuple exprcontext, making it if first use */
>>>>> #define GetPerTupleExprContext(estate) \
>>>>>     ((estate)->es_per_tuple_exprcontext ? \
>>>>>      (estate)->es_per_tuple_exprcontext : \
>>>>>      MakePerTupleExprContext(estate))
>>>>>
>>>>> and creates its own context:
>>>>>     /*
>>>>>      * Create working memory for expression evaluation in this context.
>>>>>      */
>>>>>     econtext->ecxt_per_tuple_memory =
>>>>>         AllocSetContextCreate(estate->es_query_cxt,
>>>>>                               "ExprContext",
>>>>>                               ALLOCSET_DEFAULT_SIZES);
>>>>>
>>>>> so this is currently just never reset.
>>>>
>>>> Actually, no. The ResetPerTupleExprContext boils down to
>>>>
>>>>     MemoryContextReset((econtext)->ecxt_per_tuple_memory)
>>>>
>>>> and ExecEvalExprSwitchContext does this
>>>>
>>>>     MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
>>>>
>>>> So it's resetting the right context, although only on batch boundary.
>>
>>>>> Seems just using ExecQualAndReset() ought to be sufficient?
>>>>>
>>>>
>>>> That may still be the right thing to do.
>>>>
>>>
>>> Actually, no, because that would reset the context far too early (and
>>> it's easy to trigger segfaults). So the reset would have to happen after
>>> processing the row, not this early.
>>
>> Yea, sorry, I was too tired yesterday evening. I'd spent 10h splitting
>> up the pluggable storage patch into individual pieces...
>>
>>
>>> But I think the current behavior is actually OK, as it matches what we
>>> do for defexprs. And the comment before ResetPerTupleExprContext says this:
>>>
>>>     /*
>>>      * Reset the per-tuple exprcontext. We can only do this if the
>>>      * tuple buffer is empty. (Calling the context the per-tuple
>>>      * memory context is a bit of a misnomer now.)
>>>      */
>>>
>>> So the per-tuple context is not quite per-tuple anyway. Sure, we might
>>> rework that but I don't think that's an issue in this patch.
>>
>> I'm *not* convinced by this. I think it's bad enough that we do this for
>> normal COPY, but for WHEN, we could end up *never* resetting before the
>> end. Consider a case where a single tuple is inserted, and then *all*
>> rows are filtered.  I think this needs a separate econtext that's reset
>> every round. Or alternatively you could fix the code not to rely on
>> per-tuple not being reset when tuples are buffered - that actually ought
>> to be fairly simple.
>>
> 
> I think separating the per-tuple and per-batch contexts is the right
> thing to do, here. It seems the batching was added somewhat later and
> using the per-tuple context is rather confusing.
> 

OK, here is a WIP patch doing that. It creates a new "batch" context,
and allocates tuples in it (instead of the per-tuple context). The
per-tuple context is now reset always, irrespective of nBufferedTuples,
and the batch context is reset every time the batch is emptied.

It turned out to be a tad more complex due to partitioning, because when
we find the partitions do not match, the tuple is already allocated in
the "current" context (be it per-tuple or batch). So we can't just free
the whole context at that point. The old code worked around this by
alternating two contexts, but that seems a bit too cumbersome to me, so
the patch simply copies the tuple to the new context. That allows us to
reset the batch context always, right after emptying the buffer. I need
to do some benchmarking to see if the extra copy causes any regression.

Overall, separating the contexts makes it quite a bit clearer. I'm not
entirely happy about the per-tuple context being "implicit" (hidden in
executor context) while the batch context being explicitly created, but
there's not much I can do about that.

The patch also includes the fix correcting the volatility check on the
WHERE clause, although that should be committed separately.

regards
-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachment
