On Tuesday, June 04, 2013 12:37 AM Noah Misch wrote:
> A colleague, Korry Douglas, observed a table partitioning scenario
> where deserializing pg_constraint.ccbin is a hot spot. The following
> test case, a simplification of a typical partitioning setup, spends 28%
> of its time in
> stringToNode() and callees thereof:
>
> The executor caches each CHECK constraint in ResultRelInfo as a planned
> expression. That cache is highly effective for long-running
> statements, but the trivial INSERTs effectively work without a cache.
> Korry devised this patch to cache the stringToNode() form of the
> constraint in the relcache. It improves the benchmark's partitioned
> scenario by 33%:
>
> -- Timings (seconds) --
> master, INSERT parent: 14.2, 14.4, 14.4
> patched, INSERT parent: 9.6, 9.7, 9.7
>
> master, INSERT*10 child: 9.9, 9.9, 10.2
> patched, INSERT*10 child: 10.0, 10.2, 10.2
>
> There's still not much to like about that tenfold overhead from use of
> the partition routing trigger, but this patch makes a nice cut into
> that overhead without doing anything aggressive.
This patch can give a good performance gain in the scenario you describe. In fact, I took readings with the patch, and they show a similar gain:
-- Timings (seconds) --
master, INSERT parent: 14.9, 15.4, 15.4
patched, INSERT parent: 9.9, 9.6, 9.5

master, INSERT*10 child: 13.8, 14.5, 15.6
patched, INSERT*10 child: 13.0, 14.3, 14.6
This patch would increase the relcache size: it adds 4 bytes for each constraint on a table, irrespective of whether that constraint ever yields a performance benefit.

Why, in function CheckConstraintFetch(), is the node not formed from the string?
>
> Some call sites need to modify the node tree, so the patch has them do
> copyObject(). I ran a microbenchmark of copyObject() on the cached
> node tree vs. redoing stringToNode(), and copyObject() still won by a
> factor of four.
I have not done a performance run myself to measure the cost of the extra copyObject(). What kind of benchmark did you use to validate it?
With Regards,
Amit Kapila.