Thread: proposal - log_full_scan

proposal - log_full_scan

From: Pavel Stehule
Hi

Today I worked on a Postgres server used for a critical service. Because the application is very specific, we had to do the final tuning on the production server. I fixed a lot of queries, but I am not able to detect fast queries that do a full scan of medium-size tables - up to 1M rows. Naturally I don't want to log all queries; at the moment these queries run about 10 times per second.

It would be nice to be able to log queries that do a full scan and read more tuples than a specified limit, or that do a full scan of specified tables.

What do you think about the proposed feature?

Regards

Pavel

Re: proposal - log_full_scan

From: Justin Pryzby
On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:
> today I worked on postgres's server used for critical service. Because the
> application is very specific, we had to do final tuning on production
> server. I fix lot of queries, but I am not able to detect fast queries that
> does full scan of middle size tables - to 1M rows. Surely I wouldn't log
> all queries. Now, there are these queries with freq 10 per sec.
> 
> Can be nice to have a possibility to set a log of  queries that do full
> scan and read more tuples than is specified limit or that does full scan of
> specified tables.
> 
> What do you think about the proposed feature?

Are you able to use auto_explain with auto_explain.log_min_duration?

Then you can search for query logs with
message ~ 'Seq Scan .* \(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*\)'
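
For example (untested, and only a sketch: it assumes the csv logs are imported into a table called postgres_log, as in the file_fdw example from the docs, and that auto_explain is active for the sessions you care about):

-- session-level test; server-wide you would configure this in postgresql.conf
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '20ms';
SET auto_explain.log_analyze = on;    -- so the logged plans contain actual row counts

-- then look for plans with a seq scan returning 100k+ rows
SELECT message
FROM postgres_log
WHERE message ~ 'Seq Scan .* \(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*\)';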

Or can you use pg_stat_all_tables.seq_scan ?
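
For example (column names as in the pg_stat_all_tables view):

-- tables that are being read sequentially the most
SELECT relname, seq_scan, seq_tup_read, idx_scan
FROM pg_stat_all_tables
ORDER BY seq_tup_read DESC
LIMIT 20;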

But it seems to me that filtering on the duration would be both a more
important criterion and a more general one than "seq scan with number of rows".

| (split_part(message, ' ', 2)::float/1000 AS duration ..) WHERE duration>2222;
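
Spelled out, that could look roughly like this (untested; again assuming the csv log is available as a postgres_log table and messages of the form 'duration: 123.456 ms ...'):

SELECT message, duration
FROM (
    SELECT message,
           split_part(message, ' ', 2)::float / 1000 AS duration   -- seconds
    FROM postgres_log
    WHERE message LIKE 'duration:%'
) AS q
WHERE duration > 2.222;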

-- 
Justin



Re: proposal - log_full_scan

From: Pavel Stehule


On Sat, Apr 17, 2021 at 17:09, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:
> today I worked on postgres's server used for critical service. Because the
> application is very specific, we had to do final tuning on production
> server. I fix lot of queries, but I am not able to detect fast queries that
> does full scan of middle size tables - to 1M rows. Surely I wouldn't log
> all queries. Now, there are these queries with freq 10 per sec.
>
> Can be nice to have a possibility to set a log of  queries that do full
> scan and read more tuples than is specified limit or that does full scan of
> specified tables.
>
> What do you think about the proposed feature?

Are you able to use auto_explain with auto_explain.log_min_duration ?

Unfortunately, I cannot use it. This server executes 5K queries per second, and I am afraid to decrease log_min_duration.

The logs are forwarded over the network, and the last time users played with this, they had problems with the network.

I am in a situation where I know there are queries faster than 100ms, and from pg_stat_user_tables I can see that there are full scans, but I cannot see the queries themselves.

A full scan of this table takes about 30ms and the table has 200K rows, so decreasing log_min_duration to that value is very risky.



Then you can search for query logs with
message ~ 'Seq Scan .* \(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*)'

Or can you use pg_stat_all_tables.seq_scan ?

I use pg_stat_all_tables.seq_scan and I can see the seq scans there, but I need to know the related queries.


But it seems to me that filtering on the duration would be both a more
important criteria and a more general one, than "seq scan with number of rows".

| (split_part(message, ' ', 2)::float/1000 AS duration ..) WHERE duration>2222;

--
Justin

Re: proposal - log_full_scan

From: Julien Rouhaud
On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:
> 
> The fullscan of this table needs about 30ms and has 200K rows. So
> decreasing log_min_duration to this value is very risky.
> 
> [...]
> 
> I use  pg_stat_all_tables.seq_scan and I see seq scans there. But I need to
> know the related queries.

Maybe you could use pg_qualstats ([1]) for that?  It will give you the list of
quals (with the underlying queryid) with a tag to specify if they were executed
as an index scan or a sequential scan.  It wouldn't detect queries doing
sequential scan that don't have any qual for the underlying relations, but
those shouldn't be a concern in your use case.

If you set up some sampling, the overhead should be minimal.

[1]: https://github.com/powa-team/pg_qualstats/
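
A rough sketch of the setup (the GUC and column names below are from my memory of the extension's README, so please double-check them there):

-- postgresql.conf: shared_preload_libraries = 'pg_qualstats'
--                  pg_qualstats.sample_rate = 0.05   -- sample ~5% of queries
CREATE EXTENSION pg_qualstats;

-- quals evaluated as plain filters (i.e. not via an index), joined to
-- pg_stat_statements to recover the query text from the queryid
SELECT s.query, sum(q.execution_count) AS executions
FROM pg_qualstats() AS q
JOIN pg_stat_statements AS s USING (queryid)
WHERE q.eval_type = 'f'
GROUP BY s.query
ORDER BY executions DESC;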



Re: proposal - log_full_scan

From: Pavel Stehule


On Sat, Apr 17, 2021 at 18:54, Julien Rouhaud <rjuju123@gmail.com> wrote:
On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:
>
> The fullscan of this table needs about 30ms and has 200K rows. So
> decreasing log_min_duration to this value is very risky.
>
> [...]
>
> I use  pg_stat_all_tables.seq_scan and I see seq scans there. But I need to
> know the related queries.

Maybe you could use pg_qualstats ([1]) for that?  It will give you the list of
quals (with the underlying queryid) with a tag to specify if they were executed
as an index scan or a sequential scan.  It wouldn't detect queries doing
sequential scan that don't have any qual for the underlying relations, but
those shouldn't be a concern in your use case.

If you setup some sampling, the overhead should be minimal.

[1]: https://github.com/powa-team/pg_qualstats/

It has similar functionality, but there is the problem of getting it installed and configured. My idea is simpler - just

set

log_fullscan_min_tuples = 100000

or

alter table xxx set (log_fullscan_min_tuples = 0);

and then the complete query can be found in the log.

I think this can be really practical, so it could be core functionality. And it can log queries without
quals too. Production systems can be buggy, and it is important to find the bugs.

Regards

Pavel

Re: proposal - log_full_scan

From: Justin Pryzby
On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:
> so 17. 4. 2021 v 17:09 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:
> 
> > On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:
> > > today I worked on postgres's server used for critical service. Because the
> > > application is very specific, we had to do final tuning on production
> > > server. I fix lot of queries, but I am not able to detect fast queries that
> > > does full scan of middle size tables - to 1M rows. Surely I wouldn't log
> > > all queries. Now, there are these queries with freq 10 per sec.
> > >
> > > Can be nice to have a possibility to set a log of  queries that do full
> > > scan and read more tuples than is specified limit or that does full scan of
> > > specified tables.
> > >
> > > What do you think about the proposed feature?
> >
> > Are you able to use auto_explain with auto_explain.log_min_duration ?
> 
> Unfortunately,  I cannot use it. This server executes 5K queries per
> seconds, and I am afraid to decrease log_min_duration.
> 
> The logs are forwarded to the network and last time, when users played with
> it, then they had problems with the network.
..
> The fullscan of this table needs about 30ms and has 200K rows. So
> decreasing log_min_duration to this value is very risky.

auto_explain.sample_rate should allow setting a sufficiently low value of
log_min_duration.  It exists since v9.6.
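
Something like this (the thresholds are just an illustration):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = '20ms';
SET auto_explain.sample_rate = 0.01;   -- explain/log only ~1% of eligible queries

With sampling, even a low duration threshold only logs a manageable fraction of plans.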

-- 
Justin



Re: proposal - log_full_scan

From: Pavel Stehule


On Sat, Apr 17, 2021 at 20:36, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:
> so 17. 4. 2021 v 17:09 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:
>
> > On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:
> > > today I worked on postgres's server used for critical service. Because the
> > > application is very specific, we had to do final tuning on production
> > > server. I fix lot of queries, but I am not able to detect fast queries that
> > > does full scan of middle size tables - to 1M rows. Surely I wouldn't log
> > > all queries. Now, there are these queries with freq 10 per sec.
> > >
> > > Can be nice to have a possibility to set a log of  queries that do full
> > > scan and read more tuples than is specified limit or that does full scan of
> > > specified tables.
> > >
> > > What do you think about the proposed feature?
> >
> > Are you able to use auto_explain with auto_explain.log_min_duration ?
>
> Unfortunately,  I cannot use it. This server executes 5K queries per
> seconds, and I am afraid to decrease log_min_duration.
>
> The logs are forwarded to the network and last time, when users played with
> it, then they had problems with the network.
..
> The fullscan of this table needs about 30ms and has 200K rows. So
> decreasing log_min_duration to this value is very risky.

auto_explain.sample_rate should allow setting a sufficiently low value of
log_min_duration.  It exists since v9.6.


It cannot help - these queries are executed only a few times per second, while at the same time this server executes 500-1000 other queries per second.

Regards

Pavel
 
--
Justin

Re: proposal - log_full_scan

From: Pavel Stehule


On Sat, Apr 17, 2021 at 20:51, Pavel Stehule <pavel.stehule@gmail.com> wrote:


On Sat, Apr 17, 2021 at 20:36, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:
> so 17. 4. 2021 v 17:09 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:
>
> > On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:
> > > today I worked on postgres's server used for critical service. Because the
> > > application is very specific, we had to do final tuning on production
> > > server. I fix lot of queries, but I am not able to detect fast queries that
> > > does full scan of middle size tables - to 1M rows. Surely I wouldn't log
> > > all queries. Now, there are these queries with freq 10 per sec.
> > >
> > > Can be nice to have a possibility to set a log of  queries that do full
> > > scan and read more tuples than is specified limit or that does full scan of
> > > specified tables.
> > >
> > > What do you think about the proposed feature?
> >
> > Are you able to use auto_explain with auto_explain.log_min_duration ?
>
> Unfortunately,  I cannot use it. This server executes 5K queries per
> seconds, and I am afraid to decrease log_min_duration.
>
> The logs are forwarded to the network and last time, when users played with
> it, then they had problems with the network.
..
> The fullscan of this table needs about 30ms and has 200K rows. So
> decreasing log_min_duration to this value is very risky.

auto_explain.sample_rate should allow setting a sufficiently low value of
log_min_duration.  It exists since v9.6.


It cannot help - these queries are executed only a few times per second, while at the same time this server executes 500-1000 other queries per second.

Maybe this new option for the server and for auto_explain could be as simple as:

log_seqscan = (minimum number of tuples from one relation)
auto_explain.log_seqscan = (minimum number of tuples from one relation)

This is a feature similar to log_temp_files. The next step could be implementing it as a table option.

What do you think about it?
 
Regards

Pavel

An extension like pg_qualstats is good, but it does different work. In complex applications I need to detect buggy (forgotten) queries - last week I found two queries over bigger tables without predicates, so pg_qualstats doesn't help me. This is an application for a government, with a few (but for government typical) specifics: 1) the life cycle is short (one month), 2) there is no slow start - from the first moment the application will be used by several hundred thousand people, 3) the application is very public - so any issues are very interesting for the press and very unpleasant for the politicians, and in the next step for all suppliers (there are high penalties for failures), and the admins are not happy about external extensions, 4) the budget is not too big - there is no performance testing environment.

The first stages are covered well today. We can log and process very slow queries and fix them immediately - with CREATE INDEX CONCURRENTLY I can do that on production servers too, without high risk.
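
For example (hypothetical table and column names):

CREATE INDEX CONCURRENTLY ON big_table (customer_id);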

But the detection of bad but not-too-slow queries is hard. And as an external consultant I am not able to install any external extensions in the production environment to fix hot issues; the risk is not acceptable for the project managers, and I understand that. So I have to use only the tools available in core Postgres.




Regards

Pavel
 
--
Justin

Re: proposal - log_full_scan

From: Julien Rouhaud
On Sun, Apr 18, 2021 at 06:21:56AM +0200, Pavel Stehule wrote:
> 
> The extension like pg_qualstat is good, but it does different work.

Yes definitely.  It was just an idea if you needed something right now that
could more or less do what you needed, not saying that we shouldn't improve the
core :)

> In
> complex applications I need to detect buggy (forgotten) queries - last week
> I found two queries over bigger tables without predicates. So the qualstat
> doesn't help me. 

Also not totally helpful, but powa was created to detect problematic queries in
such cases.  It wouldn't say whether it's because of a seq scan or not (so yes, again,
we need to improve that), but it would give you the slowest queries (or top consumers
for any resource) for a given time interval.

> This is an application for a government with few (but for
> government typical) specific: 1) the life cycle is short (one month), 2)
> there is not slow start - from first moment the application will be used by
> more hundred thousands people, 3) the application is very public - so any
> issues are very interesting for press and very unpleasant for politics, and
> in next step for all suppliers (there are high penalty for failures), and
> an admins are not happy from external extensions, 4) the budget is not too
> big - there is not any performance testing environment
> 
> First stages are covered well today. We can log and process very slow
> queries, and fix it immediately - with CREATE INDEX CONCURRENTLY I can do
> it well on production servers too without high risk.
> 
> But the detection of some bad not too slow queries is hard. And as an
> external consultant I am not able to install any external extensions to the
> production environment for fixing some hot issues, The risk is not
> acceptable for project managers and I understand. So I have to use only
> tools available in Postgres.

Yes, I agree that having additional and more specialized tools in core postgres
would definitely help in similar scenarios.

I think that having some kind of threshold for seq scan (like the mentioned
auto_explain.log_seqscan = XXX) in auto_explain would be the best approach, as
you really need the plan to know why a seq scan was chosen and if it was a
reasonable choice or not.



Re: proposal - log_full_scan

From: Pavel Stehule


On Sun, Apr 18, 2021 at 14:28, Julien Rouhaud <rjuju123@gmail.com> wrote:
On Sun, Apr 18, 2021 at 06:21:56AM +0200, Pavel Stehule wrote:
>
> The extension like pg_qualstat is good, but it does different work.

Yes definitely.  It was just an idea if you needed something right now that
could more or less do what you needed, not saying that we shouldn't improve the
core :)

> In
> complex applications I need to detect buggy (forgotten) queries - last week
> I found two queries over bigger tables without predicates. So the qualstat
> doesn't help me.

Also not totally helpful but powa was created to detect problematic queries in
such cases.  It wouldn't say if it's because of a seq scan or not (so yes again
we need to improve that), but it would give you the slowest (or top consumer
for any resource) for a given time interval.

> This is an application for a government with few (but for
> government typical) specific: 1) the life cycle is short (one month), 2)
> there is not slow start - from first moment the application will be used by
> more hundred thousands people, 3) the application is very public - so any
> issues are very interesting for press and very unpleasant for politics, and
> in next step for all suppliers (there are high penalty for failures), and
> an admins are not happy from external extensions, 4) the budget is not too
> big - there is not any performance testing environment
>
> First stages are covered well today. We can log and process very slow
> queries, and fix it immediately - with CREATE INDEX CONCURRENTLY I can do
> it well on production servers too without high risk.
>
> But the detection of some bad not too slow queries is hard. And as an
> external consultant I am not able to install any external extensions to the
> production environment for fixing some hot issues, The risk is not
> acceptable for project managers and I understand. So I have to use only
> tools available in Postgres.

Yes I agree that having additional and more specialized tool in core postgres
would definitely help in similar scenario.

I think that having some kind of threshold for seq scan (like the mentioned
auto_explain.log_seqscan = XXX) in auto_explain would be the best approach, as
you really need the plan to know why a seq scan was chosen and if it was a
reasonable choice or not.

I would like to write this for core and for auto_explain too. I have been in situations where I could not use auto_explain either, although this extension is widely used and the risk of using it is low.

Once I detect the query, I can run the EXPLAIN manually. But sure, I think it could work well inside auto_explain too.
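
For example, once the query text is known (the query here is hypothetical):

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table WHERE status = 'open';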

Regards

Pavel

Re: proposal - log_full_scan

From: Pavel Stehule


On Sun, Apr 18, 2021 at 16:09, Pavel Stehule <pavel.stehule@gmail.com> wrote:


On Sun, Apr 18, 2021 at 14:28, Julien Rouhaud <rjuju123@gmail.com> wrote:
On Sun, Apr 18, 2021 at 06:21:56AM +0200, Pavel Stehule wrote:
>
> The extension like pg_qualstat is good, but it does different work.

Yes definitely.  It was just an idea if you needed something right now that
could more or less do what you needed, not saying that we shouldn't improve the
core :)

> In
> complex applications I need to detect buggy (forgotten) queries - last week
> I found two queries over bigger tables without predicates. So the qualstat
> doesn't help me.

Also not totally helpful but powa was created to detect problematic queries in
such cases.  It wouldn't say if it's because of a seq scan or not (so yes again
we need to improve that), but it would give you the slowest (or top consumer
for any resource) for a given time interval.

> This is an application for a government with few (but for
> government typical) specific: 1) the life cycle is short (one month), 2)
> there is not slow start - from first moment the application will be used by
> more hundred thousands people, 3) the application is very public - so any
> issues are very interesting for press and very unpleasant for politics, and
> in next step for all suppliers (there are high penalty for failures), and
> an admins are not happy from external extensions, 4) the budget is not too
> big - there is not any performance testing environment
>
> First stages are covered well today. We can log and process very slow
> queries, and fix it immediately - with CREATE INDEX CONCURRENTLY I can do
> it well on production servers too without high risk.
>
> But the detection of some bad not too slow queries is hard. And as an
> external consultant I am not able to install any external extensions to the
> production environment for fixing some hot issues, The risk is not
> acceptable for project managers and I understand. So I have to use only
> tools available in Postgres.

Yes I agree that having additional and more specialized tool in core postgres
would definitely help in similar scenario.

I think that having some kind of threshold for seq scan (like the mentioned
auto_explain.log_seqscan = XXX) in auto_explain would be the best approach, as
you really need the plan to know why a seq scan was chosen and if it was a
reasonable choice or not.

I would like to write this for core and for auto_explain too. I have been in situations where I could not use auto_explain either, although this extension is widely used and the risk of using it is low.

Once I detect the query, I can run the EXPLAIN manually. But sure, I think it could work well inside auto_explain too.

I tried to write the patch. The core part is harder work than I expected, because there is no good information about whether a query is a top-level query or not, so it is not easy to decide whether we should log the info.

On the other hand, the modification of auto_explain is easy.

I am attaching the patch.

Regards

Pavel


Regards

Pavel

Attachment

Re: proposal - log_full_scan

From: Daniel Gustafsson
Looking at this I like the idea in principle, but I'm not convinced that
auto_explain is the right tool for this.  auto_explain is for identifying slow
queries, and what you are proposing is to identify queries with a certain
"shape" (for lack of a better term) even if they aren't slow as per the
log_min_duration setting.  If log_min_duration is deemed too crude due to query
volume then sample_rate is the tool.  If sample_rate is also discarded, then
pg_stat_statements seems a better option.
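
For instance, a first pass over pg_stat_statements could already point at the heaviest statements by rows returned (column names as in recent releases; mean_exec_time was called mean_time before v13):

SELECT queryid, calls, rows, mean_exec_time, query
FROM pg_stat_statements
ORDER BY rows DESC
LIMIT 20;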

Also, why just sequential scans (apart from it being this specific usecase)?
If the idea is to track aspects of execution which are deemed slow, then
tracking for example spills etc would be just as valid.  Do you have thoughts
on that?

That being said, a few comments on the patch:

-    (auto_explain_log_min_duration >= 0 && \
+    ((auto_explain_log_min_duration >= 0 || auto_explain_log_seqscan != -1) && \
Is there a reason to not follow the existing code and check for >= 0?

+    DefineCustomIntVariable("auto_explain.log_seqscan",
It's only a matter of time before another node is proposed for logging, and
then we'll be stuck adding log_XXXnode GUCs.  Is there a more future-proof way
to do this?

+    "Sets the minimum tuples produced by sequantial scans which plans will be logged",
s/sequantial/sequential/

-    es->analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
+    es->analyze = (queryDesc->instrument_options && (auto_explain_log_analyze || auto_explain_log_seqscan != -1));
Turning on ANALYZE when log_analyze isn't set to True is a no-no IMO.

+ * Colllect relations where log_seqscan limit was exceeded
s/Colllect/Collect/

+    if (*relnames.data != '\0')
+        appendStringInfoString(&relnames, ",");
This should use appendStringInfoChar instead.

+    (errmsg("duration: %.3f ms, over limit seqscans: %s, plan:\n%s",
The "over limit" part is superfluous since it otherwise wouldn't be logged.  If
we're prefixing something wouldn't it be more helpful to include the limit,
like: "seqscans >= %d tuples returned:".  I'm not a fan of "seqscans" but
spelling it out is also quite verbose and this is grep-able.

Documentation and tests are also missing

--
Daniel Gustafsson        https://vmware.com/




Re: proposal - log_full_scan

From: Pavel Stehule
Hi

On Tue, Jul 6, 2021 at 16:07, Daniel Gustafsson <daniel@yesql.se> wrote:
Looking at this I like the idea in principle, but I'm not convinced that
auto_explain is the right tool for this.  auto_explain is for identifying slow
queries, and what you are proposing is to identify queries with a certain
"shape" (for lack of a better term) even if they aren't slow as per the
log_min_duration setting.  If log_min_duration is deemed to crude due to query
volume then sample_rate is the tool.  If sample_rate is also discarded, then
pg_stat_statements seems a better option.

I don't think pg_stat_statements can be used: a) it doesn't look at the execution plan, so this feature could have a big overhead compared to the current pg_stat_statements, which works only with the AST; b) pg_stat_statements has one entry per AST - but here the problem is at the execution-plan level, which is outside the scope of pg_stat_statements.

Also, why just sequential scans (apart from it being this specific usecase)?
If the idea is to track aspects of execution which are deemed slow, then
tracking for example spills etc would be just as valid.  Do you have thoughts
on that?

Yes, I thought about it more, and sometimes bitmap index scans are problematic too, and index scans in nested loops can also be a problem.

For my last customer I had to detect queries with a large bitmap index scan. I can do it with a combination of pg_stat_statements and log checking, but that kind of work is not very friendly.

My current idea is to have some extension that can be configured to watch generally specified executor nodes.

Sometimes I want to say: I need to know all queries that do a seq scan over table x where the number of tuples processed is > N. In other cases it can be interesting to know the queries that use index x for a bitmap index scan.
 

That being said, a few comments on the patch:

-       (auto_explain_log_min_duration >= 0 && \
+       ((auto_explain_log_min_duration >= 0 || auto_explain_log_seqscan != -1) && \
Is there a reason to not follow the existing code and check for >= 0?

+       DefineCustomIntVariable("auto_explain.log_seqscan",
It's only a matter of time before another node is proposed for logging, and
then we'll be stuck adding log_XXXnode GUCs.  Is there a more future-proof way
to do this?

+       "Sets the minimum tuples produced by sequantial scans which plans will be logged",
s/sequantial/sequential/

-       es->analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
+       es->analyze = (queryDesc->instrument_options && (auto_explain_log_analyze || auto_explain_log_seqscan != -1));
Turning on ANALYZE when log_analyze isn't set to True is a no-no IMO.

+ * Colllect relations where log_seqscan limit was exceeded
s/Colllect/Collect/

+       if (*relnames.data != '\0')
+               appendStringInfoString(&relnames, ",");
This should use appendStringInfoChar instead.

+       (errmsg("duration: %.3f ms, over limit seqscans: %s, plan:\n%s",
The "over limit" part is superfluous since it otherwise wouldn't be logged.  If
we're prefixing something wouldn't it be more helpful to include the limit,
like: "seqscans >= %d tuples returned:".  I'm not a fan of "seqscans" but
spelling it out is also quite verbose and this is grep-able.

Documentation and tests are also missing

Unfortunately, this idea is not well prepared. My patch was a proof of concept, but I think it is not a good starting point. Maybe it needs some tracing API at the executor level and some tool like "perf top", but for the executor. Post-execution analysis is not a good direction: it has big overhead, and mainly it is not friendly in critical situations. I need some handy tool like perf, but for executor nodes. I don't know how to do it effectively.

Thank you for your review and for your time, but I think it is better to remove this patch from the commitfest. I have no idea how to design this feature well :-/

Regards

Pavel


 

--
Daniel Gustafsson               https://vmware.com/

Re: proposal - log_full_scan

From: Daniel Gustafsson
> On 6 Jul 2021, at 18:14, Pavel Stehule <pavel.stehule@gmail.com> wrote:

> I thought about it more, and sometimes bitmap index scans are problematic too, index scans in nested loops can be a
> problem too.

Right.  Depending on the circumstances, pretty much anything in a plan can be
something deemed problematic in some production setting.

> Unfortunately, this idea is not well prepared. My patch was a proof concept - but I think so it is not a good start
> point. Maybe it needs some tracing API on executor level and some tool like "perf top", but for executor. Post execution
> analysis is not a good direction with big overhead, and mainly it is not friendly in critical situations. I need some
> handy tool like perf, but for executor nodes. I don't know how to do it effectively.

These are hot codepaths, so adding tracing instrumentation which collects enough
information to be useful, and which can be drained fast enough not to become a
resource problem, is tricky.

> Thank you for your review and for your time, but I think it is better to remove this patch from commit fest. I have
> no idea how to design this feature well :-/

No worries, I hope we see an updated approach at some time.  In the meantime
I'm marking this patch Returned with Feedback.

--
Daniel Gustafsson        https://vmware.com/