That won't work very well with something like what I used to do when testing this patch, namely:
postgres=# select pid, array(select pg_cmdstatus(pid, 1, 10)) from pg_stat_activity where pid<>pg_backend_pid() \watch 1
while also running pgbench with the -C option (to create a new connection for every transaction). When a targeted backend exits before it can handle the signal, the receiving process keeps waiting forever.
no - on every timeout you have to check whether the targeted backend is still alive; if it is not, you cancel the wait. It is 100% safe.
But then you need to make this internal timeout rather short, not 1s as originally suggested.
it can be - 1 sec is the maximum, maybe 100ms is the optimum.
The statement_timeout in this case will stop the whole select, not just the individual function call. Unless you wrap the call in a plpgsql function that sets statement_timeout and catches QUERY_CANCELED, but then you won't be able to ^C the whole select. The ability to set a (short) timeout for the function itself has proven to be a really useful feature for me.
you cannot handle QUERY_CANCELED in plpgsql.
Well, you can, but it's not that useful of course:
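A minimal sketch of what I mean (the function name is mine, and pg_sleep just stands in for a long-running call such as pg_cmdstatus):

CREATE OR REPLACE FUNCTION trap_cancel() RETURNS text AS $$
BEGIN
  PERFORM pg_sleep(60);       -- cancel this from another session
  RETURN 'finished';
EXCEPTION WHEN query_canceled THEN
  RETURN 'cancel trapped';    -- reached on ^C, pg_cancel_backend(), or statement_timeout
END;
$$ LANGUAGE plpgsql;

Cancelling this from another session with pg_cancel_backend() (or hitting a statement_timeout) returns 'cancel trapped' instead of aborting - which also means ^C no longer interrupts the surrounding query, hence not that useful.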
hmm, something is wrong here - I remember from some older plpgsql that the CANCEL message was not catchable. Maybe my memory is bad. I have to recheck it.
it is ok - I didn't remember this behavior correctly. You cannot catch CANCEL (and nowadays ASSERT) in an OTHERS handler. It can be handled only if it is mentioned explicitly.
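Right: per the documentation, OTHERS matches every error type except QUERY_CANCELED and ASSERT_FAILURE. So a handler like this does not trap a cancel (again, the function name is mine):

CREATE OR REPLACE FUNCTION no_trap() RETURNS text AS $$
BEGIN
  PERFORM pg_sleep(60);
  RETURN 'finished';
EXCEPTION WHEN OTHERS THEN
  RETURN 'trapped';           -- never reached on cancel; the error escapes
END;
$$ LANGUAGE plpgsql;

Cancelling this one still aborts the whole statement with "canceling statement due to user request"; only an explicit WHEN query_canceled handler, as in the sketch above, traps it.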