I think it won't meet with favor if there are cases that require manual intervention for starting the server. That was the main argument for getting rid of the exclusive backup API, which had a similar problem.
That would only happen in the rare case where the source database crashes while one or more backups are in progress. And restoring a backup already requires manual intervention with signal files today.
I understand the desire for the live production server not to need intervention to recover from a crash, but I can't help feeling that this requirement, combined with the goal of making recovery as non-interventionist as possible, is incompatible. I haven't given it a great amount of thought, though, because the limited scope of the situation seemed an acceptable cost for keeping the process straightforward (i.e., starting up an instance from the backup requires a signal file that dictates the kind of recovery to perform). We can either make the live backup contents invalid until something that happens after pg_backup_stop makes them valid, or we can make the system currently being backed up invalid for as long as it is in backup mode. The latter seemed easier and doesn't depend on actions outside our control.
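Roughly, the scheme I'm describing looks like the following Python pseudocode. This is only an illustration of the states, not server code: the marker file name, the helper names, and the data directory path are made up; only the pg_backup_start/pg_backup_stop boundary and the recovery.signal file name come from what's being discussed.

    from pathlib import Path

    DATADIR = Path("/var/lib/postgresql/data")     # assumed data directory
    IN_BACKUP_FLAG = DATADIR / "in_backup"         # hypothetical marker file
    RECOVERY_SIGNAL = DATADIR / "recovery.signal"  # existing signal file name

    def mark_backup_started():
        """Between pg_backup_start and pg_backup_stop the live cluster carries
        the marker; crashing inside this window means it cannot start on its own."""
        IN_BACKUP_FLAG.touch()

    def mark_backup_stopped():
        """Removing the marker makes the live cluster valid again; the copied
        backup only becomes startable once a signal file is written into it."""
        IN_BACKUP_FLAG.unlink(missing_ok=True)

    def startup_action():
        """What a starting instance would do, based on which files it finds."""
        if RECOVERY_SIGNAL.exists():
            return "perform the kind of recovery the signal file dictates"
        if IN_BACKUP_FLAG.exists():
            return "refuse to start: crashed while a backup was in progress"
        return "normal startup / crash recovery"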
Also, how do you envision two concurrent backups with your setup?
I'm not sure I understand the question. If ensuring that the "in backup" flag is turned on when the first backup starts and turned off when the last backup ends isn't sufficient for concurrent usage, then I don't know what else I need to deal with. Apparently concurrent backups already work today, and aside from the per-process metadata directories (i.e., the user needs to remove all but their own process's subdirectory from pg_backup_metadata) and the state flag, I don't see how they wouldn't continue to work as-is.
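Put another way, "on when the first backup starts, off when the last one ends" is just a reference count. A minimal sketch, with made-up helper names and an in-process counter standing in for whatever the server would actually use (pg_backup_metadata and its per-process subdirectories are part of the proposal under discussion, not something I'm claiming exists today):

    import threading

    class BackupState:
        """Illustrative only: tracks concurrent backups with a set of pids.
        The "in backup" flag goes on with the first backup and off with the last."""

        def __init__(self):
            self._lock = threading.Lock()
            self._running = set()      # pids of backups currently in progress
            self.in_backup = False     # the cluster-wide state flag

        def backup_start(self, pid):
            with self._lock:
                if not self._running:
                    self.in_backup = True   # first backup turns the flag on
                self._running.add(pid)      # this pid also gets pg_backup_metadata/<pid>/

        def backup_stop(self, pid):
            with self._lock:
                self._running.discard(pid)
                if not self._running:
                    self.in_backup = False  # last backup turns it off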