Even before considering more dynamic SRFs in postgres, having a variable number of columns for a DML statement is problematic in general.
When an SQL statement is parsed/prepared, its output structure is not supposed to change unless a DDL statement intervenes. A dynamic pivot must break this promise, since a simple INSERT, UPDATE, or DELETE on the pivoted rows may cause the number of output columns to change across invocations of the same statement. That would make PREPARE essentially unusable, or at least unreliable, for such statements.
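A minimal sketch of the problem, in plain Python with no database: in a dynamic pivot the column set is derived from the data itself, so the equivalent of a simple INSERT changes the output structure of the "same" statement. The table shape (product, quarter, amount) is an illustrative assumption.

```python
def pivot_columns(rows):
    """Return the output column names a dynamic pivot would produce:
    one fixed key column plus one column per distinct category value."""
    return ["product"] + sorted({category for _, category, _ in rows})

rows = [("widget", "q1", 10), ("widget", "q2", 12)]
print(pivot_columns(rows))        # ['product', 'q1', 'q2']

rows.append(("widget", "q3", 7))  # the equivalent of a simple INSERT
print(pivot_columns(rows))        # ['product', 'q1', 'q2', 'q3']
# Same "statement", different output structure -- exactly what a
# prepared statement's fixed result description cannot accommodate.
```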
Yes. Polymorphic functions need two steps: the first step returns the structure, the second the data.
Prepared statements can be supported, but there should be a recheck that the result has the expected structure. And maybe in the future, prepared statements can be more dynamic and able to replan when necessary.
I think the query parser is also not supposed to read data outside the catalogs to determine the output structure of a query. That promise, too, would have to be broken by dynamic pivots implemented as a single-pass DML query.
On the other hand, dynamic pivots can be done by generating the SQL dynamically and getting at the results in a second step, or by returning a result set embedded in a scalar (json). Speaking of features that don't exist but might someday, ISTM that CALL crosstab_dynamic(...) is more plausible than SELECT * FROM crosstab_dynamic(...), since CALL doesn't have the same contingencies as SELECT.
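A hedged sketch of the two-step approach: step 1 queries the distinct pivot values (e.g. SELECT DISTINCT quarter FROM sales ORDER BY 1), and step 2 generates a query whose column list is fully known before it is prepared. The table and column names (sales, product, quarter, amount) are illustrative assumptions; real code should also quote the interpolated values properly (e.g. with psycopg's sql helpers) rather than with raw f-strings.

```python
def build_pivot_sql(quarters):
    """Step 2: generate a pivot query once the column set is known.
    Uses Postgres's aggregate FILTER clause, one column per value."""
    cols = ",\n  ".join(
        f"sum(amount) FILTER (WHERE quarter = '{q}') AS \"{q}\""
        for q in quarters
    )
    return f"SELECT product,\n  {cols}\nFROM sales\nGROUP BY product"

# Pretend result of step 1 (SELECT DISTINCT quarter FROM sales):
quarters = ["q1", "q2", "q3"]
print(build_pivot_sql(quarters))
```

Because the generated statement has a fixed output structure, it can be prepared and described normally; only the generating step needs to look at the data.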
Yes, that is the Sybase way and it can be useful. But you cannot do any further work with the returned result.
For users it isn't too important whether they have to use polymorphic functions, dynamic record sets, or a PIVOT clause. What matters is performance and ergonomics. Polymorphic functions can be used for more tasks than pivoting (reading some external sources, ...). Dynamic record sets or multiple record sets can be nice features too; I like multiple record sets for reporting. And Oracle's PIVOT clause is just handy and doesn't require programming.