Additionally, if a query that was working fine suddenly switches to a suboptimal plan because of missing statistics, is there a hash value column in any performance view, tied to the query ID, that we could use to compare the past and current plans and identify and fix such issues quickly?
You can use auto_explain; nothing else tracks things at that fine a level. You can use pg_stat_statements to track the average and maximum time for each query. Save its contents and reset it periodically to make it more useful.
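As a rough sketch of that save-and-reset cycle (assuming PostgreSQL 13 or later, where the columns are named mean_exec_time/max_exec_time; older releases call them mean_time/max_time), something like this could be run on a schedule, with the output stored somewhere so the queryid lets you compare the same statement across snapshots:

    -- assumes pg_stat_statements is already listed in shared_preload_libraries
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- snapshot the statements with the worst average time
    SELECT queryid,
           calls,
           mean_exec_time,
           max_exec_time,
           left(query, 60) AS query_text
    FROM pg_stat_statements
    ORDER BY mean_exec_time DESC
    LIMIT 20;

    -- after saving the snapshot, clear the counters so the next
    -- interval starts fresh
    SELECT pg_stat_statements_reset();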
We were planning to add the auto_explain extension, set auto_explain.log_min_duration to ~5 seconds, and set auto_explain.log_analyze to true, so that any query running longer than that threshold is logged with enough detail to pinpoint the exact bottleneck. Is it a good idea to enable this on a production database that is highly active? Or should we only have the extension added, enable the parameters while we are debugging a performance issue, and reset them once we are done?
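For clarity, the postgresql.conf settings we have in mind would look roughly like this (the 5-second threshold is just our current assumption):

    # rough sketch of the planned auto_explain configuration
    shared_preload_libraries = 'auto_explain'   # requires a server restart
    auto_explain.log_min_duration = '5s'        # only log statements slower than 5 seconds
    auto_explain.log_analyze = on               # include actual row counts and timings
    # note: with log_analyze on, auto_explain.log_timing defaults to on as well,
    # and that per-node timing instrumentation is applied to every statement,
    # not only the ones that end up being logged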