Hi,
As someone who has taken days to publish each blog post, going through many rounds of reviews, corrections, and edits to make it as informative and polished as possible, I find it somewhat frustrating
to see AI-generated content, especially when it misleads readers.
However, I do agree with Lawrence that it is impossible to prove whether something was written by AI or a human.
AI detection itself can make mistakes and might wrongly flag a blog as AI-written (and I know such detection is difficult to implement).
I see moderators spending time reviewing content and either accepting it or warning authors when it is not related to Postgres.
AI could be adopted to help score whether an article is related to Postgres and to decline unrelated submissions to the blog feed.
But it is nearly impossible to use AI, or any other strategy, to reliably identify whether content was written by AI or a human.
People may also use AI-generated images in their blogs, and those images may be meaningful for the article.
Would a rule cover only the text, or the images as well? It could get quite complicated to implement such rules.
Ultimately, humans make mistakes too, and we shouldn't discourage people by assuming that every mistake was made by AI.