My impression is that moderators and users are using a kind of "mental Bayes' Theorem" to detect AI answers. We answer questions like the following, repeatedly updating our probability:
Is the answer written in the style of a native-English-speaking professor with 10+ years professional experience? (Is the user a native-English-speaking professor with 10+ years professional experience?)
Is it written with a level of personal closeness and friendliness unexpected between users of Stack Exchange?
Is it unnecessarily verbose?
Is the answer wrong?
Can I generate a similar answer using ChatGPT?
Has it been significantly edited?
Does it contain correct citations and/or links?
If it contains code, does it also contain the code's output?
Does the author appear aware of what site they're on?
Does the author appear to know other users exist?
Is the author a new user? Or do they have a history of AI-suspected answers?
Has the author meaningfully responded to comments?
Does the answer contain typos, slang, smileys, jokes, punctuation and grammar errors, etc.?
Does it contain images, tables, markdown, etc.?
Does the author write as if they're a human (e.g. "I'm not sure, but...", "This worked for me...")?
Does its overall structure resemble that commonly generated by AI?
Does it use phrases commonly used by AI?
(Oh, and: Does the answer say e.g. "ChatGPT wrote this"?)
So if the answer says e.g. "as an alternative to @user213's answer, ...", it's unlikely to be AI-generated, and we shift our mental probability towards "human-generated"; if the answer says e.g. "As an AI language model, ...", it's unlikely to be human-generated, and we shift the probability towards "AI-generated".
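The "mental Bayes' Theorem" above can be sketched as a chain of odds-form Bayes updates. The signal names and likelihood values below are purely illustrative guesses of mine, not measured data; the point is only the mechanism of multiplying prior odds by a likelihood ratio per observed signal:

```python
# A minimal sketch of repeated Bayesian updating over answer signals.
# All likelihood values are hypothetical, for illustration only.

# For each signal: (P(signal | AI-generated), P(signal | human-written)).
SIGNALS = {
    "unnecessarily_verbose":        (0.70, 0.20),
    "contains_typos_or_slang":      (0.05, 0.60),
    "responds_to_comments":         (0.10, 0.50),
    "says_as_an_ai_language_model": (0.99, 0.001),
}

def update(prior_ai: float, signal: str, present: bool) -> float:
    """One Bayes update: posterior odds = prior odds * likelihood ratio."""
    p_ai, p_human = SIGNALS[signal]
    if not present:  # signal absent: use the complement likelihoods
        p_ai, p_human = 1.0 - p_ai, 1.0 - p_human
    prior_odds = prior_ai / (1.0 - prior_ai)
    posterior_odds = prior_odds * (p_ai / p_human)
    return posterior_odds / (1.0 + posterior_odds)

p = 0.5  # start undecided
p = update(p, "unnecessarily_verbose", True)       # verbose: evidence for AI
p = update(p, "contains_typos_or_slang", False)    # no typos: evidence for AI
p = update(p, "responds_to_comments", False)       # silent author: evidence for AI
print(f"P(AI) = {p:.3f}")
```

Naive-Bayes style, so it assumes the signals are independent (they aren't, really), but it matches the intuition: each anomaly nudges the probability, and a few strong anomalies in the same direction quickly dominate.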
Having tried detecting AI answers myself, I feel that in most cases you can reasonably deduce this way whether an answer is or isn't human-written (assuming a binary yes/no verdict), especially if there are a lot of anomalies pointing one way or the other.
This doesn't always work; there will be cases where the result is "I don't know for sure", especially for short answers where there's just not enough data.