Hallucination
When an AI model generates information that sounds plausible but is factually incorrect or fabricated. LLMs can confidently assert false claims, cite non-existent sources, or invent details. This is a major challenge for AI-generated content and a key reason human review remains essential.