Trust it the same way you'd trust a very smart colleague who reads a lot but sometimes misremembers details. Useful, often right, occasionally confidently wrong.
AI models can hallucinate, generating plausible-sounding facts, citations, statistics, or quotes that don't exist. Hallucinated output sounds just as fluent and confident as accurate output, which makes errors harder to spot than a Google search that simply returns nothing.
When to verify: Any specific fact, number, date, name, or citation that you'll act on or share publicly.
When to trust it: Writing, structure, brainstorming, summarizing content you already understand, explaining concepts, drafting communications. These are lower-risk uses, but they still warrant human review.