

I don’t get why they’d be called hallucinations, though. What LMs do is predict the next word(s). If they haven’t been trained on enough data, the prediction confidence will be low. Their whole output is a hallucination based on speculation. If they genuinely don’t know what word comes next, they’ll start spewing nonsense. Though I guess that would only happen if they were forced to generate text indefinitely… at some point they’d cease making (human) sense.
LMs aren’t smart, they don’t think, and they’re not really AI. There aren’t errors, and there aren’t hallucinations; this is by design.
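To make the point concrete, here’s a toy sketch (not how any particular model is built; the vocabulary, logits, and sample size are made up) showing that next-word prediction is just sampling from a probability distribution. When the distribution is peaked the output looks “right”; when it’s flat, the continuations get arbitrary, but the mechanism is identical either way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny vocabulary and two made-up logit vectors for the next token.
vocab = ["cat", "sat", "on", "the", "mat", "xylophone"]
confident_logits = np.array([0.2, 5.0, 0.1, 0.3, 0.2, -2.0])  # pattern seen often in training
uncertain_logits = np.array([0.4, 0.5, 0.3, 0.4, 0.5, 0.4])   # model has little idea what comes next

def next_token_distribution(logits):
    """Softmax: 'confidence' is just how peaked this distribution is."""
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

for name, logits in [("confident", confident_logits), ("uncertain", uncertain_logits)]:
    probs = next_token_distribution(logits)
    samples = rng.choice(vocab, size=10, p=probs)
    print(name, dict(zip(vocab, probs.round(2))))
    print("  sampled continuations:", list(samples))
```

Running it, the confident case samples “sat” almost every time, while the uncertain case wanders across the vocabulary, which is the “speculation” described above, just with the randomness made visible.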
🤖 Reality Check #1 19/20
🔥 Perfect Streak: 18
I can spot AI images 95% of the time. 🧠
Ahh, almost! I did have to zoom into most photos though, so a lot of them could pass at a quick glance.