In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.
… yes? This has been known since the beginning. Is it news because someone finally convinced Sam Altman?
Neural networks are universal function approximators. "The approximation is wrong sometimes!" is… what approximations are. The chatbot is not an oracle. It's still bizarrely flexible, for a next-word-guesser, and it's right often enough for these fuckups to become a problem.
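A toy sketch of that point (hypothetical mini-corpus, purely illustrative, nothing like a real LLM): a next-word guesser that always picks the statistically most likely follower. It's right often, and it's still just an estimate.

```python
from collections import Counter, defaultdict

# Hypothetical training corpus -- invented for illustration only.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def guess(word):
    # Return the most frequent follower: a statistical best guess, not an oracle.
    return followers[word].most_common(1)[0][0]

print(guess("the"))  # "cat" -- the most common follower of "the"
print(guess("sat"))  # "on"  -- right every single time in this corpus
print(guess("cat"))  # "sat" -- but the cat also "ate"; the estimate drops that
```

The point isn't the model, it's the last line: the guesser is behaving exactly as designed and still confidently emits something that's only sometimes true.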
What bugs me are the people going ‘see, it’s not reasoning.’ As if reasoning means you’re never wrong. Humans never misremember, or confidently espouse total nonsense. And we definitely understand brain chemistry and neural networks well enough to say none of these bajillion recurrent operations constitute the process of thinking.
Consciousness can only be explained in terms of unconscious events. Nothing else would be an explanation. So there is some sequence of operations which constitutes a thought. Computer science lets people do math with marbles, or in trinary, or on paper, so it doesn’t matter how exactly that work gets done.
Though it’s probably not happening here. LLMs are the wrong approach.