In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.
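For flavor, the statistical side of that limit can be sketched with a Good-Turing-style argument - my loose paraphrase, not the paper's exact theorem:

```latex
% Good-Turing-style sketch (loose paraphrase, not the paper's exact theorem).
% Out of N facts in the training data, let S be the number of singletons:
% facts that appear exactly once. Good-Turing estimates the probability
% mass of facts the model never saw at all as roughly the singleton rate:
\[
  \widehat{\Pr}[\text{unseen fact}] \;\approx\; \frac{S}{N}
\]
% On facts it effectively never saw, a model can only guess, so some
% floor of plausible-but-wrong output survives even perfect data.
```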
Insisting that someone could figure it out does not mean anyone has.
Twenty gigabytes of linear algebra is a whole fucking lot of stuff going on. Creating it by letting the computer train is orders of magnitude easier than picking it apart to say how it works. Sure - you can trace individual instructions, all umpteen billion of them. Sure - you can describe broad sections of observed behavior. But if any programmer today tried recreating that functionality from scratch, they would fail.
Absolutely nobody has looked at an LLM, gone ‘ah-ha, so that’s it,’ and banged out a neat little C alternative. Lack of demand cannot be why.
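To be fair to the point: the code that runs a model is the easy bit. Here is a toy sketch in C - all names and numbers invented for illustration - of the core operation an LLM repeats billions of times, a dense matrix-vector multiply:

```c
#include <stdio.h>
#include <stddef.h>

/* y = W x : one dense layer, the basic unit an LLM stacks thousands
 * of times over its gigabytes of learned weights. */
static void matvec(const float *W, const float *x, float *y,
                   size_t rows, size_t cols)
{
    for (size_t r = 0; r < rows; r++) {
        float acc = 0.0f;
        for (size_t c = 0; c < cols; c++)
            acc += W[r * cols + c] * x[c];
        y[r] = acc;
    }
}

int main(void)
{
    /* Toy sizes. A real model's hidden dimension is in the thousands,
     * and the weights number in the billions, not nine. */
    enum { ROWS = 3, COLS = 3 };
    float W[ROWS * COLS] = {
        /* In a real LLM these values are the hard part: learned by
         * training, never written by a person. */
         0.1f, -0.2f,  0.3f,
         0.4f,  0.5f, -0.6f,
        -0.7f,  0.8f,  0.9f
    };
    float x[COLS] = { 1.0f, 2.0f, 3.0f };
    float y[ROWS];

    matvec(W, x, y, ROWS, COLS);
    for (int r = 0; r < ROWS; r++)
        printf("y[%d] = %.3f\n", r, (double)y[r]);
    return 0;
}
```

The loop takes five minutes to write. Scale those nine floats up to tens of billions, learned rather than written, and you have the twenty gigabytes nobody can pick apart.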