In a landmark study, OpenAI researchers argue that large language models will always produce plausible but false outputs, even with perfect training data, due to fundamental statistical and computational limits.
While technically correct, there is a steep hand-wave gradient between “just” and “near-impossible.” Neural networks can presumably turn an accelerometer into a damn good position tracker. You can try filtering and double-integrating that data, using human code. Many humans have. Most wind up disappointed. None of our clever theories compete with beating on the machine until it makes better guesses.
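To make the drift problem concrete, here’s a minimal sketch (plain numpy, made-up numbers) of why naive double integration disappoints: even a tiny constant bias in the accelerometer grows quadratically in the position estimate.

```python
# Minimal sketch: why double-integrating accelerometer data drifts.
# The bias value is hypothetical; real IMUs also add noise, scale
# error, and axis misalignment on top of it.
import numpy as np

dt = 0.01                       # 100 Hz sample rate
t = np.arange(0, 60, dt)        # one minute of samples
true_accel = np.zeros_like(t)   # the device is actually sitting still
bias = 0.01                     # a tiny 0.01 m/s^2 sensor offset
measured = true_accel + bias

velocity = np.cumsum(measured) * dt   # first integration
position = np.cumsum(velocity) * dt   # second integration

# Bias integrates twice: error grows like 0.5 * bias * t^2.
print(f"position error after 60 s: {position[-1]:.1f} m")   # ~18 m
```

That quadratic blow-up is the gap being pointed at here: hand-written filters fight the drift analytically, while a trained network just learns to guess better from data.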
It’s like, ‘as soon as humans can photosynthesize, the food industry is cooked.’
If we knew what neural networks were doing, we wouldn’t need them.
But…we do know what they are doing…AI is based entirely on low-level calculations that are well defined. And just because we haven’t found an algorithm for your example yet doesn’t mean one doesn’t exist.
Knowing it exists doesn’t mean you’ll ever find it.
Meanwhile: we can come pretty close, immediately, using data alone. Listing all the math a program performs doesn’t mean you know what it’s doing. Decompiling human-authored programs is hard enough. Putting words to the algorithms wrenched out by backpropagation is a research project unto itself.
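A toy illustration of that gap, with weights I made up on the spot: every operation below is completely well defined, and the listing still tells you nothing about what function those particular numbers encode.

```python
# Every instruction here is well defined; none of them are
# self-describing. The weights are invented for illustration;
# a real model has billions of them.
import numpy as np

W1 = np.array([[0.7, -1.2],
               [0.3,  0.9]])    # what concept do these rows encode?
W2 = np.array([[-0.5, 1.1]])    # no label in the math says

def layer(x):
    h = np.maximum(0.0, W1 @ x)   # matmul + ReLU: trivially "understood"
    return W2 @ h                 # yet the algorithm they jointly
                                  # implement has no name attached

print(layer(np.array([1.0, 2.0])))   # [2.31]; so what did it compute?
```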
I really don’t know where you’re coming from with this…I took classes on AI that went into detail, and we even built our own functional neural networks of different varieties…and I doubt we were the most knowledgeable about this in university. This tech isn’t some mystery. If we knew how it worked well enough to build one from nothing but a working IDE, AI engineers must know pretty damn well what it does…
Insisting that someone could figure it out does not mean anyone has.
Twenty gigabytes of linear algebra is a whole fucking lot of stuff going on. Creating it by letting the computer train is orders of magnitude easier than picking it apart to say how it works. Sure - you can track individual instructions, all umpteen billion of them. Sure - you can describe broad sections of observed behavior. But if any programmer today tried recreating that functionality, from scratch, they would fail.
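For a sense of scale, the back-of-envelope arithmetic behind “twenty gigabytes” (assuming 2-byte weights; the exact format varies by model):

```python
# Back-of-envelope only: assumes fp16/bf16 weights at 2 bytes each;
# quantized models pack more parameters into the same space.
model_bytes = 20 * 10**9
params = model_bytes // 2
print(f"{params:,} parameters")            # 10,000,000,000

seconds_per_year = 3600 * 24 * 365
years = params / seconds_per_year
print(f"~{years:.0f} years to inspect one per second")   # ~317 years
```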
Absolutely nobody has looked at an LLM, gone ‘ah-ha, so that’s it,’ and banged out a neat little C alternative. Lack of demand cannot be why.