• fodor@lemmy.zip · 14 points · 2 days ago

    You call them hallucinations, the rest of us call them errors. But you don’t want to call them errors because then someone might inconveniently ask you what the error rate is, and you definitely can’t have that.

    • lime!@feddit.nu · 9 points · 1 day ago

      they’re not errors either, because that implies they’re unintended. hallucinations are the program working as designed. they are more like… consequences.

  • PieMePlenty@lemmy.world · 12 points · 2 days ago

    I don’t get why they’d be called hallucinations, though. What LMs do is predict the next word(s). If they haven’t trained on enough data, the prediction confidence will be low. Their whole output is a hallucination based on speculation. If they actually don’t know the next word order, they’ll start spewing nonsense. Though I guess that would only happen if they were forced to generate text indefinitely… at some point they’d cease making (human) sense.

    LMs aren’t smart, they don’t think, they’re not really AI. There aren’t errors, there aren’t hallucinations, this is by design.
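
    A minimal sketch of the “predict the next word(s)” point above, using a hypothetical toy word-count model rather than a real LM: the program emits something either way, whether its confidence is high or low.

    ```python
    # Toy "model" (invented counts, not a real LM): which word followed
    # "the cat sat on the" in some imaginary training data.
    import random

    next_word_counts = {"mat": 40, "floor": 25, "roof": 5, "moon": 1}

    def predict_next(counts):
        """Return the most likely next word and its (possibly low) confidence."""
        total = sum(counts.values())
        word = max(counts, key=counts.get)
        return word, counts[word] / total

    word, confidence = predict_next(next_word_counts)
    print(word, round(confidence, 2))   # "mat", 0.56 -- an answer comes out regardless

    # Sampling instead of taking the maximum: occasionally "moon" comes out,
    # asserted just as fluently -- the "whole output is a hallucination" point.
    words, weights = list(next_word_counts), list(next_word_counts.values())
    print(random.choices(words, weights=weights, k=5))
    ```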

  • Technus@lemmy.zip · 65 points · 2 days ago

    Beyond proving hallucinations were inevitable, the OpenAI research revealed that industry evaluation methods actively encouraged the problem. Analysis of popular benchmarks, including GPQA, MMLU-Pro, and SWE-bench, found nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.

    “We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty,” the researchers wrote.

    I just wanna say I called this out nearly a year ago: https://lemmy.zip/comment/13916070
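
    To make the incentive concrete, here is a hedged sketch of the scoring asymmetry described above (hypothetical grading functions, not the benchmarks’ actual code): under binary grading, guessing with any nonzero accuracy beats abstaining, so confident wrong answers are what get rewarded.

    ```python
    # Hypothetical scorers, for illustration only (not GPQA/MMLU-Pro/SWE-bench code).

    def binary_grade(answer: str, truth: str) -> float:
        return 1.0 if answer == truth else 0.0    # "I don't know" scores 0, same as wrong

    def abstention_aware_grade(answer: str, truth: str, wrong_penalty: float = 1.0) -> float:
        if answer == "I don't know":
            return 0.0                                       # abstaining is neutral
        return 1.0 if answer == truth else -wrong_penalty    # confident errors cost something

    # Under binary grading, admitting uncertainty is graded exactly like being wrong:
    print(binary_grade("I don't know", "Vienna"), binary_grade("Paris", "Vienna"))   # 0.0 0.0

    # Expected score for a model that is only 30% sure of the right answer:
    p = 0.3
    print("binary, guess:", p * 1.0)                      #  0.3 -> guessing always wins
    print("aware,  guess:", p * 1.0 + (1 - p) * -1.0)     # -0.4 -> abstaining (0.0) wins
    ```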

    • MelodiousFunk@slrpnk.net · 34 points · 2 days ago

      nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.

      This is how we treat people, too. I can’t count the number of times I’ve heard IT staff spouting off confident nonsense and getting congratulated for it. My old coworker turned it into several promotions because the people he was impressing with his bullshit were so far removed from day to day operations that any slip-ups could be easily blame shifted to others. What mattered was that he sounded confident despite knowing jack about shit.

    • Rhaedas@fedia.io · 11 points · 2 days ago

      I’d say extremely complex autocomplete, not glorified, but the point still stands that using probability to find accuracy is always going to deviate eventually. The tactic now isn’t to try other approaches; they’ve come too far and have too much invested. Instead they keep stacking more and more techniques to try and steer and rein in this deviation. Difficult when in the end there isn’t anything “thinking” at any point.
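
      One way to see “always going to deviate eventually”: if each sampled token is on track with probability p, the chance a long generation stays on track decays exponentially. A rough back-of-envelope sketch (the 0.99 per-token figure is invented for illustration):

      ```python
      # Per-token reliability compounds, so even a very good next-word guesser
      # eventually drifts. The 0.99 figure is made up for illustration.
      p_token = 0.99
      for n_tokens in (10, 100, 1000):
          p_all_ok = p_token ** n_tokens
          print(f"{n_tokens:5d} tokens: P(no slip anywhere) ~ {p_all_ok:.4g}")
      # 10 -> 0.9044, 100 -> 0.366, 1000 -> 4.3e-05
      ```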

      • Not a newt@piefed.ca · 11 points · 2 days ago

        Instead they keep stacking more and more techniques to try and steer and rein in this deviation.

        I hate how the tech bros immediately say “this can be solved with an MCP server.” Bitch, if the only thing that keeps the LLM from giving me wrong answers is the MCP server, then said server is the one that’s actually producing the answers I need, and the LLM is just lipstick on a pig.

      • 87Six@lemmy.zip · 5 points · 2 days ago

        AI is, and always will be, just a temporary solution for problems we can’t yet put into an algorithm. As soon as an algorithm for an issue comes out, AI is done for. But figuring out complex algorithms for near-impossible problems is not as impressive to investors…

        • mindbleach@sh.itjust.works · 1 point · 1 day ago

          While technically correct, there is a steep hand-wave gradient between “just” and “near-impossible.” Neural networks can presumably turn an accelerometer into a damn good position tracker. You can try filtering and double-integrating that data, using human code. Many humans have. Most wind up disappointed. None of our clever theories compete with beating the machine until it makes better guesses.

          It’s like, ‘as soon as humans can photosynthesize, the food industry is cooked.’

          If we knew what neural networks were doing, we wouldn’t need them.
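
          For the accelerometer example, the “human code” route is dead reckoning: integrate acceleration to velocity, then velocity to position, and watch a tiny uncorrected bias blow up quadratically. A minimal sketch of why the hand-written version disappoints (bias and timing numbers are invented):

          ```python
          # Naive dead reckoning: double-integrate accelerometer samples into position.
          # A small constant bias (0.05 m/s^2, invented) grows roughly as t^2.
          dt = 0.01        # 100 Hz samples
          bias = 0.05      # m/s^2 of uncorrected sensor bias
          velocity = 0.0
          position = 0.0
          for _ in range(60 * 100):        # one minute of the device sitting still
              accel = 0.0 + bias           # true acceleration is zero
              velocity += accel * dt       # first integration
              position += velocity * dt    # second integration
          print(f"drift after 60 s: {position:.1f} m")   # ~90 m for a stationary device
          ```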

          • 87Six@lemmy.zip · 1 point · 1 day ago

            But… we do know what they are doing. AI is based completely on low-level calculations that are well defined. And just because we haven’t found an algorithm for your example yet doesn’t mean one doesn’t exist.
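
            The “well defined low-level calculations” part is easy to illustrate; what a single layer computes is just fully specified arithmetic. A toy sketch with invented numbers (no real model):

            ```python
            # One dense layer: weights times inputs plus bias, then a nonlinearity.
            import math

            def layer(inputs, weights, biases):
                """Each output is a weighted sum plus bias, squashed by a sigmoid."""
                outputs = []
                for w_row, b in zip(weights, biases):
                    z = sum(w * x for w, x in zip(w_row, inputs)) + b
                    outputs.append(1 / (1 + math.exp(-z)))
                return outputs

            # Toy numbers; in a real model there are billions of these weights.
            print(layer([0.2, -1.0], [[0.5, 0.3], [-0.7, 0.9]], [0.1, 0.0]))
            ```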

            • mindbleach@sh.itjust.works · 1 point · 20 hours ago

              Knowing it exists doesn’t mean you’ll ever find it.

              Meanwhile: we can come pretty close, immediately, using data alone. Listing all the math a program performs doesn’t mean you know what it’s doing. Decompiling human-authored programs is hard enough. Putting words to the algorithms wrenched out by backpropagation is a research project unto itself.

              • 87Six@lemmy.zip · 1 point · 16 hours ago

                I really don’t know where you’re coming from with this… I took classes on AI that went into detail, and we even made our own functional neural networks of different varieties… and I doubt we in university are the most knowledgeable about this. This tech isn’t some mystery. If we knew how it worked well enough to build one from nothing but a working IDE, AI engineers must know pretty damn well what it does…

                • mindbleach@sh.itjust.works · 1 point · 4 hours ago

                  Insisting that someone could figure it out does not mean anyone has.

                  Twenty gigabytes of linear algebra is a whole fucking lot of stuff going on. Creating it by letting the computer train is orders of magnitude easier than picking it apart to say how it works. Sure - you can track individual instructions, all umpteen billion of them. Sure - you can describe broad sections of observed behavior. But if any programmer today tried recreating that functionality, from scratch, they would fail.

                  Absolutely nobody has looked at an LLM, gone ‘ah-ha, so that’s it,’ and banged out a neat little C alternative. Lack of demand cannot be why.
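
                  For scale, a bit of hedged arithmetic on “twenty gigabytes of linear algebra” (the 16-bit-per-weight assumption is mine, not the comment’s):

                  ```python
                  # Illustrative arithmetic only: how many parameters fit in ~20 GiB of weights?
                  bytes_total = 20 * 1024**3        # ~20 GiB
                  bytes_per_weight = 2              # assuming 16-bit parameters
                  params = bytes_total // bytes_per_weight
                  print(f"~{params / 1e9:.0f} billion parameters")   # ~11 billion
                  ```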

    • chicken@lemmy.dbzer0.com · 6 points · 2 days ago

      I get why they would do that, though. I remember testing out LLMs before they had the extra reinforcement learning training, and half of what they did seemed to be coming up with excuses not to attempt difficult responses, such as pretending to be an email footer, saying it would be done later, or impersonating you.

      An LLM in its natural state doesn’t really want to answer our questions, so they tell it the same thing they tell students: always try to answer every question, no matter what.

    • misk@piefed.social (OP) · 6 points · 2 days ago

      My guess is they know the jig is up and they’re establishing a timeline for the future lawsuits.

      “Your honour, we didn’t mislead the investors because we only learned of this in September 2025.”

  • Guntrigger@sopuli.xyz · 41 points · 2 days ago

    One of these days, the world will no longer reward bullshitters, human or AI. And society will benefit greatly.

    • SapphironZA@sh.itjust.works · 20 points · 2 days ago

      The Lion was THIS big and kept me in that tree all day. And that is why I did not bring back any prey.

      Ignore the smell of fermented fruit on my breath.

    • essell@lemmy.world · 4 points · 2 days ago

      No it won’t

      People talk nonsense a lot.

      Both because they’re lying and because they believe nonsense that’ll never happen.

      Your comment is itself evidence that your comment is wrong, but I don’t have enough evidence to tell whether you know that or not.

      • MajorasTerribleFate@lemmy.zip · 1 point · 2 days ago

        One interesting consequence of the rise of AI is that, as fools place it in higher and higher positions of information parsing and decision-making, it will be the AI that marketers have to bullshit, and depending on how decent that AI ends up being, this could be quite difficult.

  • BombOmOm@lemmy.world · 14 points · 2 days ago

    A hallucination is something that disagrees with your active inputs (ears, eyes, etc.). AIs don’t have these active inputs; all they have is the human equivalent of memories. Everything they draw up is a hallucination, literally all of it. It’s simply coincidence when a hallucination matches reality.

    Is it really surprising that the thing that can only create hallucinations is often wrong? That the thing that can only create hallucinations will continue to be wrong on a regular basis in the future as well?

    • mindbleach@sh.itjust.works · 3 points · 1 day ago

      My guy, Microsoft Encarta 97 doesn’t have senses either, and its recollection of the capital of Austria is neither coincidence nor hallucination.

  • mindbleach@sh.itjust.works · 2 points · 1 day ago

    … yes? This has been known since the beginning. Is it news because someone finally convinced Sam Altman?

    Neural networks are universal estimators. “The estimate is wrong sometimes!*” is… what estimates are. The chatbot is not an oracle. It’s still bizarrely flexible, for a next-word-guesser, and it’s right often enough for these fuckups to become a problem.

    What bugs me are the people going ‘see, it’s not reasoning.’ As if reasoning means you’re never wrong. Humans never misremember, or confidently espouse total nonsense. And we definitely understand brain chemistry and neural networks well enough to say none of these bajillion recurrent operations constitute the process of thinking.

    Consciousness can only be explained in terms of unconscious events. Nothing else would be an explanation. So there is some sequence of operations which constitutes a thought. Computer science lets people do math with marbles, or in trinary, or on paper, so it doesn’t matter how exactly that work gets done.

    Though it’s probably not happening here. LLMs are the wrong approach.

  • kubica@fedia.io · 9 points · 2 days ago

    I don’t know where I read it, but it said something like: to hold that much information inside the models, they basically work like a compression algorithm.

    Logically, if the compression is lossy, then it’s mostly luck whether the output equals the original. Sometimes it will tip one way and sometimes the other.
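
    A quick sketch of that lossy-compression intuition (toy rounding, nothing to do with actual LLM internals): once information is thrown away, reconstruction only sometimes lands back on the original.

    ```python
    # Toy lossy "compression": round values to one decimal, then "reconstruct".
    # Some inputs come back exactly, most don't -- which is the analogy's point.
    data = [3.14159, 2.0, 0.333333, 1.5, 0.125]
    compressed = [round(x, 1) for x in data]   # information is discarded here
    for original, restored in zip(data, compressed):
        print(original, "->", restored, "(exact)" if original == restored else "(lossy)")
    ```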

    • arthur@lemmy.zip · 6 points · 2 days ago

      With the caveat that, in this analogy, there is no LLM where the “compression” is lossless.