• kubica@fedia.io
    4 days ago

    I don’t know where I read it, but it said something like: for the models to hold that much information, what they do is basically similar to a compression algorithm.

    Logically, if the compression is lossy, then it’s mostly luck whether the output equals the original. Sometimes it will tip one way and sometimes the other.
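    A toy sketch of that point, using made-up numbers and a hypothetical quantization scheme (not how any real codec or LLM works): round each value to a coarse grid, reconstruct, and the round trip rarely matches the original, with errors tipping up or down depending on the value.

    ```python
    def compress(values, step=0.5):
        # Lossy step: keep only the nearest multiple of `step` for each value.
        return [round(v / step) for v in values]

    def decompress(codes, step=0.5):
        # Reconstruct an approximation of the original values.
        return [c * step for c in codes]

    original = [0.9, 1.1, 2.3, 3.74]
    restored = decompress(compress(original))
    print(restored)               # [1.0, 1.0, 2.5, 3.5] — some shifted up, some down
    print(restored == original)   # False: the round trip is not exact
    ```

    Note that 0.9 and 2.3 are pushed up while 3.74 is pulled down, and two distinct inputs (0.9 and 1.1) collapse to the same output, which is exactly the information loss being described.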

    • arthur@lemmy.zip
      4 days ago

      With the caveat that, in this analogy, there is no LLM where the “compression” is lossless.