Ouch.

    • serenissi@lemmy.world · 2 hours ago

      LLMs are inherently probabilistic. A response can’t be reliably reproduced even with the exact same tokens, the exact same model, and the exact same parameters.
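      A toy sketch of why (made-up logits and a hand-rolled softmax, not any real model’s API): with nonzero temperature the decoder samples from a probability distribution, so repeated runs over identical inputs can pick different tokens unless the provider also pins the RNG seed, which most don’t.

      ```python
      # Toy sketch: temperature sampling over hypothetical next-token logits.
      import math
      import random

      def sample_token(logits, temperature=1.0):
          """Pick a token index from logits via temperature-scaled softmax."""
          if temperature == 0:
              # Greedy decoding: deterministic, always the argmax.
              return max(range(len(logits)), key=lambda i: logits[i])
          scaled = [l / temperature for l in logits]
          m = max(scaled)                      # subtract max for numeric stability
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          probs = [e / total for e in exps]
          return random.choices(range(len(logits)), weights=probs)[0]

      logits = [2.0, 1.8, 0.5]  # hypothetical scores for three candidate tokens
      print([sample_token(logits, 0.8) for _ in range(10)])  # varies run to run
      print([sample_token(logits, 0.0) for _ in range(10)])  # always the same
      ```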

    • TachyonTele@lemm.ee · 8 hours ago

      Maybe it being 16 questions in had an effect on it? I don’t know how much of its “memory” it keeps for one person/conversation.
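      If it works like most chat systems, the whole history has to fit in a fixed context window, so the earliest turns eventually get truncated. A toy sketch (assumed token budget and a crude word-count “tokenizer”, not any real product’s behavior):

      ```python
      # Toy sketch: older chat turns fall out once the context budget is exceeded.

      def fit_to_context(turns, max_tokens=4096):
          """Keep the most recent turns whose combined token count fits the budget."""
          kept, used = [], 0
          for turn in reversed(turns):      # walk newest-first
              cost = len(turn.split())      # crude stand-in for real tokenization
              if used + cost > max_tokens:
                  break                     # everything older than this is dropped
              kept.append(turn)
              used += cost
          return list(reversed(kept))       # restore chronological order

      history = [f"Q{i}: ... A{i}: ..." for i in range(1, 17)]  # 16 Q&A turns
      visible = fit_to_context(history, max_tokens=40)
      print(visible)  # only the most recent turns survive; Q1 is long gone
      ```

      With 16 Q&A turns and a small budget, the first questions silently drop out of what the model actually sees.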