• Devanismyname@lemmy.ca · 21 hours ago

    It’ll just keep getting better at it over time, though. The current AI is way better than it was 5 years ago, and in 5 years it’ll be way better than it is now.

    • almost1337@lemm.ee · 20 hours ago

      That’s certainly one theory, but as we are largely out of training data, there’s not much new material to feed in for refinement. Using AI output to train future AI is just going to amplify the existing problems.

      • Devanismyname@lemmy.ca · 19 hours ago

        I mean, the proof is sitting there wearing your clothes. General intelligence exists all around us. If it can exist naturally, we can eventually build it through technology. Maybe there need to be more breakthroughs before it happens.

        • Nalivai@lemmy.world · 10 hours ago

          Everything is possible in theory. That doesn’t mean everything has happened, or is just about to happen.

          • mindbleach@sh.itjust.works · 16 hours ago

            I mean - have you followed AI news? This whole thing kicked off maybe three years ago, and now local models can render video and do half-decent reasoning.

            None of it’s perfect, but a lot of it’s fuckin’ spooky, and any form of “well it can’t do [blank]” has a half-life.

            • SaraTonin@lemm.ee · 11 hours ago

              If you follow AI news, you should know that the industry is basically out of training data, that returns on extra training diminish sharply (so more training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data - both intentionally and unintentionally - and that hallucinations and unreliability are baked into the technology.

              You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demonstrated answering, and it isn’t better than before, or than other LLMs, at solving maths problems for which it doesn’t already have the answers hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.

              The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.
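
              A minimal sketch of that “second LLM checks its outputs” loop, as I understand the claim above. `ask_model` is a hypothetical stand-in for any chat-completion call, not a real vendor API; the point is that every check and every retry is another full generation pass, which is where the multiplied energy (and cost) per answer comes from.

              ```python
              def ask_model(prompt: str) -> str:
                  """Hypothetical stand-in for a real chat-completion API call."""
                  raise NotImplementedError("wire up a provider client here")

              def answer_with_verifier(question: str, max_retries: int = 3) -> str:
                  answer = ask_model(question)  # first generation pass
                  for _ in range(max_retries):
                      verdict = ask_model(  # second model checks the output
                          f"Question: {question}\nProposed answer: {answer}\n"
                          "Reply OK if the answer is correct, otherwise explain the error."
                      )
                      if verdict.strip().startswith("OK"):
                          return answer
                      answer = ask_model(  # each retry is another full pass
                          f"{question}\nA previous attempt failed a check: {verdict}\n"
                          "Give a corrected answer."
                      )
                  return answer
              ```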

              • mindbleach@sh.itjust.works · 58 minutes ago

                We don’t need leaps and bounds from here. We’re already in science-fiction territory. Incremental improvement has silenced a wide variety of naysaying.

                And this is with LLMs - which are stupid. We didn’t design them with logic units or factoid databases. Anything they get right is an emergent property of guessing plausible words, and they get a shocking number of things right. Smaller models and faster training will encourage experimentation with better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that’ll fake its way through explaining why the answer is yes or no. If we’re only interested in the accuracy of that answer, then we’re wasting effort on the quality of the faking.

                Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between “but right now it sucks at [blank]” and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.
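
                Both of those tricks are just prompt transformations. A hypothetical sketch (the wrapper names are mine, not anyone’s real API):

                ```python
                def think_out_loud(question: str) -> str:
                    # “Telling the model to think out loud”: ask for
                    # intermediate steps before the final answer.
                    return f"{question}\nThink step by step, then state the final answer."

                def check_its_work(question: str, draft: str) -> str:
                    # “Having it check its work”: feed the draft back in for review.
                    return (
                        f"Question: {question}\nDraft answer: {draft}\n"
                        "Check the draft for mistakes, then give a final answer."
                    )
                ```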

            • Korhaka@sopuli.xyz · 14 hours ago

              Seen a few YouTube channels now that just churn out AI-generated content. Usually audio only, with a generated picture on screen. Vast amounts can be made that cheaply, and Google is going to have fun storing it all when each video only gets like 25 views. I think at some point they are going to have to delete stuff.

    • GenosseFlosse@feddit.org · 14 hours ago

      To get better, it would need better training data. However, there are always more junior devs creating bad training data than senior devs creating slightly better training data.

      • SaraTonin@lemm.ee · 11 hours ago

        And now LLMs are being trained on data generated by LLMs. No possible way that could go wrong.
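
        A toy illustration of that failure mode (my own sketch, assuming each generation learns only from the previous generation’s output): repeatedly fit a Gaussian to samples drawn from the previous fit, and the spread collapses. Small per-generation samples just make the drift show up faster.

        ```python
        import random
        import statistics

        mu, sigma = 0.0, 1.0  # generation 0: the “real” data distribution
        for gen in range(1, 41):
            synthetic = [random.gauss(mu, sigma) for _ in range(5)]
            mu = statistics.mean(synthetic)      # each generation trains only
            sigma = statistics.stdev(synthetic)  # on the previous one’s output
            if gen % 10 == 0:
                print(f"gen {gen}: mean={mu:+.3f} stdev={sigma:.3f}")
        # stdev trends toward zero: rare cases vanish first, and each
        # generation amplifies the previous one’s sampling quirks.
        ```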