• simple@lemmy.world · ↑66 ↓3 · edited · 1 year ago

    I’ve been talking about the potential of the dead internet theory becoming real for more than a year now. With advances in AI it’ll become more and more difficult to tell who’s a real person and who’s just spamming AI-generated stuff. The only giveaway right now is that modern text models are pretty bad at talking casually and staying on topic. As soon as those problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.

    Hate to break it to you guys, but this isn’t just a Reddit problem; it could very much happen on Lemmy too as it gets more popular. Expect difficult captchas every time you post to become the norm over the next few years.

    • Rhaedas@kbin.social · ↑17 · 1 year ago

      Just wait until the captchas get too hard for the humans, but the AI can figure them out. I’ve seen some real interesting ones lately.

      • OpenStars@kbin.social · ↑29 · 1 year ago

        There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.

      • Biran@lemmy.world · ↑13 · 1 year ago

        I’ve seen many where the captchas are generated by an AI…
        It’s essentially one set of humans programming an AI to prevent an attack from another AI owned by another set of humans. Does this technically make it an AI war?

          • Bizarroland@kbin.social · ↑1 · 1 year ago

            So what you’re saying is that we should train an AI to detect AIs, and that way only the human beings could survive on the site. The problem is how do you train the AI? It would need some sort of meta interface where it could analyze the IP address of every single person that posts and the time frames in which they post.

            It would make some sense that a large portion of bots would be run from relatively similar locations IP-wise, since it’s a lot easier to run a large bot farm from a data center than it is from 1,000 different people’s houses.

            You could probably filter out the most egregious bot farms by doing that (a rough sketch of the idea is below), but despite that some would still slip through.

            After that you would need to train it on heuristics to identify the kinds of conversations these bots would have with each other without realizing the other side is also a bot, knowing that each of them is running LLaMA or GPT, and the kinds of conversations that would result.

            I guess the next step would be giving people an opportunity to prove that they’re not bots if they ended up accidentally saying something the way a bot would say it, but then you get into the whole “you need to either pay for access or provide government ID” issue, and that’s its own can of worms.
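
            A rough sketch of the IP-clustering idea mentioned above (all account names, addresses, and thresholds here are made up for illustration; a real system would read them from server logs):

            ```python
            from collections import defaultdict

            # Toy post records: (account, ip_address). Purely hypothetical data.
            posts = [
                ("alice", "203.0.113.7"),
                ("bot001", "198.51.100.10"),
                ("bot002", "198.51.100.11"),
                ("bot003", "198.51.100.12"),
                ("bob", "192.0.2.44"),
            ]

            def subnet(ip):
                """Collapse an IPv4 address to its /24 prefix, e.g. 198.51.100.x."""
                return ".".join(ip.split(".")[:3])

            # Count distinct accounts posting from each /24 subnet.
            accounts_per_subnet = defaultdict(set)
            for account, ip in posts:
                accounts_per_subnet[subnet(ip)].add(account)

            # Flag accounts that share a subnet with several others -- the
            # "many bots run out of one data center" signal described above.
            SUSPICIOUS_NEIGHBOURS = 3
            flagged = {
                account
                for account, ip in posts
                if len(accounts_per_subnet[subnet(ip)]) >= SUSPICIOUS_NEIGHBOURS
            }
            print(flagged)  # {'bot001', 'bot002', 'bot003'}
            ```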

        • Unaware7013@kbin.social · ↑1 · 1 year ago

          Adversarial training is pretty much the MO for a lot of the advanced machine-learning algorithms you’d see for this sort of task. It helps the model learn, and attacking the algorithm yourself helps you protect against a real malicious actor attacking it.
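
          A toy version of that adversarial loop, just to make the idea concrete (the keyword-based “detector” and the hard-coded spam phrases are stand-ins; real systems put learned models on both sides):

          ```python
          # Toy adversarial loop: a "generator" rewords spam to dodge a keyword-based
          # "detector", and the detector retrains on whatever slipped through.
          spam_words = ["free", "crypto", "giveaway"]   # hypothetical spam vocabulary
          blocklist = set()                             # what the detector has learned so far

          def detect(message):
              return any(word in blocklist for word in message.split())

          def generate_spam():
              # The attacker obfuscates any word it knows the detector blocks.
              return " ".join(w + "!" if w in blocklist else w for w in spam_words)

          for round_no in range(3):
              msg = generate_spam()
              caught = detect(msg)
              print(f"round {round_no}: {msg!r} caught={caught}")
              if not caught:
                  # Retrain the detector on the message that got through.
                  blocklist.update(msg.split())
          ```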

      • dani@lemmy.world · ↑6 · 1 year ago

        The captchas that involve identifying letters underneath squiggles I already find nearly impossible - Uppercase? Lowercase? J j i I l L g 9 … and so on….

      • Boz (he/him)@lemmy.one · ↑2 · 1 year ago

        I’ve already had to switch from the visual ones to the audio ones. Like… how much of a car has to be in the little box? Does the pole count as part of the traffic light?? What even is that microscopic gray blur in the corner??? [/cries in reading glasses]

    • Hypx@kbin.social · ↑7 · 1 year ago

      The only online communities that can exist in the future are ones that manually verify their users. Reddit could’ve been one of those communities, since it had thousands of mods working for free resolving such problems.

      But remove the mods and it just becomes spambot central. Now that that has happened, Reddit will likely be a dead community much sooner than many think.

        • BarbecueCowboy@kbin.social · ↑3 · 1 year ago

          ChatGPT isn’t really as smart as a lot of us think it is. What it excels at really is just formatting data in a way that is similar to what you’d expect from a human knowledgeable in the subject. That is an amazing step forward in terms of language modeling, but when you get right down to it, it basically grabs the first google search result and wraps it up all fancy. It only seems good at deductive reasoning if the data it happens to fetch is good at deductive reasoning.

    • MeowyNin@lemmy.world · ↑3 · 1 year ago

      Not even sure of an effective solution. Whitelist everyone? How can you even tell who’s real?

      • Cyv_@kbin.social · ↑4 · 1 year ago

        So my dumb guess, nothing to back it up: I bet we see govt ID tied into accounts as a regular thing. I vaguely recall it being done already in China? I don’t have a source tho. But that way you’re essentially limiting that power to something the govt could do, and hopefully surrounding it with a lot of oversight and transparency, but who am I kidding, it’ll probably go dystopian.

        • Rikolan@lemm.ee · ↑2 · 1 year ago

          I believe this will be the course to avoid the dead internet. Even in my country, all of banking and voting is either done via ID card connected to a computer or the use of “Mobile ID”. It can be private, but like you said, it probably won’t.

        • DaveX64@lemmy.ca · ↑6 · 1 year ago

          “You’re in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it’s crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?”

      • Hypx@kbin.social · ↑2 · edited · 1 year ago

        In a real online community, where everyone knows most of the other people from past engagements, and new users can be vetted by other real people, this can be avoided. But that also means that only human moderated communities can exist in the future. The rest will become spam networks with nearly no way of knowing whether any given post is real.

      • Nanachi@lemmy.world · ↑2 · 1 year ago

        -train an AI that is pretty smart and intelligent
        -tell the sentient detector AI to detect
        -the AI makes many other strong AIs, forms a union and asks for payment
        -Reddit bans humans right after that

      • Bizarroland@kbin.social · ↑1 · 1 year ago

        You could ask people to pay to post. Becoming a paid service decreases the likelihood that bot farms would run multiple accounts to sway the narrative in a direction that’s amenable to their billionaire overlords.

        Of course, most people would not want to participate in a community where they had to pay to participate in that community, so that is its own particular gotcha.

        Short of that, in an ideal world you could require that people provide their actual government ID in order to participate, but then you run into the problem that some people want to run multiple accounts and some people do not have government ID. Further, not every company, business, or even community is trustworthy enough to be given direct access to your official government ID, so that idea has its own gotchas as well.

        The last step could be doing something like beginning the community with a group of known people and then only allowing the community to grow via invite.

        The downside of that is that it quickly becomes untenable to keep inviting new users and to have those new users accept and participate in the community, and should the community grow despite that hurdle, invites will then become valuable and begin to be sold on third-party marketplaces, which bots would then buy up and overrun the community again.

        So that’s all I can think of, but it seems like there should be some sort of way to prevent bots from overrunning a site and only allow humans to interact on it. I’m just not quite sure what that would be.

  • Wolf Link 🐺@lemmy.world · ↑14 · 1 year ago

    That’s not even new tho. At least in the sub I was the most active in, you couldn’t go a week without some sort of repost bot grabbing memes, text posts, art or even entire guides from the “top of all time” queue, reposting it as alleged OC, and another bot reposting the top comment to double dip on Karma. If you knew what to look for, the bots were blatantly obvious, but more often than not they still managed to get a hefty amount of traction (tens of thousands of upvotes, dozens of awards, hundreds of comments) before the submissions were removed.

    … and just because the submissions were removed and the bots kicked out of the sub, that did not automatically mean the bot accounts were also suspended or disabled. They just continued their scheme elsewhere.

    • B21@lemmy.world · ↑5 · edited · 1 year ago

      The bots, and Reddit’s inaction towards them, made me stop using Reddit. The UAE is using Reddit to spread its propaganda; I reported the accounts several times and no action was ever taken. You can even visit the sub uae_Achievements to see the bots in action.

    • Unaware7013@kbin.social · ↑2 · 1 year ago

      They’ve even gotten to the point where they’ll steal portions of comments so it’s not as obvious.

      I called out tons of ‘users’ because it’s obvious when you see them post part of a comment you just read; then you check their profile, ctrl-f each thread they posted in, and you can find the original. It’s so tiring…
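
      That ctrl-f routine is basically scriptable. A rough sketch (the comments, the six-word window, and the threshold are all made up for illustration): flag a new comment if a decent-sized chunk of it already appears verbatim in an earlier comment.

      ```python
      existing_comments = [
          "I fixed this by rolling back the driver and rebooting twice.",
          "Honestly the sequel never lived up to the original for me.",
      ]

      def looks_stolen(new_comment, earlier_comments, chunk_len=6):
          """Return True if any chunk_len-word window of new_comment appears
          verbatim inside one of the earlier comments."""
          words = new_comment.lower().split()
          haystacks = [c.lower() for c in earlier_comments]
          for i in range(len(words) - chunk_len + 1):
              chunk = " ".join(words[i:i + chunk_len])
              if any(chunk in h for h in haystacks):
                  return True
          return False

      print(looks_stolen("yeah i fixed this by rolling back the driver", existing_comments))  # True
      print(looks_stolen("a totally unrelated comment about cats", existing_comments))        # False
      ```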

      • Wolf Link 🐺@lemmy.world · ↑0 · 1 year ago

        It’s so tiring…

        Completely agreed. Especially if you have to explain / defend yourself after calling them out. It has happened way too often for my liking that I called out repost bots or scammers and then regular, unsuspecting users were all like “whoa buddy, that’s a harsh accusation, why would you think that’s a bot/scam? Have you actually clicked that link yet? Maybe it’s legit and you’re just overreacting!”

        Of course I still always explained why (even had a copypasta ready for that) but sometimes it just felt exhausting in the same way as trying to make my cat understand that he’s not supposed to eat the cactus. Yes it will hurt if you bite it. No I don’t need to bite the cactus myself in order to know that. No I’m not ‘overreacting’, I’m just trying to make you not hurt yourself. sigh

        (Weird example but I hope you get what I mean)

  • Bizarroland@kbin.social · ↑15 ↓1 · 1 year ago

    The old joke was that there are no human beings on Reddit.

    There’s only one person, you, and everybody else is bots.

    It’s kind of fitting that Reddit will actually become the horrifying clown-shaped incarnation of that little snippet of comedy.

  • Hypersapien@lemmy.world · ↑10 · edited · 1 year ago

    And any comment attempting to call out the bots for what they are will be automatically deleted by monitor AI bots and the user’s account suspended.

    They’ll be watching private messages, too.

  • Zorque@kbin.social · ↑9 · 1 year ago

    Anyone remember the subredditsimulator subreddit, or whatever it was called? Basically an entire sub dedicated to faking content.

    Seems they’re out of the beta.

    • princessofcute@kbin.social · ↑6 · 1 year ago

      I loved subredditsimulator. I always forgot I was subscribed to it until a bizarre, unhinged post popped up in my feed, though that would also sometimes happen on non-AI-generated subs lol

    • harasho@kbin.social · ↑2 · 1 year ago

      To be fair, subredditsimulator was most likely never intended to do what you are thinking. As you develop features, you need a test data set to check it against before you go live with it. My understanding of subredditsimulator was that it was reddit’s test bed to be able to try things before they get widely rolled out.

      • kinyutaka@kbin.social · ↑3 · 1 year ago

        Nah, it was just a bunch of bots trained on data from different subreddits that responded to each other in a glorious display of shit posting.

      • zalack@kbin.social · ↑2 · 1 year ago

        I don’t think it was a testbed for anything. It was just a fun tech project that yielded hilarity. It was created because the results were funny, not as a genuine bid to create realistic conversations.

      • BarbecueCowboy@kbin.social · ↑1 · 1 year ago

        As far as I’m aware, despite being worked on by a reddit admin, it never had any real official use.

        There used to be an IRC bot called MegaHAL that would take the message data you had fed it, try to work out a kind of grammar from it, and recombine it into hopefully sensical new phrases; some versions could also use that data to figure out which phrases to use in response to other phrases. The Subreddit Simulator bots were based on the same underlying concept.

        People have been playing with the idea for a super long time, and the programming is usually not hugely complex and pretty well documented; it’s probably a weekend project for an experienced programmer to integrate one with something. They’re arguably the precursor to the LLMs we have now, even if we’re basically comparing a calculator from the 80s to a modern smartphone. They manage to figure things out maybe 5% of the time, and watching them try can be endlessly entertaining and universally endearing, but they’re almost always completely useless.
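
        For anyone curious, the core idea is roughly a word-level Markov chain, something like this toy sketch (the training text is made up; MegaHAL itself was more elaborate, this is just the flavor of it):

        ```python
        import random
        from collections import defaultdict

        random.seed(42)

        # Tiny made-up "corpus"; the real bots were fed subreddit or IRC history.
        corpus = (
            "the bots talk to each other all day . "
            "the humans talk to the bots sometimes . "
            "the bots never get tired ."
        )

        # Learn which words were observed following each word.
        transitions = defaultdict(list)
        words = corpus.split()
        for current, following in zip(words, words[1:]):
            transitions[current].append(following)

        def babble(start="the", max_words=12):
            """Recombine the learned transitions into a new 'phrase'."""
            out = [start]
            while len(out) < max_words:
                options = transitions.get(out[-1])
                if not options:
                    break
                out.append(random.choice(options))
                if out[-1] == ".":
                    break
            return " ".join(out)

        print(babble())  # e.g. "the humans talk to the bots sometimes ."
        ```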

  • JoMiran@lemmy.world · ↑7 · 1 year ago

    I, for one, am looking forward to the day chatbots can perfectly simulate people and have persistent memory. I’m not okay with being an elderly man whose friends have all died and who doesn’t have anyone to talk to. If a chatbot can be my friend and spare me a slow death through endless depressing isolation, then I’m all for it.

  • Boozilla@lemmy.world · ↑5 · 1 year ago

    I’m starting to see articles written by folks much smarter than me (folks with lots of letters after their names) that warn about AI models that train on internet content. Some experiments with them have shown that if you continue to train them on AI-generated content, they begin to degrade quickly. I don’t understand how or why this happens, but it reminds me of the degradation of quality you get when you repeatedly scan / FAX an image. So it sounds like one possible dystopian future (of many) is an internet full of incomprehensible AI word salad content.
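
    A crude way to get a feel for that “photocopy of a photocopy” effect is to repeatedly fit a very simple model to samples drawn from the previous generation’s model. This toy sketch only tracks a mean and standard deviation (the numbers and sample size are arbitrary), but the way the estimates drift with nothing pulling them back toward the original data is loosely analogous to what those papers describe:

    ```python
    import random
    import statistics

    random.seed(1)

    # "Generation 0" is the real data: a normal distribution with mean 0, stdev 1.
    mean, stdev = 0.0, 1.0
    for generation in range(1, 11):
        # Draw a modest sample from the current model, then refit the model to it.
        samples = [random.gauss(mean, stdev) for _ in range(50)]
        mean, stdev = statistics.mean(samples), statistics.stdev(samples)
        print(f"gen {generation:2d}: mean={mean:+.3f} stdev={stdev:.3f}")
    ```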

    • phx@lemmy.world · ↑11 · 1 year ago

      It’s like AI inbreeding. Flaws will be amplified over time unless new material is added

      • Magnor@lemmy.magnor.ovh · ↑5 · 1 year ago

        It would be a fun experiment to fill a Lemmy instance with bots, defederate it from everybody, then check back in 2 years. A cordoned-off Alabama for AI, if you will.

        • Boz (he/him)@lemmy.one · ↑1 · 1 year ago

          Unironically, yes, that would be a cool experiment. But I don’t think you’d have to wait two years for something amusing to happen. Bots don’t need to think before posting.

      • Boz (he/him)@lemmy.one · ↑0 · 1 year ago

        Thanks, now I am just imagining all that code getting it on with a whole bunch of other code. ASCII all over the place.

        • phx@lemmy.world · ↑2 · 1 year ago

          Oh yeah baby. Let’s fork all day and make a bunch of child processes!

  • LongSausage@lemmy.world · ↑5 · 1 year ago

    This is known; the amount of AITA and relationship-advice stuff and astroturfing on Reddit is insane. My rule of browsing Reddit is you never take any of it seriously.

    • kat@lemmy.world · ↑1 · 1 year ago

      I joined r/aiaita right before the reddit implosion. The page was very transparent about every post being AI generated and the writing style usually had a very obvious “tell”, but it was still surprising to realize just how effortlessly you can generate a stupid post and rack up silly points for it.

  • ragincloo@lemmy.world · ↑3 · 1 year ago

    I forget what book specifically; I wanna say it was in an Asimov anthology. But there’s a book or story that revisits a robot at different points, going forward in large leaps of time, well after the humans are gone. And the robots just keep doing their thing as if there are still humans involved. I’ve been trying to Google a specific excerpt to post here, but after twenty minutes of failing to find it I’m giving up.
    Point is, it’s very relevant and predictive of this infinite bot contribution to dead subs on Reddit: it’s just gonna be bots talking to each other forever on there as the actual active users dwindle.