I’m in IT at a senior level and know painfully well what “AI” really is, and that it’s not the disruptor people think it will be. But I feel like I can’t say that anywhere without being judged for it, since almost every exec I know has bought into it hook, line, and sinker. Even other people I talk to about the issues and limitations look at me like I’m completely weird: “you’re in IT and you don’t embrace AI? wtf is wrong with you?”

So what do you all do? I don’t want to do anything career-limiting, but I feel like I’m screaming into the dark about where things will really go. It reminds me a lot of the move to cloud, when everyone went all in without knowing the real ramifications.

  • Canaconda@lemmy.ca

    Specialized, yes, but generative AI is an entirely different subject from cybersecurity applications. These are not general-purpose models; they’re specifically trained and tasked to carry out cyberattacks.

    1. Hardware vulnerabilities. There are millions of devices that an AI could easily cross-reference against a database of known hardware vulnerabilities. Consider the Windows 10 or Android 12 situation, or IoT devices that are no longer being updated. AI could mass-target every device with any known vulnerability (see the sketch after this list).

    2. Brute-forcing passwords and cross-referencing libraries of leaked credentials. Basically what scammers currently do, but at a million times the scale. Weak passwords under 12-16 characters may become obsolete (see the estimate after this list).

    3. Scalability. One AI agent could be the equivalent of 100 human cybersecurity professionals, while the minimum skill level needed to deploy these AI agents will be far, far below what it takes to become a cybersecurity professional.
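
    As a sketch of what the cross-referencing in point 1 could look like, here’s a minimal Python example. The CVE IDs, device records, and version cutoffs are all made up for illustration; a real pipeline would pull from live sources like internet-wide scan data and the NVD.

    ```python
    # Hypothetical sketch: match a device inventory against a table of
    # known vulnerabilities. All identifiers and records here are invented.

    KNOWN_VULNS = {
        # (product, highest affected version): description
        ("AcmeCam firmware", "2.1"): "CVE-XXXX-0001 (unauthenticated RCE)",
        ("Android", "12"): "EOL build, unpatched kernel bugs",
        ("Windows", "10"): "post-end-of-support, no security updates",
    }

    def parse_version(v: str) -> tuple[int, ...]:
        """Turn '1.9' into (1, 9) so versions compare numerically."""
        return tuple(int(x) for x in v.split("."))

    def find_targets(inventory):
        """Yield (device, vuln) pairs for devices at or below an affected version."""
        for device in inventory:
            for (product, max_ver), vuln in KNOWN_VULNS.items():
                if (device["product"] == product
                        and parse_version(device["version"]) <= parse_version(max_ver)):
                    yield device, vuln

    inventory = [
        {"ip": "203.0.113.5", "product": "AcmeCam firmware", "version": "1.9"},
        {"ip": "203.0.113.9", "product": "Android", "version": "12"},
        {"ip": "203.0.113.12", "product": "Windows", "version": "11"},  # not affected
    ]

    for device, vuln in find_targets(inventory):
        print(f"{device['ip']}: {device['product']} {device['version']} -> {vuln}")
    ```

    The matching step is trivial to automate; scale is the only hard part.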
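
    And for point 2, the 12-16 character threshold is just arithmetic. Assuming an attacker manages 10^12 guesses per second against a fast hash (an invented but plausible round number; real rates vary enormously by hash algorithm), exhausting the printable-ASCII keyspace looks like this:

    ```python
    # Back-of-the-envelope brute-force estimate. The guess rate is an assumption.
    GUESSES_PER_SECOND = 1e12  # assumed attacker throughput
    CHARSET = 95               # printable ASCII characters
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    for length in (8, 10, 12, 16):
        keyspace = CHARSET ** length
        years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
        print(f"{length} chars: ~{years:.1e} years to exhaust")
    ```

    At that rate an 8-character password falls in a couple of hours, while 12 characters holds for tens of thousands of years, which is what makes that 12-16 character line meaningful.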

    AI is better at computer stuff just like humans are better at human stuff.

    • Opisek@lemmy.world

      I agree with the cross-referencing and scalability, but can you explain how an LLM might be faster at password brute-forcing at all? Those models are not known for their speed.

      • Canaconda@lemmy.ca

        LLM might be faster at password brute-forcing at all

        AI agents can use automation tools and are not limited to being chatbots. The LLM just gives dumbasses like you and me a way to communicate with them.
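
        To make the agent/chatbot distinction concrete, here’s a hypothetical sketch in Python. The tool names are harmless stubs and choose_action() stands in for the actual model call; none of this is a real framework’s API.

        ```python
        # Hypothetical agent loop: the model only decides WHICH tool to run next;
        # ordinary code does the work at machine speed. Everything here is a stub.

        def port_scan(target: str) -> str:
            return f"scan results for {target} (stub)"

        def credential_lookup(domain: str) -> str:
            return f"leaked-credential hits for {domain} (stub)"

        TOOLS = {"port_scan": port_scan, "credential_lookup": credential_lookup}

        def choose_action(goal: str, history: list[str]) -> tuple[str, str]:
            """Stand-in for an LLM call that picks the next tool and its argument."""
            if not history:
                return "port_scan", "203.0.113.5"
            return "credential_lookup", "example.com"

        def agent_loop(goal: str, max_steps: int = 2) -> list[str]:
            history: list[str] = []
            for _ in range(max_steps):
                tool, arg = choose_action(goal, history)
                history.append(TOOLS[tool](arg))  # the tool, not the LLM, does the work
            return history

        print(agent_loop("assess target"))
        ```

        The chat interface is just the steering wheel; the tools are the engine, and they run as fast as any other script.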

        An AI agent could triage vast libraries of vulnerable targets, allocate server resources to run multiple attack types in tandem, and conduct cyber warfare on a scale that would otherwise require hundreds, possibly thousands, of human operators.

        AI could develop innovative malware that simultaneously causes harm and obfuscates its presence. It could coordinate DDoS attacks against rival cybersecurity assets, attack power stations, and so on.

        I am far from the only person concerned about this. https://gizmodo.com/get-ready-the-ai-hacks-are-coming-2000639625