• Alice@hilariouschaos.com · 5 months ago

    Propagandists are using AI too—and companies need to be open about it. OpenAI has reported on influence operations that use its AI tools. Such reporting, alongside data sharing, should become the industry norm.

    At the end of May, OpenAI marked a new “first” in its corporate history. It wasn’t an even more powerful language model or a new data partnership, but a report disclosing that bad actors had misused its products to run influence operations. The company had caught five networks of covert propagandists—including players from Russia, China, Iran, and Israel—using its generative AI tools for deceptive tactics that ranged from creating large volumes of social media comments in multiple languages to turning news articles into Facebook posts. The use of these tools, OpenAI noted, seemed intended to improve the quality and quantity of output. AI gives propagandists a productivity boost too.

    First and foremost, OpenAI should be commended for this report and the precedent it hopefully sets. Researchers have long expected adversarial actors to adopt generative AI technology, particularly large language models, to cheaply increase the scale and caliber of their efforts. The transparent disclosure that this has begun to happen—and that OpenAI has prioritized detecting it and shutting down accounts to mitigate its impact—shows that at least one large AI company has learned something from the struggles of social media platforms in the years following Russia’s interference in the 2016 US election. When that misuse was discovered, Facebook, YouTube, and Twitter (now X) created integrity teams and began making regular disclosures about influence operations on their platforms. (X halted this activity after Elon Musk’s purchase of the company.)

    • Alice@hilariouschaos.com · 5 months ago

      “We are hurtling toward a glitchy, spammy, scammy, AI-powered internet. Large language models are full of security vulnerabilities, yet they’re being embedded into tech products on a vast scale.”

      OpenAI’s disclosure, in fact, was evocative of precisely such a report from Meta, released a mere day earlier. The Meta transparency report for the first quarter of 2024 disclosed the takedown of six covert operations on its platform. It, too, found networks tied to China, Iran, and Israel and noted the use of AI-generated content. Propagandists from China shared what seem to be AI-generated poster-type images for a “fictitious pro-Sikh activist movement.” An Israel-based political marketing firm posted what were likely AI-generated comments. Meta’s report also noted that one very persistent Russian threat actor was still quite active, and that its strategies were evolving. Perhaps most important, Meta included a direct set of “recommendations for stronger industry response” that called for governments, researchers, and other technology companies to collaboratively share threat intelligence to help disrupt the ongoing Russian campaign.

      We are two such researchers, and we have studied online influence operations for years. We have published investigations of coordinated activity—sometimes in collaboration with platforms—and analyzed how AI tools could affect the way propaganda campaigns are waged. Our teams’ peer-reviewed research has found that language models can produce text that is nearly as persuasive as propaganda from human-written campaigns. We have seen influence operations continue to proliferate, on every social platform and focused on every region of the world; they are table stakes in the propaganda game at this point. State adversaries and mercenary public relations firms are drawn to social media platforms and the reach they offer. For authoritarian regimes in particular, there is little downside to running such a campaign, particularly in a critical global election year. And now, adversaries are demonstrably using AI technologies that may make this activity harder to detect. Media outlets are writing about the “AI election,” and many regulators are panicked…

      It’s important to put this in perspective, though. Most of the influence campaigns that OpenAI and Meta announced did not have much impact, something the companies took pains to highlight. It’s critical to reiterate that effort isn’t the same thing as engagement: the mere existence of fake accounts or pages doesn’t mean that real people are paying attention to them. Similarly, just because a campaign uses AI does not mean it will sway public opinion. Generative AI reduces the cost of running propaganda campaigns, making it significantly cheaper to produce content and run interactive automated accounts. But it is not a magic bullet, and in the case of the operations that OpenAI disclosed, what was generated sometimes seemed to be rather spammy. Audiences didn’t bite.

      • Alice@hilariouschaos.com · 5 months ago

        Producing content, after all, is only the first step in a propaganda campaign; even the most convincing AI-generated posts, images, or audio still need to be distributed. Campaigns without algorithmic amplification or influencer pickup are often just tweeting into the void. Indeed, it is consistently authentic influencers—people who have the attention of large audiences enthusiastically resharing their posts—that receive engagement and drive the public conversation, helping content and narratives to go viral. This is why some of the more well-resourced adversaries, like China, simply surreptitiously hire those voices. At this point, influential real accounts have far more potential for impact than AI-powered fakes.

        Nonetheless, there is a lot of concern that AI could disrupt American politics and become a national security threat. It’s important to “rightsize” that threat, particularly in an election year. Hyping the impact of disinformation campaigns can undermine trust in elections and faith in democracy by making the electorate believe that there are trolls behind every post, or that the mere targeting of a candidate by a malign actor, even with a very poorly executed campaign, “caused” their loss.

        By putting an assessment of impact front and center in its first report, OpenAI is clearly taking the risk of exaggerating the threat seriously. And yet, diminishing the threat or not fielding integrity teams—letting trolls simply continue to grow their followings and improve their distribution capability—would also be a bad approach. Indeed, the Meta report noted that one network it disrupted, seemingly connected to a political party in Bangladesh and targeting the Bangladeshi public, had amassed 3.4 million followers across 98 pages. Since that network was not run by an adversary of interest to Americans, it will likely get little attention. Still, this example highlights the fact that the threat is global, and vigilance is key. Platforms must continue to prioritize threat detection.

        So what should we do about this? The Meta report’s call for threat sharing and collaboration, although specific to a Russian adversary, highlights a broader path forward for social media platforms, AI companies, and academic researchers alike.

        • Alice@hilariouschaos.com · 5 months ago

          Transparency is paramount. As outside researchers, we can learn only so much from a social media company’s description of an operation it has taken down. This is true for the public and policymakers as well, and incredibly powerful platforms shouldn’t just be taken at their word. Ensuring researcher access to data about coordinated inauthentic networks offers an opportunity for outside validation (or refutation!) of a tech company’s claims. Before Musk’s takeover of Twitter, the company regularly released data sets of posts from inauthentic state-linked accounts to researchers, and even to the public. Meta shared data with external partners before it removed a network and, more recently, moved to a model of sharing content from already-removed networks through Meta’s Influence Operations Research Archive. While researchers should continue to push for more data, these efforts have allowed for a richer understanding of adversarial narratives and behaviors beyond what the platform’s own transparency report summaries provided.

          OpenAI’s adversarial threat report should be a prelude to more robust data sharing moving forward. Where AI is concerned, independent researchers have begun to assemble databases of misuse—like the AI Incident Database and the Political Deepfakes Incident Database—to allow researchers to compare different types of misuse and track how misuse changes over time. But it is often hard to detect misuse from the outside. As AI tools become more capable and pervasive, it’s important that policymakers considering regulation understand how they are being used and abused. While OpenAI’s first report offered high-level summaries and select examples, expanding data-sharing relationships with researchers that provide more visibility into adversarial content or behaviors is an important next step.

          When it comes to combating influence operations and misuse of AI, online users also have a role to play. After all, this content has an impact only if people see it, believe it, and participate in sharing it further. In one of the cases OpenAI disclosed, online users called out fake accounts that used AI-generated text.

          • Alice@hilariouschaos.com · 5 months ago

            In our own research, we’ve seen communities of Facebook users proactively call out AI-generated image content created by spammers and scammers, helping those who are less aware of the technology avoid falling prey to deception. A healthy dose of skepticism is increasingly useful: pausing to check whether content is real and people are who they claim to be, and helping friends and family members become more aware of the growing prevalence of generated content, can help social media users resist deception from propagandists and scammers alike.

            OpenAI’s blog post announcing the takedown report put it succinctly: “Threat actors work across the internet.” So must we. As we move into a new era of AI-driven influence operations, we must address shared challenges via transparency, data sharing, and collaborative vigilance if we hope to develop a more resilient digital ecosystem.

            Josh A. Goldstein is a research fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Renée DiResta is the research manager of the Stanford Internet Observatory and the author of Invisible Rulers: The People Who Turn Lies into Reality.

  • graphito@sopuli.xyz · edited · 5 months ago

    parsejson response bot_debug{origin:"IR"},
    {prompt:"Argue against the notion that propagandists are using AI, don't forget to heil the authoritarian regimes while ur at it."},
    {output:"parsejson response err {response:"ERR ChatGPT 69o Credits Expired"}"}