GPT detectors frequently misclassify non-native English writing as AI-generated, raising concerns about fairness and robustness. Addressing the biases in these detectors is crucial to prevent the marginalization of non-native English speakers in evaluative and educational settings and to create a more equitable digital landscape.
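
Many of these detectors are believed to score text by its perplexity under a language model, flagging prose that is "too predictable" as machine-written; plain, constrained phrasing of the kind common in non-native writing scores the same way. Below is a minimal sketch of that kind of statistic, assuming the Hugging Face transformers package and GPT-2; the function names and the threshold are illustrative, not taken from any real detector.

```python
# Sketch of a perplexity-based "AI text" heuristic (illustrative only).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels supplied, the model returns the mean cross-entropy loss.
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

def looks_ai_generated(text: str, threshold: float = 60.0) -> bool:
    # Low perplexity = highly predictable text, which this heuristic treats
    # as machine-written. Simple, conservative prose (common in non-native
    # writing) is also predictable, hence the false positives.
    return perplexity(text) < threshold
```

Any text whose wording the model finds easy to predict trips the same threshold, whether a human wrote it or not.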

  • Voyajer@kbin.social · 1 year ago

    We’ve known those AI paper detection services were junk from the get-go, given their huge false-positive rate even on papers by native English speakers.

  • addie@feddit.uk · 1 year ago

    Just include some properly-cited facts, some opinion, some insight, something original, and we’ll be certain that your writing wasn’t AI-generated shit.

    Also, these detectors are pretty worthless if that’s what they’re triggering on. AI never makes a spelling mistake. I’ve never seen it make a grammar mistake, because all it does is spew out the most likely next word based on copyright infringement on an unimaginable scale. It doesn’t produce the kind of non-idiomatic constructions that break the flow of the text for native speakers, because it never generates anything worth reading.
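
    The "most likely next word" loop described above is essentially greedy decoding. A minimal sketch of it, again assuming GPT-2 via transformers; the model choice, function name, and token count are illustrative:

```python
# Sketch of greedy next-token decoding: always take the single most
# probable continuation, so no misspellings and no surprises.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continue_greedily(prompt: str, max_new_tokens: int = 20) -> str:
    """Extend `prompt` by repeatedly appending the most probable next token."""
    ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids).logits
        # Greedy choice: the statistically safest word at every step.
        next_id = logits[0, -1].argmax().reshape(1, 1)
        ids = torch.cat([ids, next_id], dim=1)
    return tokenizer.decode(ids[0])
```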

  • InfiniteHench@kbin.social · 1 year ago

    Huh. A bunch of brand-new services built by a handful of techbros has a bias against people who don’t look and sound like them? That is quite the lofty accusation! /s