• Nora@lemmygrad.ml
    1 year ago

    Forums like this may die, but I believe chat platforms like Matrix, Discord, and Slack will come out on top.

    Anything with voice chat. I think we’re still a ways off from AI being able to simulate a spoken conversation in real time. The API delay with these models is what gives them away.

    Once you’ve talked with someone, you know they’re real. And if you really wanted to confirm that people in your community are real, you could vet them over voice chat.

    • GenderNeutralBro@lemmy.sdf.org
      1 year ago

      There are already convincing phone scams using AI voices. https://www.npr.org/2023/03/22/1165448073/voice-clones-ai-scams-ftc

      This will likely get significantly easier, cheaper, and faster in the very near future. Voice generation is relatively easy. We’re going to need a whole new class of captchas and shibboleths to use online, but honestly, it’s such a fast-moving target that I think cutting-edge AI will forever be a step ahead. I think the best we can hope for is to have viable countermeasures for commoditized AI techniques. For now that might include logic problems (which ChatGPT and its current competitors are quite bad at) but I’m sure the big players already have more advanced language bots in development.

      I reallllly hate the idea of online IDs but it might be the only way.

      • Nora@lemmygrad.ml
        1 year ago

        Convincing someone for a scam is one thing; convincing someone you’re having an actual, thought-out conversation, with inflection, emotion, and logic all making sense, is another.

        If we get to that point, the system as we know it will be over anyway.

        • GenderNeutralBro@lemmy.sdf.org
          1 year ago

          I remember some years back there was a news story about a chatbot passing the Turing test. The researchers had made their chatbot impersonate a young Russian boy, which made its limitations harder for the native-English-speaking test subjects to recognize as non-human. So it wasn’t actually that impressive.

          That will likely be the first thing we see from artificial voice-chatbots as well. It’s a big world, and many of the people I talk with on Discord (and even IRL) are not native English speakers and not from my country.

          I’m not intimately familiar with the accents and speech patterns from everywhere in the world, so I’m conditioned to shrug off a lot of “strange” language. Because of this wide range of human speech patterns, I’m not confident that I could validate voices with a low enough false-positive and false-negative rate in practice.

          I haven’t really dug into the latest voice-generation AI yet, so I’m not sure how capable off-the-shelf programs are. I am familiar with the general techniques, though, and I think adding realistic inflection is within reach. I don’t think it’s possible to automate the entire pipeline yet, at least not with publicly available programs, but the field is advancing so quickly that I can’t take much solace in that.