• Th4tGuyII@fedia.io
    2 months ago

    Image manipulation has always been a thing, and there are ways to counter it…

    But we already know that a shocking number of people will simply take what they see at face value, even if it does look suspicious. The volume of AI-generated misinformation online is already too damn high without it getting new strings added to its bow.

    Governments don’t seem anywhere near keeping up with these AI developments either, so by the time the law starts accounting for all of this, the damage will already be long done.

    • RubberDuck@lemmy.world
      2 months ago

      On our vacation 2 weeks ago my wife took an awesome picture, with just one guy annoyingly in the background. She just tapped him and clicked the button… poof, gone, perfect photo.

      • yamanii@lemmy.world
        2 months ago

        Yep, this is a problem of the sheer volume of misinformation: the truth can get buried by a single person generating thousands of fake photos. It’s really easy to lie, and really time-consuming to fact-check.

        • gravitas_deficiency@sh.itjust.works
          2 months ago

          That’s precisely what I mean.

          The effort ratio between generating synthetic visual media and corroborating or disproving a given piece of visual media has literally inverted, and then grown by an order of magnitude, in the last 3-5 years. That is fucking WILD. And more than a bit scary when you really start to consider the potential malicious implications, which you can already see being exploited all over the place today.