• ameancow@lemmy.world · 1 day ago

      Even if it worked, and it probably won’t because we don’t have blockchain-style cryptographic verification for image/video data, image files would be astronomically larger than the relatively small transaction records confirmed for bitcoins. It might still be possible if clever people devoted themselves to it.
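(For what it's worth, the size objection may not be fatal: schemes like this usually sign a fixed-size hash of the media rather than the media itself, so the signed record stays tiny regardless of file size. A minimal sketch using only Python's standard library; the inputs are made-up placeholders:)

```python
# Sketch: whatever the media size, what gets signed or recorded on a
# ledger is a fixed-size digest, comparable in size to a bitcoin
# transaction hash, not the file itself.
import hashlib

def content_digest(data: bytes) -> str:
    """Return a fixed-size SHA-256 digest of arbitrary media bytes."""
    return hashlib.sha256(data).hexdigest()

small = content_digest(b"tiny image")
large = content_digest(b"\x00" * 10_000_000)  # ~10 MB of placeholder "video"
# The digests differ, but both are always 64 hex characters (32 bytes).
assert small != large
assert len(small) == len(large) == 64
```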

      The problem though is much bigger than “verifying” an image’s authenticity. 99% of people are not going to go to a website to learn how to do some new thing to verify the authenticity of an image that confirms and validates positions they already hold.

      We can usually still figure out how real material is right now, but people don’t do that now, and they’re not going to do it if you set up an extra step to check every image. Our age of intellectual politics and good-faith social discourse is over. It’s sad, it sucks, but this is the consequence of an unevolved primate creating advanced information-sharing systems. They will be exploited to influence the monkeys every damn time.

      • NewNewAugustEast@lemmy.zip · 1 day ago

        So true. Even if a video is real, a simple meme or still image posted to Facebook is all that is necessary to fool people.

      • aesthelete@lemmy.world · 1 day ago

        > The problem though is much bigger than “verifying” an image’s authenticity. 99% of people are not going to go to a website to learn how to do some new thing to verify the authenticity of an image that confirms and validates positions they already hold.

        Right, so once you have the technology to verify that a picture or video was captured by an actual camera, you show this seal of authenticity right next to the media itself in the feeds, and make it very easy to check. Then people can tell at a glance that something isn’t verified (similar to how you get all kinds of warnings that a website isn’t secured), and while that still doesn’t prevent people from thinking AI-genned shit is real, it’ll help all but the most willfully ignorant.
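(The check a feed would run before showing that badge could look something like this. This is a hypothetical sketch: a real scheme, e.g. C2PA-style content credentials, would verify a public-key signature from the camera maker; an HMAC stands in here so the example runs with only Python's standard library, and the key and image bytes are placeholders:)

```python
# Hypothetical "seal of authenticity" check, run by the feed before
# rendering media. HMAC over the image bytes stands in for a real
# camera-vendor public-key signature.
import hashlib
import hmac

CAMERA_KEY = b"demo-camera-secret"  # placeholder for a real signing key

def sign_capture(image: bytes) -> bytes:
    """What a trusted camera would attach to the file at capture time."""
    return hmac.new(CAMERA_KEY, image, hashlib.sha256).digest()

def badge(image: bytes, tag: bytes) -> str:
    """What the feed shows next to the media, at a glance."""
    ok = hmac.compare_digest(sign_capture(image), tag)
    return "verified camera capture" if ok else "unverified"

photo = b"raw sensor bytes"
tag = sign_capture(photo)
print(badge(photo, tag))            # verified camera capture
print(badge(photo + b"edit", tag))  # unverified: any alteration breaks the seal
```

Anything edited, AI-generated, or simply lacking a tag falls into the "unverified" bucket by default, which is what makes the at-a-glance warning possible.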

        I do think the authenticity problem is harder to crack than is assumed here, and I also think that GenAI companies (and their co-conspirators like X and Facebook and friends) are deliberately trying to make it hard to tell whether something is real, to push the technology or their agenda.