The problem, though, is much bigger than “verifying” an image’s authenticity. 99% of people are not going to visit a website and learn some new procedure to verify the authenticity of an image that confirms positions they already hold.
Right, so once you have the technology to verify that a picture or video was captured by an actual camera, you show that seal of authenticity right next to the media itself in the feeds, making the check effortless. Then people can see at a glance that something isn’t verified (similar to how browsers warn you when a site isn’t secure), and while that still doesn’t stop people from believing AI-genned shit is real, it’ll help all but the most willfully ignorant.
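To make the idea concrete, here is a minimal sketch of camera-side signing and feed-side verification. This is a simplification with assumed names throughout: real provenance schemes (e.g. C2PA Content Credentials) embed public-key signatures in the file’s metadata so verifiers never hold a secret; HMAC stands in here only because it is stdlib-only.

```python
# Simplified sketch of a "captured by a real camera" seal.
# Assumption: real systems use asymmetric signatures (camera holds a
# private key, anyone can verify with the public key); HMAC with a
# shared key is used here purely for illustration.
import hashlib
import hmac

CAMERA_KEY = b"secret-key-burned-into-camera-hardware"  # hypothetical

def sign_capture(image_bytes: bytes) -> str:
    """Camera firmware attaches this tag at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, tag: str) -> bool:
    """Feed checks the tag before showing the authenticity badge."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

photo = b"...raw sensor bytes..."
tag = sign_capture(photo)
print(verify_capture(photo, tag))         # untouched image verifies: True
print(verify_capture(photo + b"x", tag))  # any edit breaks the seal: False
```

The point of the design is exactly what the comment describes: the expensive cryptography happens once at capture and once at upload, and all the user ever sees is a badge (or its absence) next to the media.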
I do think the authenticity problem is harder to crack than is assumed here, and I also think that GenAI companies (and their co-conspirators like X and Facebook and friends) are deliberately making it hard to tell whether something is real, to push the technology or their agenda.