The labels “incorrect” and “correct” are what the evil maid is claiming. That’s the “just trust me bro” part of your “attack.” It’s implausible in the extreme. If you’re taking photos with a camera that’s designed to publish a timestamp within seconds of the photo being taken, and days later some random person is claiming that the first photo was a “fake” but this new one they’re just posting now is the real one they just didn’t get around to posting until now, who in their right mind will believe that?
Sure, you can posit a situation where everyone is stupid and doesn’t believe what the tech is telling them. The tech doesn’t matter in a situation like that. Doesn’t mean the tech is poorly designed, it just means that everyone in your posited scenario is stupid.
It doesn’t have to be a random person claiming that the first image is fake. You could get your private keys leaked, and then the attacker waits until you’re on vacation in a remote area without wifi/cell, and then they publish an image and say “oh, I got wifi for a bit and published this.” You then get back from vacation, see the fake image, and claim that you didn’t have any wifi/cell service the whole time and couldn’t have published an image. Why should people trust you? Swap “vacation” for “war zone” if you want a more relevant example. Right now many people in Gaza or Ukraine don’t exactly have reliable ways to use the internet, and that’s exactly the sort of situation where you’d want to be able to verify images.
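To make the key-leak scenario concrete, here’s a minimal sketch. It uses HMAC as a simplified stand-in for whatever signature scheme such a camera would actually use (a real device would presumably use an asymmetric scheme, but the failure mode is the same); all names and keys here are hypothetical:

```python
import hashlib
import hmac

# Simplified stand-in for the camera's signing scheme: an HMAC over the
# photo's hash plays the role of the device signature.

def sign_photo(secret_key: bytes, photo: bytes) -> bytes:
    digest = hashlib.sha256(photo).digest()
    return hmac.new(secret_key, digest, hashlib.sha256).digest()

def verify_photo(secret_key: bytes, photo: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign_photo(secret_key, photo), sig)

camera_key = b"secret-key-stored-on-the-camera"  # hypothetical leaked key

real_photo = b"genuine image bytes"
real_sig = sign_photo(camera_key, real_photo)

# The attacker who copied the key produces a fake that verifies
# exactly as well as the genuine photo.
fake_photo = b"doctored image bytes"
fake_sig = sign_photo(camera_key, fake_photo)

assert verify_photo(camera_key, real_photo, real_sig)
assert verify_photo(camera_key, fake_photo, fake_sig)
```

Nothing in the signatures, nor in a blockchain timestamp anchored over them, tells a verifier which image came from the legitimate key holder; the timestamp only proves when a signature was published, not who held the key at that moment.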
Alternatively as I put in another comment, if it’s got the ability to publish stuff straight from the camera, it’s got the ability to be hacked and publish a fake image, straight from the camera.
Publishing things on the blockchain adds nothing here. The tech isn’t telling anyone anything useful, because the map is not the territory.
These are not implausible scenarios. They wouldn’t happen every day because they’re valuable attack vectors, but they’re 100% possible and would be saved to be used at the right time, like when it really matters, which is the worst possible time to incorrectly trust something.
It doesn’t have to be a random person claiming that the first image is fake.
Then we’re no longer talking about an “evil maid” attack. I’m not going to engage in further goalpost-shifting; you’re just adding and removing pieces of the scenario arbitrarily and demanding that this system satisfy every constraint you throw at it.
If you don’t want to use this system, fine, don’t use it. It’s not for you.
There’s no goalpost-shifting, the evil maid is still getting your keys. I’m not sure what you’re not getting here.
The point is that the system is useful for exactly nobody, because you still have to trust that someone hasn’t had their private keys compromised via an evil maid attack, and publishing timestamps on a blockchain is irrelevant to the problem.