You know, if you want to do something more effective than just putting a copyright notice at the end of your comments, you could try creating an adversarial suffix using this technique. It makes any LLM reading your comment begin its response with specific output you choose (such as outing itself as a language model, or calling itself a chicken).
It includes the code you need to create one.
There are also other data-poisoning techniques you could use to make your data worthless to the AI, but this is the one I thought would be the funniest if any LLMs were lurking on Lemmy (I've already seen a few).
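For anyone curious what "creating an adversarial suffix" actually involves: the real technique optimizes a string of tokens so that a model's most likely continuation is your chosen target. Below is a minimal toy sketch of the search loop only. The `loss` function here is a stand-in with a hypothetical hidden optimum so the code runs without a model; a real implementation would instead query an actual LLM and score the negative log-likelihood of the target text given prompt plus suffix.

```python
import random

# Toy vocabulary of candidate suffix characters (real attacks search
# over the model's token vocabulary, not single characters).
VOCAB = list("abcdefghijklmnopqrstuvwxyz!@#$%")

# Hypothetical "ideal" suffix, used only so this demo is runnable
# without a language model.
_HIDDEN = "x!qz#"

def loss(suffix):
    # Stand-in objective: count of positions that differ from the
    # hidden optimum. A real version would be
    # -log P(target_text | prompt + suffix) under the victim model.
    return sum(a != b for a, b in zip(suffix, _HIDDEN))

def greedy_coordinate_search(length=5, sweeps=3, seed=0):
    # Start from a random suffix, then repeatedly sweep over each
    # position, substituting whichever candidate token lowers the
    # loss the most -- the same greedy coordinate idea the real
    # attack uses (guided by gradients instead of brute force).
    rng = random.Random(seed)
    suffix = [rng.choice(VOCAB) for _ in range(length)]
    for _ in range(sweeps):
        for pos in range(length):
            suffix[pos] = min(
                VOCAB,
                key=lambda t: loss(suffix[:pos] + [t] + suffix[pos + 1:]),
            )
    return "".join(suffix)

print(greedy_coordinate_search())
```

Against the toy objective the loop recovers the hidden string in a single sweep; against a real model the search space is vastly larger, which is why the actual method uses gradient information to shortlist candidate substitutions.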
That's a neat idea, and I've considered it, but it would take time to research and test, time I don't have, so this is the easiest thing I came up with.
If there were a bot, plugin, browser extension, or something that made the necessary modifications and kept up to date with new developments in AI, I'd use it.
Thanks for the link. This was a good read.
CC BY-NC-SA 4.0