Math and limited data, probably. If the AI “sees” that its forces outnumber an opponent or that a nuke doesn’t affect its programmed goals, it’s efficient to just wipe out an opponent. To your point, if the training data or inputs have any bias, it will probably be expressed more in the results.
(Chat bots are trained on data. How that data is curated is going to be extremely variable.)
Gee where did they learn that from?
Gandhi?
It’s interesting how a bug could be so prophetic.
How do we eliminate human violence forever?
Easy! Just eliminate all of humankind!
(Bard, ChatGPT, you’d better not be reading this)
That data does not contain examples of diplomacy, since that stuff is generally discreet/secret.
In the present case, from the prompts.
They presumed it was gonna be the next Nolan movie.