So are LLMs reliable for research like that?
No. Of course not. They’re not reliable for anything. They don’t have any kind of database of facts and don’t know or attempt to know anything at all.
They’re just a more advanced version of your phone’s predictive text. All they do is guess which words are most likely to come next, in what order, in response to the prompt. That’s it. There is no logic of any kind dictating what an LLM outputs.
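To make the “predictive text” comparison concrete, here’s a toy sketch in Python (my own illustration, not code from any actual LLM): a bigram model that picks each next word purely from how often words follow each other in its training text. A real LLM swaps the word counts for billions of learned parameters, but the generation loop is the same idea: pick a likely next word, append it, repeat.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" -- the only thing the model will ever see.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which word follows which. These counts ARE the whole model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=10):
    """Emit up to n words by repeatedly sampling a likely next word.

    Note there is no fact lookup and no reasoning anywhere in here,
    just "which word tends to come next" -- the same trick, scaled
    down from billions of parameters to a Counter.
    """
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the dog chased the cat sat on the mat ."
```

The output is fluent-looking word soup: grammatical-ish, confident, and completely indifferent to whether any of it is true. That’s the failure mode, just at a much smaller scale.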
Uh, no. Can’t you read? These are not people. They are Russian cybercriminals.