One of my former (and very long-term) freelance gigs, How Stuff Works, has replaced writers with ChatGPT-generated content and also laid off its excellent editorial staff.
It seems that going forward, when articles I wrote are updated by ChatGPT, my byline will still appear at the top, with a note at the bottom saying that AI was used. So it will look as if I wrote the article using AI.
To be clear: I did not write articles using ChatGPT.
#AI #LLM #ChatGPT
Yeah, this is why I can’t really take anyone seriously when they say it’ll take over the world. It’s certainly cool, but it’s always going to be limited in usefulness.
Some areas I can see it being really useful are:
generating believable text - scams, placeholder text, and general structure
distilling existing information - especially if it can actually cite sources, but even then I’d take it with a grain of salt
That’s about it.
generating believable text - scams, placeholder text, and general structure
LLM-generated scams are going to be such a problem. Quality isn’t even an issue there, since they specifically target people with poor awareness of these scams, and a bot that responds with plausible dialogue will make it that much easier for people to buy in.
AI tools can be very powerful, but they usually need to be tailored to a specific use case by competent people.
With LLMs it seems to be the opposite: people with no ML competence are applying them to the broadest of use cases. The output just looks so good that they are easily fooled and lack the understanding to recognize the limits.
But there is one very important use case too:
Writing text that is only read and evaluated by similar AI tools. It makes sense to write cover letters with ChatGPT because they are demanded but never read by a human on the other side of the job application. Since the weights and such behind both tools seem to be similar, writing the letter with ChatGPT helps it pass the automated screening.
Rationally that is complete nonsense, but you basically need an AI tool to jump through the hoops set up by an AI tool deployed by stupid people who need to make themselves look smart.
It isn’t going to take over; it’s being put in control by idiots.