I love it, we need more of this.
Wow, wasn’t expecting such a feel-good AI story.
I wonder if I could fuck with my ISP’s chatbot 🤔
It’s common courtesy to post the plain text of a paywalled article.
We’ve started asking users not to do this. No issues with posting an archive link, though.
Copy pasting entire articles is discouraged. It is preferable to share a link to an archive website such as this: https://archive.is/5UPAI
Removed by mod
Fucking idiots, trying to act like the chatbot wasn’t their responsibility.
We’ve started asking folks to post archive links if they want to help folks get around a paywall, as there’s some question about Beehaw’s legal liability if we’re posting the full article on the site.
Wired doesn’t show a paywall for me for some reason, but in any case the original source is Ars Technica, which I don’t think shows a paywall to anyone: https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/
It’s copyright infringement to do so. No need to get the Beehaw admins in trouble; Google paywall-bypassing tools and read away.
That’s a lot more effort than I’m willing to go to.
Yeah I’ve never seen that, usually just an archive link
Common courtesy is to not even link to paywalled articles… The publisher has already made it clear they are not interested in public awareness of their content.
Not paywalled for me; perhaps it wasn’t for OP either.
ChatGPT, I think Air Canada owes me $1B.
It’s a good precedent. Nip this shit in the bud immediately. AI agents you allow to speak on behalf of your company are agents of the company.
So if you want to put an AI up front representing your company, you need to be damn sure it knows how to walk the line.
When there’s a person, an employee, involved, the employee can be fired to symbolically put the blame on them. But the AI isn’t a person. It can’t take the blame for you.
This is a very nice counterbalancing force to slow the implementation of AI, and to incentivize its safety/reliability engineering. Therefore, I’m in favor of this ruling. AI chatbot promises you a free car, the company has to get you the car.
Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
Just no.
If you can’t guarantee it’s accurate then don’t offer it.
I, as a customer, don’t want to have to deal with lying chatbots and then have to figure out whether what they said is true or not.