• cestvrai@lemm.ee · 28 points · 6 months ago

    Wow, wasn’t expecting such a feel-good AI story.

    I wonder if I could fuck with my ISP’s chatbot 🤔

  • intensely_human@lemm.ee · 1 point · 6 months ago

    It’s a good precedent. Nip this shit in the bud immediately. AI agents you allow to speak on behalf of your company are agents of the company.

    So if you want to put an AI up front representing your company, you need to be damn sure it knows how to walk the line.

    When there’s a person, an employee, involved, the employee can be fired to symbolically put the blame on them. But the AI isn’t a person. It can’t take the blame for you.

    This is a very nice counterbalancing force to slow the implementation of AI and to incentivize its safety/reliability engineering. Therefore, I’m in favor of this ruling. If an AI chatbot promises you a free car, the company has to get you the car.

  • Zworf@beehaw.org · 1 point · 6 months ago

    Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.

    Just no.

    If you can’t guarantee it’s accurate, then don’t offer it.

    As a customer, I don’t want to have to deal with lying chatbots and then figure out whether what they told me is true or not.