• mustbe3to20signs@feddit.org · 3 months ago

    AI models can outmatch most oncologists and radiologists at recognizing early tumor stages in MRI and CT scans.
    Further developing this strength could lead to earlier diagnosis with less invasive methods, not only saving countless lives and prolonging the individual’s remaining quality of life, but also saving a shit ton of money.

    • T156@lemmy.world · 3 months ago

      That is a different kind of machine learning model, though.

      You can’t just plug your pathology images into their multimodal generative models and expect them to pop out something usable.

      And those image recognition models aren’t something OpenAI is currently working on, iirc.

      • mustbe3to20signs@feddit.org · 3 months ago

        I’m fully aware that those are different machine learning models, but instead of focusing on LLMs with only limited use for mankind, advancing image recognition models would have been much better.

        • Grandwolf319@sh.itjust.works · 3 months ago

          I agree, but I’d also like to point out that the AI craze started with LLMs, and those ML models have been around since before OpenAI.

          So if OpenAI had never released ChatGPT, AI wouldn’t have become synonymous with crypto in terms of false promises.

    • msage@programming.dev · 3 months ago

      Wasn’t it shown that an AI was getting amazing results because it had noticed the cancer screens had doctors’ signatures at the bottom? Or did they do another run with the signatures hidden?

      • mustbe3to20signs@feddit.org · edited · 3 months ago

        More than one system has been shown to “cheat” via biased training material. One model learned to tell ducks and chickens apart because it was trained on pictures of ducks in water and chickens on sandy ground, if I remember correctly.
        Since multiple medical image recognition systems are in development, I can’t imagine they were all trained this badly on unsuitable material.
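
        Here’s a minimal toy sketch of that kind of shortcut learning. Everything in it is synthetic and made up for illustration: feature 0 stands in for the actual bird, feature 1 for the water-vs-sand background.

        ```python
        # Toy demo of "shortcut learning": when a spurious feature (the
        # background) correlates with the label during training, the model
        # learns the shortcut instead of the real signal.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000

        def make_data(correlated):
            y = rng.integers(0, 2, n)
            # Feature 0: the "real" signal (the bird itself), noisy and
            # only weakly informative.
            real = y + rng.normal(0, 2.0, n)
            if correlated:
                # Feature 1: the background, almost perfectly matching the
                # label (ducks always on water, chickens always on sand).
                background = y + rng.normal(0, 0.1, n)
            else:
                # Shortcut removed: background no longer tracks the label.
                background = rng.normal(0, 1.0, n)
            return np.column_stack([real, background]), y

        X_train, y_train = make_data(correlated=True)
        model = LogisticRegression().fit(X_train, y_train)

        # Near-perfect accuracy while the background gives the answer away...
        X_biased, y_biased = make_data(correlated=True)
        print("accuracy with shortcut:", model.score(X_biased, y_biased))

        # ...and close to chance once the spurious correlation is broken.
        X_clean, y_clean = make_data(correlated=False)
        print("accuracy without shortcut:", model.score(X_clean, y_clean))
        ```

        The model scores almost perfectly while the background shortcut is present and drops to near chance once it’s removed, even though the “real” feature never changed.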

        • msage@programming.dev · 3 months ago

          They are not ‘faulty’; they have been fed the wrong training data.

          This is the most important aspect of any AI - it’s only as good as its training dataset. If you don’t know the dataset, you know nothing about the AI.

          That’s why every claim of ‘super efficient AI’ needs to be investigated more deeply. But that goes against the line-goes-up principle, so don’t expect it to happen a lot.