• 2 Posts
  • 15 Comments
Joined 8 months ago
Cake day: February 19th, 2024


  • 80 steps too far down the capitalism ladder

    This is the result of capitalism - corporations (aka the rich, selfish assholes running them) will always attempt to do horrible things to earn more money, so long as they can get away with it and perhaps only pay relatively small fines. The people who did this face no jail time and no real consequences - this is what unregulated capitalism brings. Corporations should not have rights or shield the people who run them - those people need to face prison and personal consequences. (edited for spelling and missing word)


  • That leads us to John Gabriel’s Greater Internet Fuckwad Theory

    I don’t have any comments on the rest of your post, but I absolutely hate how that cartoon has been used by people as justification that they are otherwise “good” people who are simply assholes on the internet.

    The rebuttal is this: this person, in real life, chose to go on the internet and be a “total fuckwad”. It’s not that anonymity changed something about them; they were a fuckwad to begin with, and with a much lower chance of being held accountable, they are free to express it.


  • You don’t do what Google seems to have done - inject diversity artificially into prompts.

    You solve this by training the AI on actual, accurate, diverse data for the given prompt. For example, for “american woman” you definitely could find plenty of pictures of American women from all sorts of racial backgrounds, and use that to train the AI. For “german 1943 soldier” the accurate historical images are obviously far less likely to contain racially diverse people in them.

    If Google has indeed already done that and still had to artificially force racial diversity, then their training approach is flawed - unable to handle the fact that a single prompt can map to many different images rather than just the most prominent or average image in its training set (a rough sketch of the two approaches follows below).
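    A minimal, purely hypothetical sketch of the contrast being described here - the function names and data are made up and do not correspond to any real Google or image-generation API; it only illustrates “rewrite the prompt” versus “curate accurate training data per concept”.

```python
# Hypothetical illustration only: these functions are invented for this sketch
# and do not correspond to any real image-generation pipeline.

def inject_diversity(prompt: str) -> str:
    # The approach criticized above: rewrite every prompt to force diversity,
    # even when that contradicts the historical or factual context.
    return prompt + ", diverse ethnicities and genders"

def curate_training_examples(dataset: list[dict], concept: str) -> list[dict]:
    # The approach advocated above: keep the training data itself accurate and
    # varied per concept, so no prompt rewriting is needed at generation time.
    return [example for example in dataset if concept in example["tags"]]

if __name__ == "__main__":
    print(inject_diversity("german 1943 soldier"))  # distorts a historical prompt
    dataset = [
        {"src": "photo_01.jpg", "tags": ["american woman"]},
        {"src": "photo_02.jpg", "tags": ["german 1943 soldier"]},
    ]
    print(len(curate_training_examples(dataset, "american woman")))  # 1
```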


  • random9@lemmy.world to 196@lemmy.blahaj.zone · boomers
    8 months ago · edited

    Nah, I bought a house 3 years ago. I still hate how inaccessible the housing market is, how shitty conservatives are towards other people and how much they deny science. Owning property doesn’t magically make one conservative. Fuck conservatives, fuck the rich.


  • random9@lemmy.world to 196@lemmy.blahaj.zone · boomers
    8 months ago

    As I have gotten older I have become more angry and cynical. But I’m much more anti-conservative now than I was before, which in the US would make me more left-leaning - though honestly I never thought of myself that way; I just thought I was being rational.

    But being rational these days literally means being anti-conservative, because conservatives are banning books, attacking LGBTQ+ people for just wanting to be themselves, denying that global warming even exists, and, yes, letting the rich get richer by being corrupt and cutting their taxes.

    Though I also have some views that might make a very left-leaning person think I’m against them (for example, I believe that some words shouldn’t be viewed as bad when they aren’t meant as a personal attack against disabled people, like “retard” or “fat” or “obese”; and I also think people are allowed to choose their pronouns and in most cases I will respect that, but some people are just doing it for shits and giggles, not seriously considering themselves to be what they choose). It’s easy to assume that someone who disagrees with those views, as I do, must be conservative, but I am far, far from it.


  • This is an interesting topic that I remember reading about almost a decade ago - the transhuman AI-in-a-box experiment. Even a kill switch may not be enough against a transhuman AI that can literally (in theory) out-think humans. I’m a dev, though nowhere near AI dev, but from what little I know, true general-purpose AI would also be somewhat of a mystery box, similar to how actual neural network behavior is sometimes unpredictable, almost by definition. So controlling an actual full AI may be difficult enough, let alone a true transhuman AI that may develop out of AI self-improvement.

    Also, on an unrelated note, I’m pleasantly surprised to see no mention of ChatGPT or any of the image-generating algorithms - I think it’s a bit of a misnomer to call those AI; the best comparison I’ve heard is that “ChatGPT is auto-complete on steroids” (a toy sketch of that idea follows below). But I suppose that’s why we have to start using terms like general-purpose AI, instead of just AI, to describe what I’d say is true AI.
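    To make the “auto-complete on steroids” comparison concrete, here is a toy sketch of greedy next-word prediction using a bigram counter. This is purely illustrative - ChatGPT is a large transformer model, not a bigram table - but both boil down to predicting the next token from what came before.

```python
# Toy "auto-complete" sketch: a bigram model that always picks the most frequent
# next word. Illustrative only; not how ChatGPT is actually implemented.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
next_word_counts: dict[str, Counter] = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def autocomplete(start: str, length: int = 5) -> str:
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        # Greedily pick the most common continuation.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the"))  # e.g. "the cat sat on the cat"
```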