• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 19th, 2023





  • andallthat@lemmy.world to Technology@lemmy.world: *Permanently Deleted*
    2 months ago

    I’m not sure we, as a society, are ready to trust ML models to do things that might affect lives. This is true for self-driving cars and I expect it to be even more true for medicine. In particular, we can’t accept ML failures, even when they get to a point where they are statistically less likely than human errors.

    I don’t know if this is currently true or not, so please don’t shoot me for this specific example, but IF we had reliable stats showing that, everything else being equal, self-driving cars cause fewer accidents than humans, a machine error would still feel weird and alien and be harder for us to justify than a human one.

    “He was drinking too much because his partner left him”, “she was suffering from a health condition and had an episode while driving”… we have the illusion that we understand humans and (to an extent) that this understanding helps us predict who we can trust not to drive us to our death or not to misdiagnose some STI and have our genitals wither. But machines? Even if they were 20% more reliable than humans, how would we know which ones we can trust?








  • About 20 new cases of gender violence arrive every day, each requiring investigation. Providing police protection for every victim would be impossible given staff sizes and budgets.

    I think machine learning is not the key part; the quote above is. All these 20 people a day come to the police for protection. A very small minority of them might be just paranoid, but I’m sure that most of them have already had some bad shit done to them by their partner and (in an ideal world) would all deserve some protection. The algorithm’s “success” is defined in the article as reducing the probability of repeat attacks, especially the ones eventually leading to death.

    The police are trying to focus on the ones deemed most at risk. If a well-trained algorithm can assess that risk better than the judgement of a possibly overworked or inexperienced human handling the complaint? I’ll take that. But people are going to die anyway. Just, hopefully, fewer of them, and I don’t think it’s fair to say it’s the machine’s fault when they do.




  • I agree with you, and that’s what I choose to think when I feel like the “best” version of me.

    But there are moments (or a part of me) with a far more violent disposition, which feels differently about people who do terrible things.

    I’m a very calm person and not at all violent, so please don’t report me to the police on the basis of these posts… That violent part of me is small and weak, but I think it’s important to acknowledge it, because it’s also the part that makes me recognize that a rapist or a murderer is a person like me, and that it could be me, given the wrong set of circumstances, life choices and frame of mind.




  • Well, they probably didn’t do it very scientifically, but if they could think of it and the tools existed, someone in history is likely to have tried it as a method for killing people.

    Impaled people, for instance, could allegedly take days to die. Being slowly eaten by ants or rats sounds pretty painful too.

    There’s one called “life” that is pretty cruel too. It might take anywhere from seconds to more than a hundred years to eventually kill you, and some people get to experience a lot of pain along the way.