  • It can be both, and I’m not sure I see the distinction. It’s a coping mechanism, and that’s not actually an awful thing.

    Growing up in church, nobody was creating hypotheticals and then trying to explain them using religion. That’s just not what it was about. But I guess if you brought up babies with cancer, then yeah, the “mysterious ways” argument would have been a prime cop-out to avoid challenging faith too much.

    Most commonly, people just wanted to know how to handle the (typically less hyperbolic) challenges in their own lives. They believed they were good and faithful and didn’t understand why God would allow bad things to happen to them. Ultimately the “mysterious ways” line was just a coping mechanism that came with advice: search for the silver linings, and think about past challenges and how they resolved, as evidence of the mysterious ways. Of course, it also served to avoid challenging their faith.

    At the end of the day, religion has its very bad elements, and I won’t defend those. But it’s silly to ignore that most people are simply looking for ways to interpret life in order to find meaning, or to cope with struggles. I’m not religious myself, but if I were trying to help a friend through something difficult, I would still encourage them to look for silver linings and to reflect on past challenges. Not as evidence of some god working in mysterious ways, but to give them the perspective to realize that they have the strength to overcome challenges.



  • A classic use for them is spam filtering.

    Suppose you have a set of spam detection systems/rules that are somewhat expensive to execute, e.g. an ML model or a keyword blocklist. Spam tends to come in waves, and frequently it’s as simple as the same message being reposted dozens of times.

    Once your systems determine a piece of content is spam (or you manually flag content), it’s a good idea to insert the content into a Bloom filter. Future posts of identical content can then be flagged without executing the expensive checks, which matters especially when a surge of content is stressing your systems.
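
    To make that concrete, here’s a minimal sketch in Python. Everything in it (the BloomFilter class, the sizing, the expensive_spam_checks stub) is a hypothetical illustration, not any particular library’s API:

        import hashlib

        class BloomFilter:
            def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 5):
                self.size = size_bits
                self.num_hashes = num_hashes
                self.bits = bytearray(size_bits // 8)

            def _positions(self, item: str):
                # Derive k bit positions from one SHA-256 digest (double hashing).
                digest = hashlib.sha256(item.encode("utf-8")).digest()
                h1 = int.from_bytes(digest[:8], "big")
                h2 = int.from_bytes(digest[8:16], "big") | 1  # odd stride
                for i in range(self.num_hashes):
                    yield (h1 + i * h2) % self.size

            def add(self, item: str) -> None:
                for pos in self._positions(item):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def might_contain(self, item: str) -> bool:
                # False means "definitely never added"; True means "probably added".
                return all(self.bits[pos // 8] & (1 << (pos % 8))
                           for pos in self._positions(item))

        def expensive_spam_checks(post: str) -> bool:
            # Stand-in for the costly ML model / keyword blocklist pass.
            return "buy cheap pills" in post.lower()

        known_spam = BloomFilter()

        def is_spam(post: str) -> bool:
            if known_spam.might_contain(post):
                return True  # cheap hit: skip the expensive checks entirely
            if expensive_spam_checks(post):
                known_spam.add(post)  # remember it so repeats get flagged cheaply
                return True
            return False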

    Since it’s probabilistic, you can’t use this unless you have some sort of manual review queue or system, because false positives are possible. However, you can also run more intensive checks on flagged content to weed those false positives out.

    The false positives can also be a feature, not a bug: with a careful choice of hash functions, your Bloom filter can actually detect slightly modified content, since most of the hashes may still be the same.
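
    One rough way to get that effect, swapping in shingling rather than special hash functions, is to break each post into overlapping word n-grams, insert every shingle, and flag a new post when most of its shingles hit. This builds on the BloomFilter sketch above; the n=5 and 0.8 values are made up for illustration:

        def shingles(text: str, n: int = 5):
            # Overlapping word n-grams ("shingles") of the post.
            words = text.lower().split()
            return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

        def add_fuzzy(bf: BloomFilter, text: str) -> None:
            for s in shingles(text):
                bf.add(s)

        def fuzzy_hit_fraction(bf: BloomFilter, text: str) -> float:
            sh = shingles(text)
            return sum(bf.might_contain(s) for s in sh) / len(sh)

        # Flag if, say, 80% of a new post's shingles were already seen in spam:
        # fuzzy_hit_fraction(known_spam, new_post) > 0.8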

    I’ve worked at companies that use this strategy, so it’s very much a real-world approach.



    They probably know what it is, but it’s a bad point if they’re trying to paint DAGs as esoteric CS stuff for the average programmer. I needed a topological sort for work two weeks ago, and any time you use a build system, even one as simple as Make, you’re using DAGs. Acting like it’s a tough concept makes me wonder why I should accept the rest of the argument.
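
    For anyone who hasn’t bumped into it, the topological sort really is the whole trick. Here’s a minimal sketch of Kahn’s algorithm in Python on a toy Make-style graph (the target names are invented for illustration):

        from collections import deque

        def topo_sort(deps):
            # deps maps each target to the targets it depends on, like Make rules:
            # a node can be "built" only after all of its dependencies.
            indegree = {node: 0 for node in deps}
            dependents = {node: [] for node in deps}
            for node, prereqs in deps.items():
                for p in prereqs:
                    indegree[node] += 1
                    dependents[p].append(node)
            ready = deque(n for n, d in indegree.items() if d == 0)
            order = []
            while ready:
                n = ready.popleft()
                order.append(n)
                for m in dependents[n]:
                    indegree[m] -= 1
                    if indegree[m] == 0:
                        ready.append(m)
            if len(order) != len(deps):
                raise ValueError("cycle detected: not a DAG")
            return order

        print(topo_sort({
            "app":    ["main.o", "util.o"],
            "main.o": ["main.c"],
            "util.o": ["util.c"],
            "main.c": [],
            "util.c": [],
        }))
        # -> ['main.c', 'util.c', 'main.o', 'util.o', 'app']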

    Can’t say I have a strong feeling about Gradle though 🤷‍♀️


  • It’s a cathartic vent, but not a particularly productive one.

    Yes, there are stupid lines of time.sleep(1) written in some tests and codebases. But there are also test setUp() methods that do expensive work per test, so the total runtime grows too fast with the number of tests. There are situations where a smarter algorithm existed and the original author said “fuck it” and wrote the O(N^2) one. There are container-oriented workflows that take a long time to spin up just to run the same tests. There are stupid DNS resolution timeouts because you didn’t realize a third-party library would try to connect to an API that isn’t reachable in your test environment… And the list goes on…
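
    For the setUp() case in particular, the usual fix is hoisting the expensive work into setUpClass(), which is only safe if the tests treat the fixture as read-only. A minimal unittest sketch, where create_test_database is a hypothetical stand-in for the expensive part:

        import time
        import unittest

        def create_test_database():
            # Hypothetical expensive fixture (schema load, container boot, ...).
            time.sleep(1)
            return {"users": []}

        class SlowTests(unittest.TestCase):
            def setUp(self):
                # Runs before every test, so total cost grows with the test count.
                self.db = create_test_database()

            def test_has_users_table(self):
                self.assertIn("users", self.db)

        class FasterTests(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                # Runs once per class; safe only if tests don't mutate the fixture.
                cls.db = create_test_database()

            def test_has_users_table(self):
                self.assertIn("users", self.db)

        if __name__ == "__main__":
            unittest.main()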

    I feel like it’s the “easy way out” to create some boogeyman: the stupid engineer who writes slow, shitty code. It’s far more likely that these issues come about because a capable person wrote software under one set of assumptions, the assumptions changed, and the code became slow once they were violated. There’s no bad guy here, just people doing their best.