“I love life on Earth… but I love capitalism more.”
You’ve linked into it, but I was just going to point at the Git book: https://git-scm.com/book/en/v2
It’s an afternoon’s reading; it does an excellent job of giving you the right mental model - and a crib sheet of commands to navigate it.
“Maybe our friend doesn’t like monads.”
I’m coming around to it.
That’s a cracking article.
My own use of JVM errors tends to follow the same kinds of patterns. I think the major fault with that model is having RuntimeException as a subclass of Exception, because RuntimeException is really intended for abandonment-style errors. (The problem is that lots of people use it instead as an exception system in order to cut down on boilerplate.)
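A minimal sketch of that fault, with invented names: because RuntimeException extends Exception, a broad handler written for recoverable failures also swallows errors that were meant to abandon the operation.

```java
// Minimal sketch of the subclassing problem: a handler intended for
// recoverable failures also catches abandonment-style errors, because
// RuntimeException sits under Exception. Names here are illustrative.
public class SubclassingPitfall {

    static void loadRecord(String id) throws Exception {
        if (id == null) {
            // Contract violation by the caller: meant as abandonment, not recovery.
            throw new IllegalArgumentException("id must not be null");
        }
        // Legitimate, anticipated failure a caller might recover from.
        throw new Exception("record store unavailable");
    }

    public static void main(String[] args) {
        try {
            loadRecord(null);
        } catch (Exception e) {
            // Written to handle "store unavailable", but the programming
            // error above lands here too, because of the class hierarchy.
            System.out.println("Recovering from: " + e);
        }
    }
}
```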
I find it eye-opening that the author prefers callsite annotation with try (although I’m not going to argue with their experience at the time). I can see this being either “no big deal” or even “a good thing” to Rust users in particular - mutability and borrowing annotations at both callsite and definition aren’t required to make the language work afaict (your IDE will instantly carp if you miss ’em out), but the increased programmer visibility is typically seen as a good thing. (Perhaps this is down to people largely reviewing PRs in a browser, I dunno.) Certainly there’s tons of good food for thought there.
Have you seen pictures of the sub? What makes you think the wiring was all hidden?
You joke, but watch this:
https://archive.org/details/take-me-to-titanic
from 29 minutes in. A last-minute adjustment before launch plugged in a thruster backwards; no protocol to check the behaviour prelaunch. They discovered it when they got to the bottom.
I’m not sure why it’s “obviously” good to move from one mechanism to two: as a user I now have to categorise every path to work out which is appropriate.
What I said was less about adding to a function signature than it was about adding to a facade - that is, a system boundary, although the implementation may be the same depending on language. People typically use exceptions pretty badly - a function signature with a baggage-train of internal exceptions that might be thrown by implementation guts is another antipattern that gives the approach a bad rep. Errors have types too (or they should have), and the typical exception constructor has a wrapper capability for good reason.
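As a hedged illustration of that facade point (the names below are invented): one boundary-level exception type, using the standard cause-wrapping constructor, instead of leaking the implementation’s internal exception list to every caller.

```java
import java.io.IOException;
import java.sql.SQLException;

// Illustrative sketch: the facade exposes one boundary-level exception
// type instead of a baggage-train of the implementation's internal
// exceptions, using the cause-wrapping constructor to keep the detail.
class ReportUnavailableException extends Exception {
    ReportUnavailableException(String message, Throwable cause) {
        super(message, cause);
    }
}

class ReportingFacade {
    public String monthlyReport(int month) throws ReportUnavailableException {
        try {
            return fetchFromWarehouse(month);
        } catch (SQLException | IOException e) {
            // Internal failure modes stay behind the boundary; callers see
            // one typed error that describes the failure at their level.
            throw new ReportUnavailableException("report for month " + month + " unavailable", e);
        }
    }

    private String fetchFromWarehouse(int month) throws SQLException, IOException {
        throw new IOException("warehouse connection dropped");
    }
}
```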
That’s fine, and for that there are sum types. My own opinion differs - it’s a question of taste. Being able to bundle the handling of exceptional situations aside from the straight-line logic (or use RAII-style cleanup) is notationally convenient.
Yes, you can do the same with monads; use the tools available to you.
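For instance, a small sketch of that notational convenience (the file name and fallback behaviour are made up): the happy path reads straight down, cleanup is RAII-style via try-with-resources, and the failure handling is bundled in one place at the end.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// The happy path reads straight down; cleanup is handled by
// try-with-resources; the exceptional cases sit together at the end.
public class StraightLine {
    static long countNonEmptyLines(Path path) {
        try (BufferedReader reader = Files.newBufferedReader(path)) {
            return reader.lines().filter(line -> !line.isBlank()).count();
        } catch (IOException e) {
            // All the "what if the file is missing or unreadable" noise lives
            // here, not interleaved with the counting logic above.
            System.err.println("Could not read " + path + ": " + e.getMessage());
            return 0;
        }
    }

    public static void main(String[] args) {
        System.out.println(countNonEmptyLines(Path.of("notes.txt")));
    }
}
```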
Checked exceptions are powerful but misunderstood. Exception types are a useful part of the facade to a module - they express to a caller how it can go wrong even if used correctly.
Runtime exceptions are typically there to express contract-breaking by callers; although as an alternative return mechanism I’ve seen them used to simplify the inner workings of some frameworks.
I think they get a bad rep because there aren’t a ton of good examples of how to use them - even the java classpath had some egregious misuse initially that helped turn people off the key ideas.
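To sketch the split from the two paragraphs above (a deliberately small, hypothetical example): the checked exception is part of the facade - “this can fail even when you call it correctly” - while the runtime exception flags a broken caller contract.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical module boundary: IOException is declared because the store
// can fail even for a correct caller; IllegalArgumentException marks a
// contract violation that should be fixed at the call site, not handled.
public class KeyStoreFacade {
    private final Path storeDir;

    public KeyStoreFacade(Path storeDir) {
        this.storeDir = storeDir;
    }

    public byte[] readKey(String name) throws IOException {
        if (name == null || name.isEmpty()) {
            // Contract-breaking by the caller: abandonment, not recovery.
            throw new IllegalArgumentException("key name must be non-empty");
        }
        // Legitimate failure mode of the facade, part of its declared contract.
        return Files.readAllBytes(storeDir.resolve(name));
    }
}
```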
The other thing to watch out for is if you’re splitting state between volumes, but I think you’ve already ruled that out.
I’d be cautious about the “kill -9” reasoning. It isn’t necessarily equivalent to yanking power.
Contents of application memory lost, yes. Contents of unflushed OS buffers, no. Your db will be fsyncing (or moral equivalent thereof) if it’s worth the name.
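To make the distinction concrete, a sketch using standard NIO calls (the file name is invented): write() only hands data to OS buffers, which survive a kill -9 of the process but not a power cut; force(true) is the fsync-equivalent that asks for it to reach stable storage.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// write() fills OS buffers, which outlive a kill -9 of this process;
// force(true) is the fsync-equivalent needed to survive a power cut.
public class DurableAppend {
    public static void append(Path log, String record) throws IOException {
        try (FileChannel channel = FileChannel.open(log,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
            channel.write(ByteBuffer.wrap((record + "\n").getBytes(StandardCharsets.UTF_8)));
            channel.force(true); // without this, only kill -9 is "safe", not power loss
        }
    }
}
```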
This is an aside; backing up from a volume snapshot is half a reasonable idea. (The other half is ensuring that you can restore from the backup, regularly, automatically, and the third half is ensuring that your automated validation can be relied on.)
That depends entirely on the ability to execute change. CTO is the role that should be driving this.
Developers aren’t the ones at fault here.
Possibly the thing that was intended to be deployed was. What got pushed out was 40kB of all zeroes. Could’ve been corrupted some way down the CI chain.
Check Crowdstrike’s blurb about the 1-10-60 rule.
You can bet that they have a KPI that says they can deliver a patch in under 15m; that can preclude testing.
Although that would have caught it, what happened here is that 40kB of NULs got signed and delivered as config, which means that unparseable config on the path from C&C to ring 0 could cause a crash and was never covered by a test.
It’s a hell of a miss, even if you’re prepared to accept the argument about testing on the critical path.
(There is an argument that in some cases you want security systems to fail closed; however that’s an extreme case - PoS systems don’t fall into that - and you want to opt into that explicitly, not due to a test omission.)
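Purely as a hypothetical sketch (nothing below is CrowdStrike’s actual code or format), the missing class of behaviour looks something like this: treat a delivered channel file as untrusted input, and on an unparseable payload fall back to the last known-good config - or fail closed if you’ve explicitly chosen that - rather than faulting.

```java
import java.util.Optional;

// Hypothetical sketch only: delivered config is treated as untrusted input,
// and an unparseable payload (e.g. all zeroes) falls back to the last
// known-good config instead of crashing the component that consumes it.
public class ChannelFileLoader {
    private Config lastKnownGood = Config.defaults();

    public Config load(byte[] payload) {
        Optional<Config> parsed = Config.tryParse(payload);
        if (parsed.isEmpty()) {
            // Reject and fall back; failing closed instead would be an
            // explicit policy choice, not an accident of a missing test.
            return lastKnownGood;
        }
        lastKnownGood = parsed.get();
        return lastKnownGood;
    }

    record Config(int version) {
        static Config defaults() { return new Config(0); }

        static Optional<Config> tryParse(byte[] payload) {
            // Toy validation: require a non-empty payload with a plausible header byte.
            if (payload == null || payload.length < 4 || payload[0] == 0) {
                return Optional.empty();
            }
            return Optional.of(new Config(payload[0]));
        }
    }
}
```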
…unless it’s running software that uses signed 32-bit timestamps, or stores data using that format.
The point about the “millennium bug” was that it was a category of problems that required (hundreds of) thousands of fixes. It didn’t matter if your OS was immune, because the OS isn’t where the value is.
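A small worked example of the 32-bit timestamp point: seconds since the Unix epoch stop fitting in a signed 32-bit field early in 2038, and a wrapped value lands back in 1901.

```java
import java.time.Instant;

// Worked example of the 2038 rollover for signed 32-bit Unix timestamps.
public class Y2038 {
    public static void main(String[] args) {
        long maxSigned32 = Integer.MAX_VALUE;               // 2_147_483_647 seconds
        System.out.println(Instant.ofEpochSecond(maxSigned32)); // 2038-01-19T03:14:07Z

        int wrapped = (int) (maxSigned32 + 1);              // what a 32-bit field stores next
        System.out.println(Instant.ofEpochSecond(wrapped)); // 1901-12-13T20:45:52Z
    }
}
```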
Incidentally, this kind of passive-aggressive pressure is the kind of thing that might be considered a legitimate security threat, post xz. If you need to vent, vent in private. If “it works for you” but the maintainer is asking legitimate questions about the implementation, consider engaging with that in good faith and evaluating their questions with an open mind.
Which mantra is that? The ellipsis doesn’t offer a clue.
I’m a mathematician too. They’re probably speaking from an intuitive grasp of utility.