  • I’m not arguing for anything in the post above, just pointing out that a broken (or badly repaired) insulin pump is genuinely more dangerous than having no insulin pump. That doesn’t have to count against the right to repair one: if you’ve got the right to repair an insulin pump and do so badly, it doesn’t mean you’re legally forced to use it afterwards, just like I’ve got the right to inject all the insulin in my fridge with an insulin pen back to back, but I’m not legally forced to do so.

    I do think the right to repair should be universal, but as I think that medical stuff should be paid for by the state, NHS-style, that would end up meaning the NHS could repair medical devices themselves, and recertify them as safe, whenever they deemed that more economical than getting the manufacturer to repair or replace them. The NHS is buying the devices and gets the right to repair them, and that saves the taxpayer money: even if they never actually repair anything, the right alone stops manufacturers price gouging for repairs and replacements, and if a manufacturer goes bust or refuses to repair something, there’re still ways to keep things working. It doesn’t mean unqualified end users can’t use their new right to repair their medical devices and risk getting it wrong, but given the option of a free repair or replacement, most people would choose the safe and certified repair over their own bodge.


  • If you’ve got a broken insulin pump, assuming you’re in a country with a functioning healthcare system, you should have been given a spare pump with the original, and probably some insulin pens, so when one breaks, you fall back to the spare, and get given a new one to be the new spare (or could get the broken one repaired). Using the spare is completely safe.

    If you don’t have a spare, your sugars would go up over several hours, but you’d have a day or two to get yourself to a hospital, and potentially several days after that for someone else to find you and get you there, so it’s not safe, but also not something you’d die from if you had any awareness that there was a problem.

    If you’ve got an incorrectly-repaired pump, you could have it fail to give you enough insulin, and end up with higher sugars, notice the higher sugars, and then switch to the spare. That’d be inconvenient, but not a big deal. However, you could also have it dump its entire cartridge into you at once, and have your sugars plummet faster than you can eat. If you don’t have someone nearby, you could be dead in a couple of hours, or much less if you were, for example, driving. That’s much more dangerous than having no insulin at all.

    Prosthetic legs don’t have a failure mode that kills you, so a bad repair can’t make them worse than not having them at all, but insulin pumps do, so a bad repair could.



  • If you give a chip more voltage, its transistors will switch faster, but they’ll degrade faster. Ideally, you want just barely enough voltage that everything’s reliably finished switching and all signals have propagated before it’s time for the next clock cycle, as that makes everything work and last as long as possible. When the degradation happens, at first it means things need more voltage to reach the same speed, and then they totally stop working. A little degradation over time is normal, but it’s not unreasonable to hope that it’ll take ten or twenty years to build up enough that a chip stops working at its default voltage.

    The microcode bug they’ve identified and are fixing applies too much voltage to part of the chip under specific circumstances, so if an individual chip hasn’t experienced those circumstances very often, it could well have built up some degradation, but not enough that it’s stopped working reliably yet. That could range from having burned through a couple of days of lifetime, which won’t get noticed, to having a chip that’s in the condition you’d expect it to be in if it was twenty years old, which still could pass tests, but might keel over and die at any moment.

    If they’re not doing a mass recall, and can’t come up with a test that says how affected an individual CPU has been without it needing to be so damaged that it’s no longer reliable, then they’re betting that most people’s chips aren’t damaged enough to die until after the warranty expires. There’s still a big difference between the three years of their warranty and the ten to twenty years that people expect a CPU to function for, and customers whose parts die after thirty-seven months will lose out compared to what they thought they were buying.





  • That would be annoying for people who work on files with a double extension for legitimate reasons, e.g. .tar.gz, and (this can’t be stressed strongly enough) Windows users do not pay attention to warning popups, so it wouldn’t actually help. Despite it being eighteen years since Windows Vista released, and therefore vanishingly unlikely that any given piece of software was written assuming that Windows didn’t have a permissions system, it’s still most people’s first troubleshooting step to try and run things as admin. You still get loads of people (including ones who should know better, e.g. ones who also use Linux and would never log in as root) who disable UAC as one of the first things they do when setting up a Windows install, and end up running everything as the equivalent of root just to suppress the mildly annoying popup when something asks for elevated permissions.

    So, your proposed popup:

    • would be annoying, including for legitimate uses
    • wouldn’t help, as anyone who already ignores the SmartScreen popup that shows up when running a dodgy application will ignore the new popup, too
    • would be disabled by huge swathes of users anyway


  • It’s a silly flag to use, as it only works when running 32-bit Windows applications on 64-bit Windows, and if you’re compiling from source, you should also have the option to just build a 64-bit binary in the first place. It made a degree of sense years ago, when people actually used 32-bit Windows sometimes (which was usually just down to OEMs installing the wrong version on prebuilt PCs that could have supported 64-bit), if you really wanted to only have one binary, or you consumed a precompiled third-party library and had to match its architecture.





  • I think you’ve misunderstood my complaint. I know how you go about composing things in a Unix shell. Within your post, you’ve mentioned several distinct languages:

    • sh (I don’t see any Bash-specific extensions here)
    • Perl-compatible regular expressions, via grep -P
    • printf expressions
    • GNU ps’s format expressions
    • awk

    That’s quite a lot of languages for such a simple task, and there’s nothing forcing any consistency between them. Indeed, awk specifically avoids being like sh because it wants to be good at the things you use awk for. I don’t personally consider something to be doing its job well if it’s going to be wildly different from the things it’s supposed to be used with, though (which is where the disagreement comes from - the people designing Unix thought of that difference as a benefit).

    It’s important to remember that the people designing Unix were very clever and were designing it for other very clever people, but also under conditions where, if they hit a confusing awk script, they could just yell “Brian!” and have the inventor of awk walk over to their desk and explain it. For a regular person, though, it’s a lot of stuff to have in their head at once, and it’s not particularly easy to discover or learn about in the first place, especially if you’re just reading a script someone else has written that uses utilities you’ve not encountered before. If a general-purpose programming language had completely different conventions in different parts of its standard library, it’d be rightly criticised for it, and the Unix shell experience is close enough to that for the same criticism to apply.

    So, I wouldn’t consider the various tools you used that don’t behave like the others to be doing their job well, as I’d say consistency with the tools they’re meant to be combined with is a reasonable part of doing a job well.

    On the other hand, PowerShell can do all of this without needing to call into any external tools, while using a single language designed to be consistent with itself. You’ve actually managed to land on what I’d consider a pretty bad case for PowerShell, as instead of using an obvious command like Get-ComputerInfo, you need:

    (Get-WmiObject Win32_OperatingSystem).FreePhysicalMemory / 1024
    

    Even so, you can tell at a glance that it’s getting the operating system object, accessing its free physical memory, and dividing the number by 1024.
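
    On newer PowerShell versions Get-WmiObject isn’t available at all, so the CIM cmdlet is what you’d reach for instead - a minimal sketch of the same query, assuming Win32_OperatingSystem is still the class you want:

    # Win32_OperatingSystem reports FreePhysicalMemory in KB, so dividing by 1024 gives MB
    (Get-CimInstance Win32_OperatingSystem).FreePhysicalMemory / 1024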

    To get the process ID with the largest working set, you’d use something like

    (Get-Process | Sort-Object WorkingSet | Select-Object -Last 1).Id
    # or
    (Get-Process | Sort-Object WorkingSet)[-1].Id
    

    I’m assuming either your ps is different to mine, or you’ve got a typo, as mine gives the parent process ID as the second column, not the process’ own ID, which is a good demonstration of the benefits of structured data in a shell - you don’t need sed/awk/grep incantations to extract the data you need, and don’t need to learn the right output flag for each program to get JSON output and pipe it to jq.
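
    For instance (this particular pipeline is just an illustration, not something from your post), pulling a few columns out of the process list needs no text munging at all:

    # Top five processes by working set, returned as objects with named properties rather than columns of text
    Get-Process | Sort-Object WorkingSet -Descending | Select-Object -First 5 -Property Name, Id, WorkingSet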

    There’s not a PowerShell builtin that does the same job as watch, but watch isn’t a standard POSIX tool anyway, so I’m not going to consider it cheating if I don’t bother implementing it for this post.

    So overall, there’s still the same concept of composing something to do a specific task out of parts, and the way you need to think about it isn’t wildly different, but:

    • PowerShell sees its jurisdiction as being much larger than Bash does, so a lot of ancillary tools are unnecessary as they’re part of the one thing it aims to do well.
    • Because PowerShell is one thing, it’s got a pretty consistent design between different functions, so each one’s better at its job as you don’t need to know as much about it the first time you see it in order to make it work.
    • The verbosity of naming means you can understand what something is at first glance, and can discover it easily if you need it but don’t know what it’s called - Select-String does what it says on the tin, whereas grep only does what it says on the tin if you already know it stands for global regular expression print (see the sketch just after this list).
    • Structured data is easier to move between commands and extract information from.
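
    As a quick illustration of that discoverability point (the log file name here is just a made-up example):

    # Select-String's verb-noun name tells you what it does before you've ever run it: select lines matching a pattern
    # server.log is a hypothetical file name
    Select-String -Path .\server.log -Pattern 'error|warning'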

    Specifically regarding the Unix philosophy, it’s really just the first two bullet points that are relevant - a different definition of ‘thing’ is used, and consistency is a part of doing a job well.


  • PowerShell isn’t perfect, but I like it a lot more than anything that takes sh as a major influence or as something to maintain backwards compatibility with. I don’t think the Unix philosophy of having lots of small tools that each do one thing and do it well, which you compose together, has ever really been achieved, as I think being consistent with the other tools you use at the same time should be part of doing your thing well, and examples of inconsistency, like sed, grep and perl all having different regular expression syntax, are easy to find. I also like that PowerShell is so verbose, as it makes it much easier to read someone else’s script without knowing much PowerShell, and it doesn’t end up getting in the way of actually writing PowerShell because the autocomplete is really good. I like having a type system and structured data, too.

    Some of these things are brought to a unixier shell by nushell, but I’m not convinced it’ll take off. Even if people use it, it’ll be a long while before you can Google a problem and find a nushell snippet among the answers, whereas for any Windows problem, you’ll typically get a GUI solution and a PowerShell solution, and only a maniac would give a CMD solution.




  • There are other rarely-used C++-like languages that fit your criteria, and they basically all aim to eliminate the kind of thing I was talking about. If someone was used to one of those and tried picking up C++ for the first time, they’d probably end up with working, but unnecessarily slow C++, having assumed the compiler would do a bunch of things for them that it actually wouldn’t.

    The popular low-level systems programming languages that aren’t C++ are C and Rust. Neither is object-oriented. C programmers forced to use C++ tend to basically write C with a smattering of features that make it not compile with a C compiler, producing a horror show that brings out the worst of both languages and looks nothing like the C++ a C++ programmer would write, and then they write a blog post about how terrible C++ is because, when they tried using it like C, it wasn’t as good at being C as C was. Rust programmers generally have past experience with C++, so they tend to know how to use it properly, even if they hate the experience.


  • I’d say this is pretty dependent on the language. For example, with C++, you need to micromanage (or at least benefit from micromanaging) a lot of things that you can get away without knowing about at all in other languages. That stuff takes time to pick up if you’re self-teaching, as you can write something that looks like it works without knowing it’s half as fast as it could be because you aren’t making use of move semantics, and if a colleague is teaching you, then that’s time they’re not spending directly doing their own work. On the other hand, someone with TypeScript experience could write pretty decent JavaScript from the get-go.