• 1 Post
  • 54 Comments
Joined 1 year ago
Cake day: July 2nd, 2023



  • This really reads to me like the perspective of a business major whose only concept of productivity is what looks good on paper. He seems to think it’s a desirable goal for EVERY project to be completed with 0 latency. That’s absurd. If every single incoming requirement is a “top priority, this needs to go out as soon as possible,” that’s a management failure. They either need to ACTUALLY prioritize requirements properly, or they need to bring in more people.

    For the Chuck and Patty example, he describes Chuck finishing a task and sending it to Patty for review, and Patty not picking it up because she’s “busy.” Busy with what? If this task is the higher priority, why is she not switching to it as soon as it’s ready? Does either Chuck or Patty not know that this task is the current highest priority? Sounds like a management failure. Is there not a system in place (whether automatic or not) for notifying people when high-priority tasks are assigned? Also sounds like a management failure. Is Patty just incapable of switching tasks within 30-60 minutes? She needs to work on her organization skills, or management isn’t providing sufficient tooling for multitasking.

    When a top-priority “this needs to go out ASAP” task is in play on my team, I’m either working on it, or I know it’s coming my way soon, and who it’s coming from, because my Project Lead has already coordinated that among all of us. Because that’s her job.

    From the article…

    Project A should take around 2 weeks

    Project B should take around 2 weeks

    That’s 4 weeks to complete them both

    But only if they’re done in sequence!

    If you try to do them at the same time, with the same team, don’t be surprised if it ends up taking 6 weeks!

    Nonsense. If these are both top priorities, and the team has proper leadership (and the 2-week estimates are actually accurate), 4 weeks is entirely achievable. If these are not top priorities, and the team has other work as well, then yeah, no shit, it might be 6 weeks. You can’t just ignore the 2 weeks from Project C if it’s prioritized similarly to A and B. If A and B NEED to go out in 4 weeks, then prioritize them higher, and coordinate your team to make that happen.





  • As I understand it (and assuming you know what asymmetric keys are)…

    It’s about using public/private key pairs and swapping them in wherever you would use a password. Except, passwords are things users can actually remember in their head, and are short enough to be typed in to a UI. Asymmetric keys are neither of these things, so trying to actually implement passkeys means solving this newly-created problem of “how the hell do users manage them” and the tech world seems to be collectively failing to realize that the benefit isn’t worth the cost. That last bit is subjective opinion, of course, but I’ve yet to see any end-users actually be enthusiastic about passkeys.
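
    To make that concrete, here’s a rough sketch of the core idea in C# (assuming .NET 6+; this is just the bare concept, not the actual WebAuthn/FIDO2 protocol, and the names are mine): the site stores only your public key, and “logging in” is proving you hold the matching private key by signing a random challenge.

    using System;
    using System.Security.Cryptography;

    // The authenticator side: generates and holds the private key.
    using var userKey = ECDsa.Create(ECCurve.NamedCurves.nistP256);

    // Registration: the site stores ONLY the public key.
    byte[] storedPublicKey = userKey.ExportSubjectPublicKeyInfo();

    // Login: the site sends a random challenge...
    byte[] challenge = RandomNumberGenerator.GetBytes(32);

    // ...the authenticator signs it with the private key, which never leaves it...
    byte[] signature = userKey.SignData(challenge, HashAlgorithmName.SHA256);

    // ...and the site verifies the signature against the stored public key.
    using var verifier = ECDsa.Create();
    verifier.ImportSubjectPublicKeyInfo(storedPublicKey, out _);
    bool ok = verifier.VerifyData(challenge, signature, HashAlgorithmName.SHA256);

    Console.WriteLine(ok); // True

    The security win is that the server never stores a shared secret, but the private key is exactly the thing users now have to manage somehow.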

    If that’s still flying over your head, there’s a direct real-world analog that you’re probably already familiar with, but that I haven’t seen mentioned yet: chip-enabled credit cards. Chip cards still use symmetric cryptography, instead of asymmetric, but the “proper” implementation of passkeys, in my mind, would be basically chip cards. The card keeps your public/private key pair on it, with embedded circuitry that allows it to perform operations with the private key without ever having to expose it. Of course, the problem would be the same one that quite nearly killed chip cards in the US: everyone who wants to support or use passkeys would then need a passkey reader to plug into when they want to log in somewhere. We could probably make a lot of headway on this by just using USB, but that would make passkey cards more complicated, more expensive, and more prone to being damaged over time. Plus, that doesn’t really help people wanting to log in to shit with their phones.


  • Automated certificate lifecycle management is going to be the norm for businesses moving forward.

    This seems counter-intuitive to the goal of “improving internet security”. Automation is a double-edged sword. Convenient, sure, but also an attack vector, one where malicious activity is less likely to be noticed, because actual people aren’t involved in the process anymore.

    We’ve got ample evidence of this kinda thing with passwords: increasing complexity requirements and tightening lifetime requirements improve security, but only up to a point. Push it too far, and it actually ends up DECREASING security, because it encourages bad practices to get around the increased burden of compliance.




  • It’s the capability of a program to “reflect” upon itself, i.e. to inspect and understand its own code.

    As an example, in C# you can write a class…

    public class MyClass
    {
        public void MyMethod()
        {
            // ... whatever the method does ...
        }
    }
    

    …and you can create an instance of it, and use it, like this…

    var myClass = new MyClass();
    myClass.MyMethod();
    

    Simple enough, nothing we haven’t all seen before.

    But you can do the same thing with reflection, as such…

    // Assumes MyClass is declared outside of any namespace; otherwise the
    // namespace-qualified name would be needed here.
    var type = System.Reflection.Assembly.GetExecutingAssembly()
        .GetType("MyClass");

    var constructor = type.GetConstructor(Array.Empty<Type>());

    var instance = constructor.Invoke(Array.Empty<object>());

    var method = type.GetMethod("MyMethod");

    // "delegate" is a reserved keyword in C#, so the variable needs a different name.
    var myMethodDelegate = method.CreateDelegate(typeof(Action), instance);

    myMethodDelegate.DynamicInvoke(Array.Empty<object>());
    

    Obnoxious and verbose and tossing basically all type safety out the window, but it does enable some pretty crazy interesting things. Like self-discovery and dynamic loading of plugins, or self-configuration of apps. Also often useful when messing with generics. I could dig up some practical use-cases, if you’re curious.
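
    For instance, here’s a rough sketch of the plugin case (IPlugin, HelloPlugin, and PluginLoader are made-up names, purely for illustration): scan the running assembly for anything that implements a plugin interface and instantiate it, without the loading code ever referencing the concrete types.

    using System;
    using System.Linq;
    using System.Reflection;

    public interface IPlugin
    {
        void Run();
    }

    public class HelloPlugin : IPlugin
    {
        public void Run() => Console.WriteLine("Hello from a plugin!");
    }

    public static class PluginLoader
    {
        public static void Main()
        {
            // Find every concrete type in this assembly that implements IPlugin...
            var plugins = Assembly.GetExecutingAssembly()
                .GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsInterface && !t.IsAbstract)
                .Select(t => (IPlugin)Activator.CreateInstance(t));

            // ...and run them all, with no compile-time reference to HelloPlugin anywhere.
            foreach (var plugin in plugins)
            {
                plugin.Run();
            }
        }
    }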


  • I think the big reasons for most people boil down to one or both of two things:

    A) People having 0 trust in Google. I.e., people do not believe that paying for their services will exempt them from being exploited, so what’s the point?

    B) YouTube’s treatment of its content creators, who are what people actually come to YouTube for. Advertisers and copyright holders (and copyright trolls) get first-class treatment, while the majority of content creators get little to no support for anything.


  • #4 for me.

    Proper HTTP Status code for semantic identification. Duplicating that in the response body would be silly.

    User-friendly “message” value for the lazy, who just wanna toss that up to the user. Also, ideally, this would be what a dev looks at in logs for troubleshooting.

    Tightly-controlled unique identifier “code” for the error, allowing consumers to build their own contextual error handling or reporting on top of this system. It also allows more-detailed types of errors to be identified and given specific handling and recovery logic, beyond just the status code. Like, sure, there’s probably not gonna be multiple sub-types of 403 error, but there may be a bunch of different useful sub-types for a 400 on a form submission.
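
    Concretely, the shape I’m describing would be something like this (the field and code names here are just illustrative):

    // The HTTP status code stays on the status line; the body carries the rest.
    public record ApiError(
        string Code,      // tightly-controlled machine identifier, e.g. "FORM_EMAIL_MISSING"
        string Message);  // human-friendly text, safe to toss up to the user or into logs

    // A 400 on a form submission might then serialize as:
    // { "code": "FORM_EMAIL_MISSING", "message": "The email field is required." }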







  • I think it’s a fallacy to say that you can or should build an application layer that’s completely DBMS-agnostic. Even if you are very careful to only write SQL queries with features that are part of the official SQL standard, you’re still coupled to your particular DBMS’s internal implementations for query compilation, planning, optimization, etc. At enterprise scale, there are still going to be plenty of queries that suddenly perform like crap after a DBMS swap.

    In my mind, standardization for things like ODBC or Hibernate or Entity Framework or whatever else isn’t meant to abstract away the underlying DBMS; it’s meant to promote compatibility.

    Not to mention that you’re tying your own hands by locking yourself out of non-standard DBMS features that could be REALLY useful to you, if you have the right use-cases. JSON generation and indexing is the big one that comes to mind. Also, geospatial data tables.
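
    As a quick sketch of what I mean by that (this assumes Oracle 12.2+ and the Oracle.ManagedDataAccess driver; the connection string, table, and column names are made up), letting the DBMS build the JSON itself:

    using System;
    using Oracle.ManagedDataAccess.Client;

    using var connection = new OracleConnection("<your connection string>");
    connection.Open();

    // Oracle-specific JSON_OBJECT: the JSON is assembled inside the DBMS,
    // instead of shipping rows to .NET and serializing them there.
    using var command = new OracleCommand(
        "SELECT JSON_OBJECT('id' VALUE employee_id, 'name' VALUE last_name) FROM employees",
        connection);

    using var reader = command.ExecuteReader();
    while (reader.Read())
    {
        Console.WriteLine(reader.GetString(0)); // e.g. {"id":101,"name":"Smith"}
    }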

    For context, my professional work for the past 6 years has been an Oracle/.NET/browser application, and we are HEAVILY invested in Oracle. Most notably, we do a LOT of ETL, and that all runs exclusively in the DBMS itself, in PL/SQL procedures orchestrated by the Oracle job scheduler. Attempting to do this kind of data manipulation by round-tripping it into .NET code would make things significantly worse.

    So, my opinion could definitely be a result of what’s been normalized for me, in my day job. But I’ve also had a few other collaborative side projects where I think the “don’t try and abstract away the DBMS” advice holds true.


  • Generally speaking, fault protection schemes need only account for one fault at a time, unless you’re a really large business, or some other entity with extra-stringent data protection requirements.

    RAID protects against drive failure faults. Backups protect against drive failure faults as well, but also things like accidental deletions or overwrites of data.

    In order for RAID on backups to make sense, when you already have RAID on your main storage, you’d have to consider drive failures and other data loss to be likely to occur simultaneously. I.e., RAID on your backups only protects you from a drive failure occurring WHILE you’re trying to restore a backup. Or maybe more generally, WHILE that backup is in use, say, if you have a legal requirement that you must keep a history of all your data for X years or something (I would argue data like this shouldn’t be classified as backups, though).