• 1 Post
  • 41 Comments
Joined 1 year ago
Cake day: August 7th, 2023

  • But nothing is forcing you to check exceptions in most languages, right?

    While not checking for exceptions and .unwrap() are pretty much the same in effect, the first is what you get by not doing anything extra, while the latter is a choice that has to be made explicitly. I think that is what makes the difference, and it's similar to why a nullable-enabled project in C# is preferable to one that is not. You HAVE to check for null, or you can CHOOSE to assume it isn't null by using the value directly. To me it matters whether we can accidentally forget about a possible exception, or whether we have to consciously choose to ignore it. Problems dealt with early, at compile time, are generally better than those that happen at runtime.
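    A minimal Rust sketch of that difference (the parse example is my own, not from the discussion): the error path stays invisible until you either opt out of it explicitly with .unwrap() or handle it.

    ```rust
    fn main() {
        let input = "42a";

        // Opting out explicitly: this would compile, but the potential panic
        // is a visible, deliberate choice written into the code.
        // let n: i32 = input.parse().unwrap();

        // Handling it: the compiler makes you acknowledge the error path
        // before you can touch the value at all.
        match input.parse::<i32>() {
            Ok(n) => println!("got {n}"),
            Err(e) => eprintln!("not a number: {e}"),
        }
    }
    ```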


  • It can be pretty convenient to throw an error and be done with it. For some languages like Python, that is pretty much the preferred way to deal with things.

    But the entire point of Rust and Result is, as you say, to handle the places where things go wrong. To force you to make a choice about what should happen on the error path. It both forces you to see problems you may not be aware of, and to handle issues in ways that may not stop the entire execution of your function. And after handling the Result in those cases, you know that beyond that point you are always in a good state. Like most things in Rust, that may involve making decisions about using Result and Option in your structs/functions, and designing your program in ways that force correct use… but that is a now-problem instead of a later-problem that shows up at runtime.
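    A rough sketch of what I mean by "beyond that point you are in a good state" (the config-loading names here are made up for illustration, not from the article):

    ```rust
    use std::fs;

    // Hypothetical helper: either we get a usable config string or we fall
    // back to a default. The caller never sees a half-handled error.
    fn load_config(path: &str) -> String {
        match fs::read_to_string(path) {
            Ok(contents) => contents,
            Err(e) => {
                eprintln!("could not read {path}: {e}, using defaults");
                String::from("max_depth = 3")
            }
        }
    }

    fn main() {
        // Past this line the program is always in a known-good state.
        let config = load_config("app.toml");
        println!("running with: {config}");
    }
    ```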




  • I largely agree with this, nodding along to many of the pitfalls presented. Except for number 2's "good" refactor. I hope I won't sound too harsh/picky about an example that perhaps skipped renaming to keep the focus on the other parts, but I wanted to mention it.

    While I don’t use JavaScript and may be missing some of the norms and context of the language, creating lambda functions (I don’t know the JS term) and then hardcoding them into a function is barely an improvement. It’s fine because they work well with map and filter, but it didn’t address the vague naming. Renaming is refactoring too!

    isAdult is a simple function with a clear name, but formatUser and processUsers are surprisingly vague. formatUser gives only adult FormattedUsers, and that should probably be highlighted in the name of formatUser now that it is a reusable function. To me, it seems ripe for mistaken use, given that it is the filter that, at a glance, handles removing non-adult users before the formatting, while formatUser doesn’t appear to expect only adult users from its naming or even its use! Ideally, formatUser should have checked the age on its own and set isAdult true/false accordingly, instead of assuming it will only ever be used on adult Users.

    Likewise, the main function is called processUsers but could easily have been something more descriptive like getAdultFormattedUsers or similar, depending on naming standards in JS and the context it is used in. It may make more sense in the actual context, but in the example a FormattedUser doesn’t have to be an adult, so a function that processes users should make clear it only actually produces adult formatted users.
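    Since I don’t write JS, here is roughly the shape I mean rendered in Rust instead; all the names and fields are my own guesses at the article’s example, not its actual code.

    ```rust
    // Illustrative stand-ins for the article's User/FormattedUser types.
    struct User { name: String, age: u32 }
    struct FormattedUser { display_name: String, is_adult: bool }

    fn is_adult(user: &User) -> bool {
        user.age >= 18
    }

    // Checks the age itself instead of assuming it only ever sees adults.
    fn format_user(user: &User) -> FormattedUser {
        FormattedUser {
            display_name: user.name.to_uppercase(),
            is_adult: is_adult(user),
        }
    }

    // The name says what actually comes out: only adult, formatted users.
    fn get_adult_formatted_users(users: &[User]) -> Vec<FormattedUser> {
        users.iter().filter(|u| is_adult(u)).map(format_user).collect()
    }

    fn main() {
        let users = vec![
            User { name: "alice".into(), age: 30 },
            User { name: "bob".into(), age: 12 },
        ];
        for fu in get_adult_formatted_users(&users) {
            println!("{} (adult: {})", fu.display_name, fu.is_adult);
        }
    }
    ```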





  • The difference is, with a builder pattern you are sure someone set the required fields.

    For example, in actix-web you create an HttpResponse, but you don’t actually have that struct until you finish the object by setting the body() or by using finish() to get an empty body. Before that point you have a builder.

    There is nothing forcing you to set the input_directory before trying to use it. Depending on what you need, that is no problem. Likewise, you default max_depth to a value before the user sets one, which is also fine in itself. But if the expectation is that the user should always provide their own values, then a .configure(max_depth, path) would make sense to finish off the builder.

    It might not matter much here, but if what you need to set were more expensive structs, then defaulting to something might not be a good idea. You also wouldn’t need to have Option<PathBuf> and check it every time you use it, since you know the user provided it. But that only applies if it is required.

    Lastly, builders make a lot of sense when there is a lot to provide, which would make creating the struct in a single function/line very complicated.

    Example in non-rust: https://stackoverflow.com/questions/328496/when-would-you-use-the-builder-pattern
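    And a rough Rust sketch of the kind of “finishing” step I mean; the Config/ConfigBuilder names and fields are just illustrative guesses, not the actual code being discussed. The required values only go through the method that produces the real struct, so you can never end up holding a half-configured Config.

    ```rust
    use std::path::PathBuf;

    // Illustrative struct, not the code under discussion.
    struct Config {
        input_directory: PathBuf,
        max_depth: u32,
        follow_symlinks: bool,
    }

    #[derive(Default)]
    struct ConfigBuilder {
        follow_symlinks: bool,
    }

    impl ConfigBuilder {
        // Optional settings stay chainable on the builder.
        fn follow_symlinks(mut self, yes: bool) -> Self {
            self.follow_symlinks = yes;
            self
        }

        // Required values are only accepted here, and only this method hands
        // out a Config, so a finished Config always has them set.
        fn configure(self, max_depth: u32, input_directory: PathBuf) -> Config {
            Config {
                input_directory,
                max_depth,
                follow_symlinks: self.follow_symlinks,
            }
        }
    }

    fn main() {
        let config = ConfigBuilder::default()
            .follow_symlinks(true)
            .configure(3, PathBuf::from("./input"));

        println!(
            "reading {:?} to depth {} (symlinks: {})",
            config.input_directory, config.max_depth, config.follow_symlinks
        );
    }
    ```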





  • I kinda get where he is coming from though. AI is being crammed into everything, especially into things it is not currently suited for.

    After learning about machine learning, you kind of realize that unlike “regular programs”, ML gives you “roughly what you want” answers. Approximations, really. This is all fine and good for generating images, for example, because minor details being off from what you wanted probably isn’t too bad. A chat bot itself isn’t wrong here either, because there are many ways to say the same thing. The important thing is that there is a definite step after that where you evaluate the result. In simpler ML you can even figure out the specifics of the process, but for the most part we evaluate what the LLM said, or whether the image matches our expectations. But we can’t control or constrain the output to exactly our needs, because our restrictions are largely just input into an almost-finished approximation engine.

    The problem is that companies take these approximation engines, put them in their products and treat their output as fact. Like AI chatbots doing customer support and making things up, like the user who was told about rules that didn’t exist for an airline, or the search engines that parrot jokes or harmful advice. Sure, you and I might realize that these things come from a machine that doesn’t actually think about its answers, but others don’t. And throwing a “*this might be wrong because it’s AI” on it is not an acceptable waiver of accountability.

    Despite this, I use ChatGPT and Gemini a lot to help me program; they get a lot of things wrong but also do great. They’re great tools, exactly because I step in after the approximation step, review and decide. I’m aware of the limits. But putting these things in front of “users” without a review step means you are advertising that you are either unaware of this flaw, or that you did the cost-benefit analysis and decided that, if nothing else, it’ll generate interest during the hype.

    There is huge potential, but throwing AI into situations where facts are needed, when it’s only making rough guesses, is the wrong way to go about it.






  • Why wait and hope for C++ to get to where modern languages are now? I know there’s value in the accumulated experience in C++ that, if adapted, would make it stronger, but I can only see an evolution of C++ having to force out a lot of outdated stuff to even get started on being more suitable.

    But the language is just not comfortable to me: the sheer amount of things that create undefined behavior, the god-awful header files which I hate with a passion, the tough error messages and so on. I also ran into a fun collision between C++ in Visual Studio and using it with CMake in CLion.

    I’ve just started looking at Rust for fun, and apart from not yet understanding all the error messages around the bounds stuff, figuring out which type of string I should use or pass, and the slow climb up the skill curve, it’s pretty nice. Installing stuff is as easy as copy-pasting the name into the Cargo file!

    Rust is just the prospective replacement for C++ though; people act like the White House said that C++ should be replaced by Rust right now. But they just recommend it and other languages. C# will do for a lot of people who don’t need the performance and control that the aforementioned languages target. Python targets a whole different use case, but is often combined with the faster ones.

    C++ will live a long time, and if its popularity dies down it will surely be very profitable to be a developer on the critical systems that still use it many years from now. I just don’t think an evolution of C++ is going to bring what the world needs, particularly because of the large number of existing memory-related security vulnerabilities. If things were fine as they are now, this recommendation would not have been made to begin with.


  • Just learning Rust for fun, but I decided I wanted to make a simple website. I don’t like web stuff that much, but I had seen htmx, so I gave that a shot. Found the popular actix for the server side, and set out to make a simple blog.

    Making a page is simple, and using htmx is also simple. Setting out to create a blog that lives entirely in a single evolving page? Not so much. Either you don’t get the essential back and forward navigation, or you add it, but then a page refresh will call just the partial endpoint and screw things up. There are some quite nice workarounds, but the end result is that sometimes going back will leave me on a blank page in one step.

    I’m probably going to settle for each blog entry being a separate page if I make the site public. Or just let the small flaws stay, because I hate how slow sites are these days, so loading literally only the text/html that’s supposed to change is very cool.

    Next steps are going to be removing the chance of path traversal and reading literally any file on disk by modifying URLs…, some Markdown-to-HTML crate, and seeing how image loading works. If I ever get around to any of it.
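    For the refresh problem, the workaround I’ve been poking at looks roughly like the sketch below: check htmx’s HX-Request header and return either the bare fragment or the full page. The handler and helper names are made up for illustration, not my actual code.

    ```rust
    use actix_web::{get, web, App, HttpRequest, HttpResponse, HttpServer, Responder};

    // Illustrative stand-in for rendering one blog post fragment.
    fn render_post(slug: &str) -> String {
        format!("<article><h1>{slug}</h1><p>post body…</p></article>")
    }

    // Illustrative stand-in for wrapping a fragment in the full page layout.
    fn wrap_in_layout(fragment: &str) -> String {
        format!("<!DOCTYPE html><html><body><main id=\"content\">{fragment}</main></body></html>")
    }

    #[get("/post/{slug}")]
    async fn post(req: HttpRequest, slug: web::Path<String>) -> impl Responder {
        let slug = slug.into_inner();
        let fragment = render_post(&slug);

        // htmx adds an HX-Request header to the requests it makes itself.
        // A refresh or a direct link doesn't have it, so those get the
        // fragment wrapped in the full layout instead of a bare partial.
        let body = if req.headers().contains_key("HX-Request") {
            fragment
        } else {
            wrap_in_layout(&fragment)
        };

        HttpResponse::Ok().content_type("text/html").body(body)
    }

    #[actix_web::main]
    async fn main() -> std::io::Result<()> {
        HttpServer::new(|| App::new().service(post))
            .bind(("127.0.0.1", 8080))?
            .run()
            .await
    }
    ```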


  • I made do with my IDE, even after getting a developer job. Outside of shenanigans involving a committed password, and the occasional empty commit to trigger a build job on GitHub without requiring a new review to be approved, I still don’t use the command line a lot.

    But it’s true: if you managed to commit and push, you are OK. Even the IDE will make fixing most merges simple.