• 0 Posts
  • 58 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • I used to work summers as an apprentice electrician. The amount of crazy wiring I saw in old houses was (heh) shocking. Sometimes it was just that it was old. Real old houses sometimes just had bare wire wrapped in silk. … And a few decades later that silk was frayed and crumbling in the walls and needed replacing.

    My current house was wired at a time when copper was more precious, so it was wired up and down through the house, with circuits arranged by proximity, not necessarily logic. When a certain circuit in my house blows the breaker, my TV, PC and one wall of the master bedroom all lose power. The TV and PC are not in the same room either.



  • Have you ever been in an old house? Not old, like, on the Historic Register, well-preserved, rich bastard “old house”. Just a house that has been around awhile. A place that has seen a lot of living.

    You’ll find light switches that don’t connect to anything; artwork hiding holes in the walls; sometimes walls have been added or removed and the floors no longer match.

    Any construction that gets used must change as needs change. Be it a house or a city or a program, these evolutions of need inevitably introduce complexity and flaws that are large enough to annoy, but small enough to ignore. Over time those issues accumulate until they reach a crisis point. Houses get remodeled or torn down, cities build or remove highways, and programs get refactored or replaced.

    You can and should design for change, within reason, because all successful programs will need to change in ways you cannot predict. But the fact that a system eventually becomes complex and flawed is not due to engineering failures - it is inherent in the nature of changing systems.




  • Oh, for sure. I focused on ML in college. My first job was actually coding self-driving vehicles for open-pit copper mining operations! (I taught gigantic earth tillers to execute 3-point turns.)

    I’m not in that space anymore, but I do get how LLMs work. Philosophically, I’m inclined to believe that the statistical model encoded in an LLM does model a sort of intelligence. Certainly not consciousness - LLMs don’t have any mechanism I’d accept as agency or any sort of internal “mind” state. But I also think that the common description of “supercharged autocorrect” is overly reductive. It’s useful as a rhetorical counter to the hype cycle, but just as misleading in its own way.

    I’ve been playing with chatbots of varying complexity since the 1990s. LLMs are frankly a quantum leap forward. Even GPT-2 was pretty much useless compared to modern models.

    All that said… All these models are trained on the best - but mostly worst - data the world has to offer… And if you average a handful of textbooks with an internet-full of self-confident blowhards (like me) - it’s not too surprising that today’s LLMs are all… kinda mid compared to an actual human.

    But if you compare the performance of an LLM to the state of the art in natural language comprehension and response… It’s not even close. Going from a suite of single-focus programs, each using keyword recognition and word stem-based parsing to guess what the user wants (Try asking Alexa to “Play ‘Records’ by Weezer” sometime - it can’t because of the keyword collision), to a single program that can respond intelligibly to pretty much any statement, with a limited - but nonzero - chance of getting things right…

    This tech is raw and not really production ready, but I’m using a few LLMs in different contexts as assistants… And they work great.

    Even though LLMs are not a good replacement for actual human skill - they’re fucking awesome. 😅


  • What I think is amazing about LLMs is that they are smart enough to be tricked. You can’t talk your way around a password prompt. You either know the password or you don’t.

    But LLMs have enough of something intelligence-like that a moderately clever human can talk them into doing pretty much anything.

    That’s a wild advancement in artificial intelligence. Something that a human can trick, with nothing more than natural language!

    Now… Whether you ought to hand control of your platform over to a mathematical average of internet dialog… That’s another question.


  • Lots of little quality-of-life things. For instance, in Kotlin, types can be marked nullable or not. If you pass a potentially null value into a non-nullable parameter, the compiler raises an error.

    But if you’ve already checked earlier in the scope that the value isn’t null, the compiler remembers it’s guaranteed non-null and won’t blow up.

    Same for other typechecks. Once you have asserted that a value is a given type, you don’t need to cast it everywhere else. The compiler will remember.
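
    A tiny sketch of what I mean (the function names are made up, but the null check and the `is` check are standard Kotlin smart-cast behavior):

    ```kotlin
    // Hypothetical example of Kotlin null safety and smart casts.
    fun greet(name: String) = println("Hello, $name")

    fun handle(input: String?) {
        // greet(input)  // compile error: String? can't go where String is expected

        if (input != null) {
            greet(input)          // OK: smart-cast to String inside this branch
            println(input.length) // no extra null checks or casts needed
        }
    }

    fun describe(value: Any) {
        if (value is String) {
            // After the type check, the compiler treats `value` as a String here.
            println(value.uppercase())
        }
    }
    ```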




  • Argh. I hate that argument.

    Yes - “Rewriting history” is a Bad Thing - but I’d argue that only applies on ‘main’ (or other shared branches). You should (IMHO) absolutely rewrite your local history pre-push, for exactly the reasons you state.

    If you rewrite main’s history and force-push your changes, everybody else is gonna have conflicts. Also - history is important for certain debugging and investigation. Don’t be that guy.

    Before you push, though… rebasing your work to be easily digestible, with a single(ish) focus per commit, is so helpful.

    • Review is easier since concerns aren’t mixed
    • If a commit needs to be reverted, it limits the collateral damage
    • History is easier to follow because the commits tell a story

    I use a stacked commit tool to help automate rebasing on upstream commits, but you can do it all with git pretty easily.
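
    For the plain-git version, the flow before pushing looks roughly like this (branch names are just placeholders):

    ```
    # replay my local commits on top of the latest upstream
    git fetch origin
    git rebase origin/main

    # tidy up local history before pushing: squash, reorder, reword commits
    git rebase -i origin/main

    # or, if fixes were tagged along the way with `git commit --fixup=<sha>`:
    git rebase -i --autosquash origin/main
    ```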

    Anyway. Good on you; Keep the faith; etc etc. :)







  • Classes are Data plus the code required to modify that Data. The idea is to encapsulate data modifications into one thing (a Class) that knows how to modify all the Data as a single unit. This lets us write some code to describe, say, a Scrollbar widget. The Class for the widget combines all the Data for a Scrollbar (position, orientation, bar size, total size, etc) with the methods that read or modify that data (scroll up/down, change size, draw, etc).

    That’s the first Big Idea of OOP - that data should be grouped with the functions that modify it. If you don’t have that - as in C - you have to write functions that only work on a given data type but which are namespaced separately. You get functions like void set_scrollbar_pos(void* scrollbar, word pos) which become verbose in a large project. (I’m not saying this is the worst thing in the world, just a different style.)
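
    To make the contrast concrete, here’s a rough Kotlin sketch of that Scrollbar idea (the fields and methods are invented for illustration):

    ```kotlin
    enum class Orientation { VERTICAL, HORIZONTAL }

    // All of the scrollbar's data lives next to the code that modifies it.
    class Scrollbar(
        var position: Int = 0,
        val orientation: Orientation = Orientation.VERTICAL,
        var barSize: Int = 20,
        var totalSize: Int = 100,
    ) {
        fun scrollBy(delta: Int) {
            // Only the class needs to know the valid range for `position`.
            position = (position + delta).coerceIn(0, maxOf(0, totalSize - barSize))
        }

        fun resize(newTotal: Int) {
            totalSize = newTotal
            scrollBy(0) // re-clamp the position against the new bounds
        }
    }
    ```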

    The second Big Idea of OOP is message passing. Now that we have code and Data bundled together, it would be nice if Objects that share functions of the same essential type and intention could be swapped out interchangeably. So instead of directly invoking a function on an Object, we send a ‘message’ that says something like ‘if you know how, please draw yourself on screen, relative to X,Y’.

    Of course, since plain English is hella verbose, the actual message is going to be something like “draw, X,Y”, and the Object receiving the message then sorts out whether it has a method called “draw” that can use the provided X and Y. If so, it runs the code to do so. If not, you get an error.

    Messages like this mean that you can swap out compatible Classes for one another. E.g. you can ask any collection of widgets to .draw themselves with a single method and let the compiler/interpreter generate the machine code as needed. That reduces the amount of boilerplate for engineers by a lot! Otherwise, trying to work with any collection of heterogeneous Objects (like a List of every Widget contained in a Window) would need to have essentially the same code rewritten for every different Type needed - a combinatorial explosion of code!
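
    Roughly, in Kotlin terms (again, the interface and widgets are invented for illustration):

    ```kotlin
    // One "message" every widget understands, regardless of its concrete type.
    interface Widget {
        fun draw(x: Int, y: Int)
    }

    class Scrollbar : Widget {
        override fun draw(x: Int, y: Int) = println("scrollbar at ($x, $y)")
    }

    class Button(val label: String) : Widget {
        override fun draw(x: Int, y: Int) = println("button '$label' at ($x, $y)")
    }

    // The same loop works for any mix of widgets; dispatch picks the right draw().
    fun drawAll(widgets: List<Widget>) {
        widgets.forEach { it.draw(0, 0) }
    }

    fun main() {
        drawAll(listOf(Scrollbar(), Button("OK")))
    }
    ```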

    Tl;Dr -

    • Classes help organize code and simplify state management by combining data with the functions that manipulate that data.

    • Classes reduce the amount of boilerplate code needed by allowing methods with the same “shape” to be called interchangeably.

    Everything else about OOP is essentially built off these two ideas. I hope that helps.



  • How do y’all solve that, out of curiosity?

    I’m a hobbyist game dev and when I was playing with large map generation I ended up breaking the world into a hierarchy of map sections. Tiles in a chunk were locally mapped using floats within comfortable boundaries. But when addressing portions of the map, my global coordinates included the chunk coords as an extra pair.

    So an object’s location in the 2D world map might be ((122, 45), (12.522, 66.992)), where the first elements are the map chunk location and the last two are the precise “offset” coordinates within that chunk.

    It wasn’t the most elegant to work with, but I was still able to generate an essentially limitless map without floating point errors poking holes in my tiling.
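
    In code, the scheme looked something like this (a simplified Kotlin sketch, not my actual implementation; the chunk size is made up):

    ```kotlin
    import kotlin.math.floor

    // World position = integer chunk coordinate + small float offset inside it.
    const val CHUNK_SIZE = 128f // invented size, purely for illustration

    data class WorldPos(
        val chunkX: Int, val chunkY: Int,       // which chunk
        val offsetX: Float, val offsetY: Float, // precise position within it
    ) {
        // Roll offset overflow into the chunk coords so the floats stay small
        // and never lose precision, no matter how big the map gets.
        fun normalized(): WorldPos {
            val carryX = floor(offsetX / CHUNK_SIZE).toInt()
            val carryY = floor(offsetY / CHUNK_SIZE).toInt()
            return WorldPos(
                chunkX + carryX,
                chunkY + carryY,
                offsetX - carryX * CHUNK_SIZE,
                offsetY - carryY * CHUNK_SIZE,
            )
        }

        fun moveBy(dx: Float, dy: Float): WorldPos =
            copy(offsetX = offsetX + dx, offsetY = offsetY + dy).normalized()
    }

    fun main() {
        val pos = WorldPos(122, 45, 12.522f, 66.992f) // ((122, 45), (12.522, 66.992))
        println(pos.moveBy(200f, 0f)) // the overflow carries into chunkX
    }
    ```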

    I’ve always been curious how that gets done in real game dev, though. If you don’t mind sharing, I’d love to learn!