• lukstru@lemmy.world
    23 days ago

    I recently held a science slam about this topic! It’s a mix of the first computer scientists being mathematicians, who love their abbreviations, and limited screen size, memory and file size. It’s a trend in computing that was well justified in the past, but it has been making it harder for people to work together. And the need for abbreviations has completely disappeared in the age of autocompletion and language servers.
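    To make that concrete, here’s a small invented JavaScript sketch (the names and data shape are mine, just for illustration) of the abbreviated style those old constraints encouraged, next to the same logic spelled out in full words:

    ```javascript
    // Abbreviated style, as tiny screens and files once encouraged:
    function calcTtl(ps, txRt) {
      let t = 0;
      for (const p of ps) t += p.prc * p.qty;
      return t * (1 + txRt);
    }

    // The same logic with full words, cheap to type now that editors autocomplete:
    function calculateTotal(products, taxRate) {
      let total = 0;
      for (const product of products) total += product.price * product.quantity;
      return total * (1 + taxRate);
    }
    ```

    Both compile to the same behavior; only the second one still explains itself when you come back to it later.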

    • papabobolious@feddit.nu
      22 days ago

      It’s been really holding me back in learning to code. I felt pretty comfortable at first learning JavaScript, but as I got further the code became increasingly hard to look back at and understand, to the point where I had to spend a lot of time deciphering my own code.

      Does it truly matter, after the code has been compiled, whether it uses full words or not?

      • lukstru@lemmy.world
        22 days ago

        It matters as soon as a requirement change comes in and you have to change something. Writing a dirty-ass, incomprehensible but working piece of code is OK, as long as no one ever touches it again.

        But as soon as code has to be reworked, worked on together by multiple people, or you just want to understand what you did 2 weeks earlier, code readability becomes important.

        I like Uncle Bob’s Clean Code (taken with a grain of salt) for a general idea of what an approach to making code readable could look like. However, it is controversial and, if overdone, can achieve the opposite. I like it as a starting point though.
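        As a rough illustration of that starting point (the function names and data shape here are invented for the example, not taken from the book), here’s one dense JavaScript one-liner next to the same logic split into small, intention-revealing pieces:

        ```javascript
        // Terse version: works, but the intent lives only in the reader's head.
        function ems(us) {
          return us.filter(u => u.active && !u.deleted).map(u => u.email);
        }

        // Split into small named pieces, roughly in the Clean Code spirit:
        function isActiveUser(user) {
          return user.active && !user.deleted;
        }

        function activeUserEmails(users) {
          return users.filter(isActiveUser).map(user => user.email);
        }
        ```

        Overdone, this kind of decomposition buries trivial logic under layers of tiny functions, which is exactly the “grain of salt” part.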