Over the past few years, the evolution of AI-driven tools like GitHub’s Copilot and other large language models (LLMs) has promised to revolutionise programming. By leveraging deep learning, these tools can generate code, suggest solutions, and even troubleshoot issues in real time, saving developers hours of work. While these tools offer obvious productivity benefits, there’s a growing concern that they may also have unintended consequences for the quality of code and the skills of the programmers who write it.

  • dinckel@lemmy.world · +25/-5 · 7 days ago

    Anything that allows people to blindly and effortlessly get results inherently makes them more stupid. Your brain is like any muscle: you need to use it repeatedly for it to work well.

    • Scratch@sh.itjust.works · +23/-12 · 7 days ago

      I’ll bet people said the same thing when IntelliSense started suggesting line completions.

      And when errors were highlighted in the code rather than in console output.

      And when high-level languages started appearing.

      • u_tamtam@programming.dev · +10/-1 · 7 days ago

        I’ll bet people said the same thing when IntelliSense started suggesting line completions.

        I’m sure many did, but I’m also pretty sure it’s easy to draw a line between code assistance and LLM-infused code generation.

      • dinckel@lemmy.world · +18 · 7 days ago

        This really isn’t a good comparison at all. One gives you a list of choices you can make, and the other gives you a blind answer.

        If seeing what argument types the function takes makes me a worse engineer, so be it, I guess.

      • MajorHavoc@programming.dev · +14/-1 · 7 days ago

        I’ll bet people said the same thing when IntelliSense started suggesting line completions.

        They did.

        And when errors were highlighted in the code rather than console output.

        Yep.

        And when high-level languages started appearing.

        And yes.

        That said, if you believed my mentors, we were barrelling towards a 2025 in which nothing running on software ever really worked reliably.

        So they may have been grumpy, but they were also right on that point.

      • JackGreenEarth@lemm.ee · +3/-1 · 7 days ago

        And they may have been right. But the goal is usually to get working code, not to prove you’re a better programmer, and useful tools can help you reach that goal.