• TimeSquirrel@kbin.melroy.org · +156/-9 · 1 month ago

    Try programming for a day without syntax highlighting or auto-completion, and experience how pathetic you feel without them. If you’re like me, you’ll discover that those “assistants” have sapped much of your knowledge by eliminating the need to memorize even embarrassingly simple tasks.

    That’s…how the world works. We move on. We aren’t programming computers by flipping toggle switches or moving patch cables around anymore either.

    ‘Try directly hand-coding bits into regions of memory without a compiler/linker and experience how pathetic you feel without it.’

      • umbrella@lemmy.ml · +2 · 1 month ago

        Code in some mothballs if it’s gonna be unmaintained for a while. That’s like programming 101.

    • Sinuousity@lemmy.world · +42/-6 · 1 month ago

      What a dumb take (in your quote). Autocompletion showing me all the members of an object is nothing like ChatGPT hallucinating members that don’t exist. Autocomplete will show you members you haven’t seen before, or that aren’t even documented.

      Not to mention they said syntax highlighting is a bad thing… Why use computers at all? Go back to the golden days of punchcards

      • Daedskin@lemm.ee · +18/-3 · 1 month ago

        From later in the article (emphasis author’s)

        Earlier in this article I intimated that many of us are already dependent on our fancy development environments—syntax highlighting, auto-completion, code analysis, automatic refactoring. You might be wondering how AI differs from those. The answer is pretty easy: The former are tools with the ultimate goal of helping you to be more efficient and write better code; the latter is a tool with the ultimate goal of completely replacing you.

        • constantturtleaction@lemmy.world · +2 · 1 month ago

          That might be the goal but it is a long way away. The current models have no chance of replacing a skilled engineer. We will need completely new types of models to start getting close to that.

    • DreamButt@lemmy.world · +11 · 1 month ago

      Without syntax highlighting?? Sorry I guess my pretty colors are a weakness. Some people just want to be curmudgeons

    • uis@lemm.ee · +6 · 1 month ago

      ‘Try directly hand-coding bits into regions of memory without a compiler/linker and experience how pathetic you feel without it.’

      There was an article about programming an ATmega by pulling electrodes in and out of salt water.

  • hendrik@palaver.p3x.de · +67/-1 · 1 month ago

    Our computer science professor in some programming course at university told us we were not supposed to take advice from the internet or answers from Stack Overflow for half a year… Until we learned the ropes and could assess for ourselves what’s right and what is wrong. (And I believe that was some C/C++ course where you get lots of opportunities to do silly things that might somehow work, but for all of the wrong reasons.)

    I think he was right. There is lots of misinformation out there that doesn’t follow proper design patterns. And with copy-pasting stuff, you don’t necessarily learn anything. Whereas learning with some method is efficient and works.

    And I’m pretty sure I’m not super intelligent, but all of that isn’t really hard. I mean if someone codes regularly, they might as well learn how to do it properly. It takes a bit of time initially… But you get that time back later on. Though… I’d let AI write some boilerplate code. Or design a website if I’m not interested at all in how the HTML and CSS work… I think that’s alright to do.

      • hendrik@palaver.p3x.de · +21 · 1 month ago

        I mean it also contains great stuff. Niche workarounds, ways to do something more efficiently than some standard library function does.

        You just need a means of telling apart the good and the bad. Because there’s also people smashing their forehead on the keyboard until it happens to be something that compiles. And people repeating urban legends and outdated info. You somehow need background knowledge to tell which is which. AI didn’t invent phrasing nonsense with full conviction. It is very good at doing exactly that, but we humans have also been doing that since the beginning of time.

      • peopleproblems@lemmy.world · +8 · 1 month ago

        Debugging and being able to interpret documentation when it exists.

        But good lord, the amount of programmers I work with that never use an IDE debugger is unreal. I get that you don’t have to, but Jesus Christ, if you’re not getting an expected result, it’s way fucking faster to step through the code and see where the data changes than to slap logging into every line and attempt to read the output.

      • wewbull@feddit.uk · +7 · 1 month ago

        Debugging only teaches logic, not structure. No amount of cut, paste, debug will teach you the factory pattern.
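
        (For anyone who hasn’t met it, a rough Python sketch of what the factory pattern looks like, with made-up Circle/Square names purely for illustration:)

          # Factory pattern in miniature: callers ask the factory for an
          # object by key instead of constructing concrete classes themselves.
          class Circle:
              def area(self, r: float) -> float:
                  return 3.14159 * r * r

          class Square:
              def area(self, side: float) -> float:
                  return side * side

          def shape_factory(kind: str):
              """Return a new shape instance for the requested kind."""
              shapes = {"circle": Circle, "square": Square}
              if kind not in shapes:
                  raise ValueError(f"unknown shape: {kind}")
              return shapes[kind]()

          print(shape_factory("circle").area(2.0))  # 12.56636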

  • r00ty@kbin.life · +47 · 1 month ago

    I’ve never had AI create working code anyway.

    But it will generally point me in the right direction. It’s useful for:

    1. Helping get your train of thought back in the right direction
    2. Automating what would be a lot of boilerplate/repetitive coding. Just beware you will still need to check it over.

    You need to be skilled to spot the mistakes it will definitely make.

  • Nomecks@lemmy.ca · +45/-13 · 1 month ago

    Good. There are a lot of non-programmers who are now bad programmers, using AI to make their ideas real. It’s made programming way more accessible to people who would never have learned before.

    • Sabata@ani.social · +18 · 1 month ago

      I got back into programming because I can ask an AI my stupid questions I’m too dumb to google correctly. I hadn’t otherwise written code since college, and I’ve kinda revived a long-dead hobby. It removes a barrier to entry that I’d otherwise given up on. Been working on a project to teach myself Python the last few months, with AI replacing the role of Google for the most part.

      Copy-pasting AI code still blows up in your face just as much as code you stole from Stack Overflow…

      • LucidNightmare@lemm.ee · +12/-1 · 1 month ago

        I wouldn’t say you’re dumb when it comes to Google. Their search is just a broken mess of dog shit now.

      • chakan2@lemmy.world · +7/-2 · 1 month ago

        No… with Stack Overflow I can usually figure out from the context of the questions what went wrong. AI will very confidently and eloquently give you a very subtle bullshit answer.

      • uis@lemm.ee · +5 · 1 month ago

        Copy-pasting Ai code still blows up in your face just as much as code you stole from stack overflow…

        Show me difference:

        They are the same.

      • Croquette@sh.itjust.works · +4/-2 · 1 month ago

        The issue isn’t you doing your hobby projects however you want, it’s people being paid to produce LLM-generated code.

        And the biggest issue is managers/c-suites thinking that LLMs can replace senior devs.

        And the biggest biggest issue is that the LLMs in their current mainstream form are terribly bad for the environment.

        • Nomecks@lemmy.ca · +4 · 1 month ago

          Why wouldn’t you use AI as a shortcut if you can? Can you actually replace senior devs with AI? I’m sure that depends on the company and what they consider a “senior dev”. Maybe there’s some not-so-senior senior devs that should be worried.

    • raker@lemmy.world · +5 · 1 month ago

      Can confirm. Using AI for coding for a couple of months now. There sure is a lot of copy and paste, trial and error, but without the assistance I would not have been able to enhance and customize code like that. Now I am some steps further and was even able to question the AI output, correct it, and make it better. I am getting there: learning, optimizing, creating new stuff. It is fun. And when I compile the code, it runs. If not, I debug. Unthinkable for me a year ago.

    • catloaf@lemm.ee · +12 · 1 month ago

      Unironically, yes. It’ll generally generate working code, but not necessarily the most correct or efficient. And it may not do exactly what you want.

  • normalexit@lemmy.world · +24/-3 · 1 month ago

    I’ve been writing code professionally for nearly two decades, and I love having copilot available in my IDE. When there is some boilerplate or a SQL query I just don’t want to write, it’ll oftentimes get me started with something reasonable that is wrong in a couple of subtle ways. I then fix it, laugh at how wrong it was, or use part of the proposed answer in my project.

    If you’re a non-coder, sure, it is pure danger, but if you know what you’re doing it can give you a little boost. Only time will tell if it makes me rusty on some basics, but it is another tool in the toolbox now.

    • smiletolerantly@awful.systems · +3 · 1 month ago

      For me personally, there are only two applications of LLMs in programming:

      • doing tasks I kinda know how to do, but don’t want to properly learn (recent example: generate pgf plots from csv data in matplotlib. 90% boilerplate, I last had to do it 3 years ago and vaguely remember some pitfalls so can steer the LLM in that direction. Will probably never again have to do this, so not worth the extra couple of hours to properly learn; see the sketch below)
      • things I would ordinarily write a script for, but aren’t worth automating because they won’t come up in the future again (example: convert this Lua table to a Nix set)

      Essentially, one-off things that you know how to check for correctness.
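
      For reference, a minimal sketch of that first kind of task, assuming a hypothetical data.csv with “x” and “y” columns and a working LaTeX install for matplotlib’s pgf backend (illustration only, not the exact code meant above):

        # Read a two-column CSV and save the plot as a .pgf file
        # that can be \input into a LaTeX document.
        import csv

        import matplotlib
        matplotlib.use("pgf")  # the pgf backend needs a LaTeX installation
        import matplotlib.pyplot as plt

        xs, ys = [], []
        with open("data.csv", newline="") as f:
            for row in csv.DictReader(f):
                xs.append(float(row["x"]))
                ys.append(float(row["y"]))

        plt.plot(xs, ys)
        plt.xlabel("x")
        plt.ylabel("y")
        plt.savefig("plot.pgf")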

    • RedditWanderer@lemmy.world · +2 · 30 days ago

      Same here (15 years). I work in all sorts of frameworks and languages. I normally would have just googled a given question to see the code I need, pasted it in with everything that’s wrong, and fixed it to my liking. I know what I’m doing; I was just missing the specific words I haven’t used in a couple of years, but I still understand them. Copilot just saves me from opening Google, clicking through some bad SEO, skipping the bad answers, and doing that a couple more times to bring in everything I need. It’s a Google formatter.

      It’s also exactly like searching Google. If you ask “is this cancer” you’ll find cases where it’s cancer; if you ask “is this not cancer” you’ll find cases where it’s not. You can’t trust it in that way, but you can still quickly parse the internet. I make juniors explain their code, so even if they paste it in, they’re kinda forced to research it more to make sure they get it; it’s on the reviewers now to train the LLM kiddos.

  • mindaika@lemmy.dbzer0.com · +30/-10 · 1 month ago

    I don’t love AI, but programming is engineering. The goal is to solve a problem, not to be the best at solving a problem.

    Also I can write shitty code without help anyway

    • kiwifoxtrot@lemmy.world · +16/-3 · 1 month ago

      The issue with engineering is that if you don’t solve it efficiently and correctly enough, it’ll blow up later.

      • mindaika@lemmy.dbzer0.com · +12/-5 · 1 month ago

        Sounds like a problem for later

        Flippancy aside: the fundamental rule in all engineering is solving the problem you have, not the problem you might have later

        • Croquette@sh.itjust.works · +7/-1 · 1 month ago

          It’s rarely the case. You rarely work in a vacuum where your work only affects what you do at the moment. There is always a downstream or upstream dependency/requirement that needs to be met that you have to take into account in your development.

          You have to avoid the problem that might come later that you are aware of. If it’s not possible, you have to mitigate the impact of the future problems.

          It’s not possible to know of all the problems that might/will happen, but with a little work before a project, a lot of issues can be avoided/mitigated.

          I wouldn’t want civil engineers thinking like that, because our infrastructure would be a lot worse than it is today.

          • mindaika@lemmy.dbzer0.com · +7/-2 · 1 month ago

            “Not blowing up later” would be part of the problem being solved

            Engineering for future requirements almost always turns out to be a net loss. You don’t build a distillation column to process 8000T of benzene if you only need to process 40T

            • reksas@sopuli.xyz · +4/-1 · 1 month ago

              but you could design it to be easily scalable instead of having to build another even more expensive thing when you suddenly need to process 41T

  • Random_Character_A@lemmy.world · +23/-4 · 1 month ago

    Not a coder. I can understand most Python code and PowerShell scripts that others have done, but I don’t remember syntax if I need to make something from scratch. Doing that involves a ton of googling and reading awful documentation that still leaves some things out. I do this maybe twice a year.

    For someone like me, AI coding is a godsend.

    • peopleproblems@lemmy.world · +13/-1 · 1 month ago

      doing that involves a ton of googling and reading awful documentation

      Yes. That is programming.

      To most of us, the syntax is the easy part to remember, and our IDEs take care of most of it. Being able to bang our heads through the documentation and experiment with libraries is pretty much what our jobs are.

      AI coding is basically a shortcut to some of the stuff we have to repeat with slight changes in our software. It’s also useful for setting up more complex code that we know we’ll have to tweak.

      Expecting it to produce something with the desired results is a recipe for disaster. It’s basically a cheaper outsourcing method that can’t actually compile and run its code before giving it to you.

  • tias@discuss.tchncs.de · +21/-4 · 1 month ago

    I’ll confess I only skimmed the article, but it seems like just a bunch of unsubstantiated opinions and I don’t buy it.

    Using AI generated code is like pair programming with a junior programmer. You tell the junior what to do and then you correct their mistakes by telling them how to do better. In my experience, explaining things to someone else makes you better at your craft. Typically this cycle includes me changing the code manually at the end, and then possibly feeding it back to ChatGPT for another cycle of changes.

    Apart from letting me realize and test my ideas quicker, this allows me to raise the abstraction level of my thinking. I can spend more time on architecture and on seeing the bigger picture, and less time being blinded by the nitty gritty details. I would say it makes me both a faster and a better programmer.

    • ourob@discuss.tchncs.de · +10/-1 · 1 month ago

      I’ve seen the comparison to pair programming with a junior programmer before, and it’s wild to me that such a comparison would be a point in favor of using AI for improving productivity.

      I have never experienced a productivity boost by pairing with a junior. Which isn’t to say it’s not worth doing, but the productivity gains go entirely to the junior. The benefits I receive are mainly improving my communication and mentoring skills in the short term, and improving the team’s productivity in the long term by boosting the junior’s knowledge.

      And it’s not like the AI works on the mundane stuff in parallel while I work on the more interesting, higher level stuff. I have to hold its hand through the process.

      I feel like the efficiency gains of AI programming are almost entirely in improving your speed at wrestling a chatbot into producing something useful. Which may not be entirely useless going forward - knowing how to search well is an important skill, and this may become something similar - but it just doesn’t seem worth the hassle to me.

      • gramie@lemmy.ca · +4/-1 · 1 month ago

        I have done pair programming with a junior partner, and I found it extremely beneficial. Taking the time to talk out my ideas and logic invariably helped make them clearer in my mind and realize pitfalls much sooner than I otherwise would have.

        I had to explain things clearly and logically, and he was bright enough to ask good questions and point out typos as I was coding.

    • Sage1918@lemmy.world · +2/-2 · 1 month ago

      Bugs never occur in the high-level/big-picture land; they usually come up in the low-level/implementation land. Should you entrust that to AI?

      • tias@discuss.tchncs.de · +2 · 1 month ago

        Only because bugs are defined as errors in implementation details. You can still have errors in your design (sometimes referred to as design bugs).

        It’s not about “entrusting” to AI any more than I would be entrusting important code to a junior developer to just go off and push to production on his own. We still have code review, pair programming etc. As I said, I read the output code, point out issues with it, and in the end make manual adjustments to fit what I want. It’s just a way of building up the bulk of the code more quickly and then you refine it.

  • 9point6@lemmy.world · +15/-2 · 1 month ago

    Agree on the application side, but when it comes to the test suite, I’m definitely gonna consider letting an AI get that file started and then I’ll run through, make sure the assertions are all what I would expect and refactor anything that needs it.

    I’ve written countless tests in my career and I’m still gonna write countless more, but I’m glad I can at least spend less time on laborious repetition now and more time on the part of the job I actually enjoy which is actually solving problems.

    • MeatsOfRage@lemmy.world · +3/-5 · 1 month ago

      Things like unit tests I just have AI do entirely now. Since running the tests tells you your coverage, you can verify whether it got everything or not.

      • wewbull@feddit.uk · +5 · 1 month ago

        Here’s something that might blow your mind. Coverage is not the point of tests.

        If your passing test gets 100% coverage, you can still have a bug. You might have a bunch of conditions you’re not handling and a shit test that doesn’t notice.

        Write tests first to completely define what you want the code to do, and then write the code to pass the tests.
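
        A tiny Python sketch of that point, using a hypothetical clamp() function (illustration only):

          def clamp(value, low, high):
              if value < low:
                  return low
              return value  # bug: never clamps to the upper bound

          def test_clamp_happy_path():
              # Executes every line of clamp(), so coverage reports 100%...
              assert clamp(5, 0, 10) == 5
              assert clamp(-3, 0, 10) == 0

          def test_clamp_upper_bound():
              # ...but a test written first, from the spec, catches the bug.
              assert clamp(42, 0, 10) == 10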

  • uis@lemm.ee · +15/-2 · 1 month ago

    I was saying AI for coding is bad until I saw two pictures:

    • kameecoding@lemmy.world · +2 · 1 month ago

      I use it to generate repetitive patterns where it’s easy to guess what’s next but a PITA to write, e.g. asserts in unit tests.
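
      Something like this kind of block, for instance (hypothetical parse_user helper, Python, just for illustration):

        import json
        from types import SimpleNamespace

        def parse_user(raw: str) -> SimpleNamespace:
            return SimpleNamespace(**json.loads(raw))

        def test_parse_user():
            user = parse_user('{"id": 7, "name": "Ada", "email": "ada@example.com", "active": true}')
            # Each field gets the same one-line assert: tedious to type, easy to predict.
            assert user.id == 7
            assert user.name == "Ada"
            assert user.email == "ada@example.com"
            assert user.active is True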

  • TORFdot0@lemmy.world · +13/-4 · 1 month ago

    I don’t have an encyclopedic knowledge of every random library or built-in function of every language on Earth, so what’s the difference between googling for an example on Stack Overflow and asking an LLM?

    If you are asking ChatGPT for every single piece of code, it will be terrible, because it just hallucinates libraries or misunderstands the prompt. But saying any kind of use makes you a bad programmer seems more like FUD than actual concern.