• brie@programming.dev
    10 hours ago

    Large context window LLMs can do quite a bit more than fill in gaps and complete lines. They can edit multiple files.

    Yet they’re unreliable, since they hallucinate all the time. Debugging LLM-generated code is a new skill, and it’s up to you whether to learn it or not. I see quite an even split among devs. I think it’s worth it, though it once took me two hours to find a very obscure bug in LLM-generated code.

    • NigelFrobisher@aussie.zone
      58 minutes ago

      I have one of those at work now, but my experience with it is still quite limited. With Copilot it was quite useful for knocking up quick boutique solutions to particular problems (stitch together a load of PDFs sorted on a name heading, along the lines of the sketch below), with the proviso that you might end up having to fix syntax and untangle code that mixes incompatible dependency versions. I couldn’t trust it with big refactors of existing systems.
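      For illustration only, a minimal sketch of the sort of throwaway script meant here, assuming pypdf and treating the first non-empty text line of each file’s first page as the “name heading” to sort on. That interpretation, the directory layout, and the function names are my assumptions, not the original commenter’s code:

      ```python
      from pathlib import Path
      from pypdf import PdfReader, PdfWriter

      def first_heading(pdf_path: Path) -> str:
          # Assumed heuristic: use the first non-empty text line on page 1
          # as the "name heading"; fall back to the filename if none found.
          text = PdfReader(pdf_path).pages[0].extract_text() or ""
          for line in text.splitlines():
              if line.strip():
                  return line.strip().lower()
          return pdf_path.name.lower()

      def stitch(src_dir: str, out_file: str) -> None:
          writer = PdfWriter()
          # Sort the PDFs by their heading, then append every page of each one.
          for pdf in sorted(Path(src_dir).glob("*.pdf"), key=first_heading):
              writer.append(pdf)
          with open(out_file, "wb") as fh:
              writer.write(fh)

      if __name__ == "__main__":
          stitch("./reports", "combined.pdf")  # hypothetical paths
      ```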

    • cley_faye@lemmy.world
      1 hour ago

      If you consider debugging broken LLM-generated code to be a skill… sure, go for it. But since generated code can lean on tons of unknown side effects and other (to a human) seemingly random tricks to achieve its goal, I’d rather take the other approach: let a human spend half an hour writing the code an LLM could generate in seconds, and skip having to learn how to parse random mumbo jumbo from a machine, while still getting a working result.

      Writing code is far from the longest part of the job, and yet you’ve blithely decided that making the tedious part even more tedious is a great idea, just to shorten the part that was already short…