• dastanktal@lemmygrad.ml

    If you’ve ever seen what AI tries to do when you ask it to code for you, you would understand why it’s declining.

    We just hired this new guy who uses AI for everything, and it often leads him astray and into arguments that he is definitely not prepared for.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP

      LLMs are a tool just like anything else, and they absolutely can be useful for coding tasks once you spend the time to actually learn how to use them effectively. It’s certainly not a substitute for actually knowing what you’re doing, but these tools have their applications. My experience is that as long as you give them small and focused tasks, they can accomplish them fairly consistently. I also find they’re really handy for digging through code. If I’m looking at an unfamiliar code base, it’s much easier to have the LLM find the relevant parts I need to change than to dig through the code myself. They’re also pretty good at tracking down random bugs. Just the other day, I had a frontend bug where a variable was being destructured from a map twice, and it’s the kind of thing that’s easy to miss. The LLM saved me a whole bunch of time debugging by zeroing in on it.
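
      For illustration, here’s a minimal TypeScript sketch of that bug class (the variable names are hypothetical, not the actual code):

      ```typescript
      // Hypothetical sketch: the same key is destructured from the state
      // object twice, and the inner binding shadows the outer one, so part
      // of the code keeps reading a stale value.
      const state = { count: 0, label: "items" };

      let { count } = state; // first destructuring

      function refresh(): void {
        state.count += 1;
        const { count } = state; // second destructuring shadows the outer one
        console.log(`inner sees ${count}`); // up to date
      }

      refresh();
      console.log(`outer still sees ${count}`); // stale: still 0
      ```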

      • dastanktal@lemmygrad.ml

        Those are excellent use cases for AI, but it is also not a magic bullet. It cannot do everything for you, and it can easily lead you astray, especially if you’re not willing to fact-check it. It’s well known that LLMs hallucinate, or straight up lie to you about what they know. So for niche work, which is what I do and what we hired this guy to do, it’s often not effective. For every time it hands you a silver bullet, it’s just as often confidently wrong.

        I have seen this dude use AI to say things that are absolutely not true, like claiming that setting a very high UID can bloat a Docker image to an absurd size nearing 500 gigabytes.

        He also tried to use it to lecture me on how the auditors don’t audit our company correctly, how we’re actually doing things completely wrong, and how he’s the guy to fix it all; it’ll just take him a little while to whip everybody into shape.

        LLM tools are excellent when treated with respect and their limitations are understood, but unfortunately, far too many people believe they’re a magic talking box that always knows everything and always has the right solution. 😮‍💨

        I mean, this joker is so ridiculous that he can’t even figure out how to use the AWS CLI correctly or how to set up GitHub deploy keys for a repo. We asked him if he was comfortable working with Puppet, or at least capable of figuring it out, and he looked like we’d asked him to touch a hot stove. Did I mention this joker has 15 years of experience doing stuff like this?

        Looking at his code, it reeks of AI, full of anti-patterns I normally only see in purely LLM-generated code.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP

          I think we’re in complete agreement here. These things are not magic; they’re tools that have limitations. I also think they’re best used by devs who already have a lot of experience. If you couldn’t write the code yourself without the tool, you have no business using it. It can’t do the thinking for you, and the fact that it sounds convincing shouldn’t be mistaken for it having any sort of intelligence of its own.

          I’ve seen plenty of people do terrible things with LLMs as well. Honestly though, it’s not that different from what I’ve seen people do manually. For example, more times than I care to count, I’ve seen inexperienced devs just start adding kludges to their code instead of stepping back and rethinking the design to make the underlying problem go away. LLMs just act as an accelerant here, allowing people to make a bigger mess faster.

          The fact that somebody with 15 years of experience would be so bad at coding is the real story here, though. It reminds me of the time I interviewed a dev with a supposed 5 years of experience, and they couldn’t figure out how to reverse a string because they didn’t know how loops worked. That kind of stuff really makes you wonder about the industry as a whole.
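
          For the record, the loop version is about as basic as it gets; a quick TypeScript sketch:

          ```typescript
          // Reversing a string with a plain loop, the warm-up question in question.
          function reverse(input: string): string {
            let result = "";
            for (let i = input.length - 1; i >= 0; i--) {
              result += input[i];
            }
            return result;
          }

          console.log(reverse("hello")); // "olleh"
          ```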

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP

          I mean, there’s no fundamental difference between code generated by an LLM and code written by hand. At the end of the day, there are only so many ways you can write a function to do something generic like make a button or add a service endpoint. Most code is pretty generic and isn’t doing anything unique.

  • erock@lemmy.ml

    This is absolutely not true in my org. LLM use is ramping up, and every SWE is using it in some capacity.

  • neon_nova@lemmy.dbzer0.com

    I’d guess that every manager was pushing to implement AI as much as they could, and now that people have had time to figure out how to use it most effectively, they’re using it less.