My "day job" since graduating a few years ago is software engineering/architecting. It's what I'll be doing for the foreseeable future.
I started taking LLM-based tools more seriously over the last year. I didn't begin automating my workflows or writing code with LLMs until late 2023, and even then it was copying and pasting between my editor and an open chat window.
Before then, starting in July of 2022, I was still mainly learning "the old way": seeking help in online forums, asking co-workers, and spending a day or two stuck on a problem. I became a better engineer over time by doing things this way, just as junior engineers had done for decades before me.
Today, 75% of my code is written with tools like Cursor and Claude. If I'm working on something with an established pattern in my codebase, Cursor can build new features following those patterns in a sixth of the time. If I have questions about best practices or trade-offs when building something less established, I consult an LLM alongside blogs and documentation to inform my decisions.
I'm at an interesting age where I've been able to learn and build things "both ways." I've found myself thinking recently about which one is better and whether maintaining this level of LLM involvement is worth it.
"Better" can, of course, mean lots of things depending on who you ask. My definition here means my problems are solved faster, and my learning rate hasn't decreased in the process. Things get done, and I don't feel like I cheated or lucked my way there.
First, the benefits of this switch.
- Writing code that follows established patterns you already understand is undoubtedly faster with LLMs. It's transformative.
- Discovery of new concepts that branch from those established patterns is also faster, and the learning rate doesn't take a hit. If you use and understand Django, or Fastify, or Docker, or Git, LLMs can show you better ways of doing things with those technologies that you may have been unfamiliar with: commands, functions, patterns, etc. Swapping those new concepts in for your old ones happens instantly.
 
There's one major drawback.
- New concepts that don't branch from established patterns don't make it into your brain. They remain in the chat window. There isn't enough in your mind already familiar with the concept for new knowledge, new "Lego blocks," to stack on top of. You may get unstuck, but if you encounter the same thing again in the future, you'll be retracing your steps.
 
This does not feel satisfying. The second part of my definition of "better" is actually worse.
And if speed isn't essential? Learning something from zero purely for yourself, like music production, stays the same. Having a superhuman reference point doesn't matter if you don't actually put in the work yourself.
A few friends of a similar age and I have repeated the same sentiment: "Oh, if ChatGPT had been around when I was in school, I'd have been screwed." It's easy to say LLMs are a huge productivity boon, but for learning entirely new things, your brain still needs that strong foundation. When the LLM training wheels come off, can you independently perform the same tasks, or do you fall?
Through writing this, my conclusion became relatively straightforward. If you know you're at a global maximum, working with tools you're comfortable with and want to get the best out of, LLMs are amazing.
If you can't afford to be pigeonholed, or you're starting fresh, you're better off learning the old way.