43 Comments

Bookmarking this for future reference. It’s a great article and should be a must-read for all developers.

---

This entire article is a thousand percent accurate, and it reinforces my more recent finding that developers are fine. Contrary to most people's belief, developers are probably STILL among the safest of any profession. You hit on most of the reasons why, but to take it even further: now that our abstraction level has been bumped up to natural language, developers are unbound by their domain and can move freely back and forth across an unlimited landscape of possibilities, and no other profession combines that freedom with the rigorous formality of an iterative process and the ability to assess the moving target of emerging tech, picking out the winners from the duds and building the scaffolding for the next wave. That is only learned through experience; no amount of textbooks or YouTube tutorials can make up for the hard time and the dozens of mistakes made along the way. The future is bright for developers.

---

This piece is excellent, and it matches a lot of my experience using various AI tools and handling office hours in a course that is all about teaching people how to do problem-solving with AI.

However, I’d say the “AI is like an eager junior coder” framing is somewhat misleading, because it’s like a junior coder who has memorized the internet and holds junior-level knowledge of more technical domains simultaneously than any senior coder could cover.

This makes AI valuable in ways which have no close analogy, in addition to the ways which do.

---

Agree 100%, good read.

I wrote an article with similar observations recently :)

https://dawidmakowski.com/en/2024/08/can-ai-code-sure-replace-us-nah/

---

Thank you for writing this article, Addy. As a total programming noob, I've been struggling to get anything "meaningful" after the initial round of prompting, and that's exactly why I got myself into learning Python from scratch (so I can at least read the code). Your article is very important for keeping me, and others, firmly grounded, and for reminding us all that "Rome wasn't built in a day".

---

When asking an LLM for help, be sure to get it to explain the steps and decisions it makes. As the article states, it is still a great learning tool. Even with tons of experience, I use an LLM to ramp up quickly if I'm digging into something unfamiliar.
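For instance, a prompt along these lines tends to surface the reasoning and not just the answer (the task below is invented purely for illustration):

```typescript
// Hypothetical prompt template: the task is made up for illustration.
// The point is to request the model's reasoning before and after the code.
const prompt = `
Write a TypeScript function that deduplicates user records by email.
1. Before the code, outline the approach you plan to take and why.
2. After the code, explain each decision you made and what you would test.
`;
```

Reading the outline before the code makes it much easier to spot when the model is heading somewhere you don't want it to go.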

---

This is so good! It highlights many of the ways I use AI, many of which I didn't even realize I was doing until you wrote them down. As an example, I create many different chats to isolate context, because if I don't, the model gets more and more confused about what I'm asking. So many good bits in here. I also prefer your version of the future of AI and software development to the one I cook up in my AI-doomer brain sometimes 😄. Thanks for this!

---

Congrats on the article! It summarizes the entire AI scene and how frustrating it can be for developers, managers, and executives!

---

Very well written; it fully resonates with me :)

What I can add here is that productivity and quality can be combined into a method, framework, or toolchain ... not necessarily requiring AI.

The idea is to produce, wherever possible, mathematical models throughout the software development life cycle (SDLC), and to generate code from those models.

I've applied this idea in the niche of React apps.

For example, in the first step of the React app SDLC, understanding the problem can produce models based on category theory and finite state machines. These models are then turned automatically into React component architecture plus state-management code.
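As a rough sketch of what that can look like (the state names and transition table below are my own illustration, not the commenter's actual toolchain):

```typescript
// Illustrative sketch: a finite state machine written as a plain data model,
// from which React state-management code can be derived mechanically.
// All names here are hypothetical.

type State = "idle" | "loading" | "success" | "failure";
type Event = "FETCH" | "RESOLVE" | "REJECT" | "RETRY";

// The model: a transition table, the part that can be reasoned about formally.
const transitions: Record<State, Partial<Record<Event, State>>> = {
  idle: { FETCH: "loading" },
  loading: { RESOLVE: "success", REJECT: "failure" },
  success: { FETCH: "loading" },
  failure: { RETRY: "loading" },
};

// The "generated" part: a reducer suitable for React's useReducer,
// produced entirely from the model above; unknown events leave the state unchanged.
function reducer(state: State, event: { type: Event }): State {
  return transitions[state][event.type] ?? state;
}
```

Wired into a component with `useReducer(reducer, "idle")`, every transition lives in the table, so the hand-written part shrinks.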

Of course there is still a need for manual coding, such as filling the gaps with functions, and that is where AI might help.

But the main idea stays the same: it is possible to generate likely-correct apps (the highest quality achievable without a sky-high budget) with a new kind of formal method (diagrams as code) that requires only a small amount of coding.

---

I almost never let an LLM write code for me, apart from simple repetitive auto-completion. Instead, I use it as a peer programmer and ask it things like "what are common APIs to do X" or "how do I do X". Then I write the code myself with the help of the information it provides.

In my experience, this is faster than letting it write code and then refactoring it, and it avoids the problem where I accidentally overlook an issue in its code that I would not have created had I written the code myself. It also makes my code more consistent, since it's all written in my style, by my hand.

---

This seems like a reasonable way to use AI programming tools. Starting from zero and trying to get an AI to develop the bulk of my code has always led to bizarrely complex code that doesn’t even follow the documentation for the relevant library or API.

---

If ordinary AI writes code that is 70% there but 30% a shitshow, why exactly are we expecting AI agents to do better? So that they can get stuck in a one-step-forward, two-steps-back loop all on their own? I don't get it.

---

Yes, people who say AI is incredible and is going to replace people in X years are crazy, and so are people who say AI is completely useless and the industry is dying. The truth, as always, is in between.

For example, I don’t see myself performing a Google search ever again in my life now that I have GPT and Gemini.

---

As a self-taught programmer who writes and learns code to automate my business processes, I love how AI enables me to focus less on syntax issues and more on the overall design and logic of my software. Previously, I would often get stuck on a few lines of code and spend hours experimenting to figure out what would happen if I tweaked or changed something. Alternatively, I’d take an online course, hoping it would address my specific problem.

Now, with AI, I can ask for detailed explanations of every line of code and get unstuck much faster. I still take online courses, but they’re now more focused on design and overall architecture. I also enjoy discussing potential design patterns with AI, exploring their drawbacks and benefits, and refining my codebase accordingly.

So far, my favorite tool is Cursor.

---

AI is often good at writing code, but it's rarely good at debugging. It's not even really set up to debug: it doesn't have the right inputs to observe what's going wrong, it doesn't have the authority to fix bugs by trial and error, and it can't answer questions well about a medium-sized codebase it wasn't trained on.

Hopefully all of these things improve in time. Personally, I would rather have an AI that was a great debugger and poor writer, than an AI that was a great code writer and poor debugger. For now I'll take what I can get ;-)

---

Lost in all of this is the art of bootstrapping. That rough draft that AI wrote for you? That was where you'd have learnt what it takes to wade through verbiage and decide whether something is relevant to you. You would have learnt how to string together pieces of seemingly irrelevant code to make them work for the task at hand. You would have understood that it takes patience, understanding, and mental strength, especially when the answer is not served on a platter. At the very least, you would have understood the reasoning behind the design.

What's the big deal, you say? After all, this is only done a few times. So why fret about it? All is hunky-dory as long as we promise not to do it all the time, right?

To understand why some things are done differently in code that used to work before, one has to understand what is going on under the covers. And this understanding has to be active; active learning will let you see what is coming down the pike.

Did we talk about building the confidence to write code from the ground up instead of just copying a working piece?

Once you use an LLM to write code, you have smoked your first joint. You are now on the way to becoming an addict, even if you are cognizant of it.

---

I think another potential use case for AI is scaling the software development team. I mean scaling in terms of completing the simple tasks no one has time for: writing more tests, more documentation, and more UI improvements such as accessibility. So again AI would be assisting the team, not as a junior-level developer but as a mechanistic team member.
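For example, the kind of mechanical contribution that is easy to delegate might look like the test table below (a hypothetical Jest sketch; `formatPrice` and its expected outputs are invented for illustration):

```typescript
import { describe, expect, it } from "@jest/globals";
import { formatPrice } from "./format"; // hypothetical helper under test

// The exhaustive, low-glamour test table an AI "team member" could draft
// so that the humans on the team don't have to.
describe("formatPrice", () => {
  it.each([
    [0, "$0.00"],
    [1999, "$19.99"],
    [-500, "-$5.00"],
  ])("formats %d cents as %s", (cents, expected) => {
    expect(formatPrice(cents)).toBe(expected);
  });
});
```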

---

It's just a matter of time before AI is capable enough to apply real-world experience to its code. I believe we are not far off.

---

Excellent read; this hit the nail on the head. As a senior software engineer, I know what to ask the LLM, and I know when it's wrong or leading me in the wrong direction. I find LLMs useful for the things you know but have forgotten the syntax for, when searching Google would take too long, or for mundane little tasks within the context of a big project, such as "Write me a shell script with parameters to...". My experience reflects this article: LLMs are just a tool. You still need actual skill and expertise to make something truly production-worthy. SWEs are pretty safe for a while.

If you think about it, until we have AGI, LLMs can only be as good as the data that went into them. Well, that data was created by humans, and it's full of mistakes, bad practices, poor performance, and so on. LLMs need to be constantly fed new data; if we don't get better, LLMs don't get better.
