8 Comments
Lorenzo De Leon

Great post. All devs using AI should read it.

Arvind Patil

All valid thoughts. The human ability to think critically about the system context is vital, and keeping a human in the loop is a vital step that needs to be practiced in AI-assisted and AI-first development models. As more and more developers engage with these tools, it becomes absolutely necessary to design effective guardrails.

Will G.

Love this!

Torre Taylor

Spot on!

Producing quality code and achieving engineering excellence can only be accomplished through human expertise. Simply because we have tools capable of generating large amounts of code quickly doesn’t bypass the need for experts. Continuing to emphasize good engineering fundamentals and ownership is as important as ever.

Fabien Ninoles

It's not a draft, it's adversarial code. The AI doesn't know what it's doing, and so it can be pushed to insert compromised code. Unlike code from Stack Overflow, that code won't have been reviewed by anyone (since no one knows exactly how it was generated).

So AI-generated code must be reviewed like any code coming from an unknown, and therefore possibly adversarial, stranger.

Rakia Ben Sassi

Thank you for the breakdown!

Hedwin

What was given to the AI as input to generate the code?

JP

The draft framing is spot on. The issue I'm seeing is that review processes aren't adjusting to the new volume. When AI generates code at 10x human speed but reviews still happen at human speed, you get a pileup. We've been experimenting with operational changes: labelling AI-assisted PRs so reviewers know to shift mental gears, tiering reviews by risk level, and requiring comprehension proof in PR descriptions. Wrote about it here: https://reading.sh/your-teams-code-review-process-wasn-t-built-for-ai-27e10022dd33?sk=4baec8cd10d50b40d8d447297d0ca973
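The risk-tiering idea above could be sketched as a simple routing rule. This is a minimal illustration, not the process from the linked post: the tier names, path prefixes, and `ai_assisted` flag are all hypothetical.

```python
# Minimal sketch of risk-tiered review routing for AI-assisted PRs.
# Tier names and path rules are hypothetical illustrations only.

def review_tier(changed_paths, ai_assisted):
    """Assign a review tier to a PR based on what it touches."""
    high_risk = ("auth/", "payments/", "migrations/")
    # Anything touching sensitive areas gets a line-by-line human review.
    if any(p.startswith(high_risk) for p in changed_paths):
        return "deep-review"
    # AI-assisted changes are labelled so reviewers shift mental gears.
    if ai_assisted:
        return "standard-review"
    # Low-risk, human-written changes get a lighter pass.
    return "light-review"

print(review_tier(["payments/refund.py"], ai_assisted=True))  # deep-review
print(review_tier(["docs/readme.md"], ai_assisted=True))      # standard-review
```

The point of a rule like this is that review effort scales with risk rather than with raw PR volume, which is exactly the pileup the comment describes.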