Vibe Coding: Revolution or Reckless Abandon?
Decoding vibe coding: hype, hope, and hard truths for developers
tl;dr: While AI rapidly speeds up code creation (e.g. "vibe coding"), it dangerously lowers the bar for introducing subtle bugs, security flaws, and unmaintainable complexity. Instead of just embracing speed, the industry needs developers to act as expert curators and critical validators, reinforcing strong design, code review, security testing, and system understanding to ensure AI contributes to robust, reliable software. Beginners who vibe code also need better education about the risks. You have to make AI generate code up to your standards. "Vibe coding" is not an excuse for low-quality work.
This post is also available as a NotebookLM podcast if you prefer to listen :)
The origin of "Vibe Coding"
"Give in to the vibes, embrace exponentials, and forget that the code even exists."
With those words in early 2025, Andrej Karpathy introduced the world to vibe coding, a provocative new approach to writing software. Karpathy - a co-founder of OpenAI and former director of AI at Tesla - coined the term in a tongue-in-cheek post. He described a surreal coding experience: using an AI pair-programmer so advanced that he could talk to it (via voice recognition) and simply "vibe" his way through a project.
He would ask for the "dumbest things" like UI tweaks instead of manually digging through code, always click "Accept All" on the AI's suggestions without reviewing diffs, and whenever errors arose, just paste them back into the AI for fixes. The codebase grew in ways beyond his normal understanding, yet by iteratively poking and prodding (sometimes even asking for random changes until bugs disappeared) he could get a working app. As Karpathy admitted:
"It's not too bad for throwaway weekend projects... I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works."
In other words, he wasn't meticulously crafting code - he was feeling his way through it with AI assistance, treating software development like an improvisational conversation with the machine. Thus, "vibe coding" was born, half serious and half in jest, to describe this freeform, intuition-driven mode of coding with AI.
Karpathy's post struck a nerve. Within weeks, vibe coding became a buzzword featured in major media like The New York Times, Ars Technica, and The Guardian. It was a compelling origin story: one of the world's top AI researchers and programmers publicly experimenting with ceding control to an AI coder. For a senior engineer audience, this scenario sounds equal parts intriguing and alarming - a possible glimpse of the future or just a caricature of AI hype. To understand the nuance, we must look at how vibe coding's definition evolved beyond Karpathy's initial "weekend project" context and how different developers interpret (and misinterpret) the idea today.
Defining the "vibe": from Karpathy's intention to engineering buzz
Vibe coding was never meant to describe all AI-assisted coding - it's a specific flavor where you don't even read the AI's code before running it
Karpathy's coinage was a tongue-in-cheek way to describe coding by intuition and interactive feedback, fully leaning on an AI agent. However, as soon as the term went viral, confusion followed. Many people began slapping the "vibe coding" label onto any use of AI in programming, diluting its meaning. Veteran developers like Simon Willison quickly pointed out that "vibe coding is not the same thing as writing code with the help of LLMs". In responsible AI-assisted programming, you might use tools like GitHub Copilot or ChatGPT to generate code but you still critically review, test, and integrate that code as part of a normal development workflow. Vibe coding, in its original sense, eschews much of that rigor - it implies a kind of blind trust in the AI's output, a decision to "forget that the code even exists" and just push forward.
This distinction is crucial. Karpathy intended the term to describe a particular extreme: a developer acting more like a prompt-giver and bug tester while the AI does the bulk of the coding, without the developer carefully reading every line. It was almost self-deprecating - an expert coder intentionally not using his skills, just to see how far the AI could go. But as the term caught on, some observers missed the wink and took vibe coding at face value as a serious methodology to be applied broadly. In online discussions, one can see the definition stretching in two directions: one camp celebrates vibe coding as an exciting new human-AI collaboration paradigm, the other condemns it as a dangerously sloppy practice that no competent engineer should ever indulge in.
On one end of the spectrum, enthusiasts use "vibe coding" playfully to mean rapid prototyping with AI - a bit like improvisational pair programming. They emphasize that it's about high-level guidance and quick iterations, not total abdication of responsibility. For example, one developer explains that:
"vibe coding is more like pair programming with an AI that helps you move faster, not handing over control entirely"
In this interpretation, the "vibe" is just the flow state of iterating with an AI: you set the intent, the AI writes some code, and you steer it, almost like a conversation. Proponents stress that not all use of AI falls under vibe coding - if you're thoughtfully integrating AI-generated code, you're simply doing AI-assisted programming, which can be very disciplined. True vibe coding is a subset of that: specifically building software with an LLM without reviewing the code it writes. As one commenter succinctly put it, "vibe coding" is basically "code generation through an AI model, without even understanding it" - a workflow characterized by minimal upfront planning and minimal code comprehension. If you're still sanity-checking outputs or writing tests, you're not fully "vibing" in the original sense.
This clarification has been important because the term immediately became polarizing. Some early adopters proudly declared themselves vibe coders, while many experienced engineers recoiled at what sounded like an insult to software engineering principles. The internet being the internet, a bit of caricature ensued. Supporters sometimes overhyped vibe coding as a revolutionary new normal, while detractors sometimes misused the term to dismiss any AI usage as irresponsible. Over time, a more nuanced consensus emerged that vibe coding refers to a very specific, and risky, mode of AI-driven development - essentially "letting the AI drive" while the human just nudges and observes. All the nuanced positions along that spectrum, from careful use of Copilot to full-on "I don't read code anymore" braggadocio, have been conflated in debates about vibe coding. In the sections that follow, we'll disentangle those positions and explore the full landscape of discourse: the allure of vibe coding, the pitfalls and failures, and why it elicits both excitement and dread in equal measure.
The Allure: why developers embrace Vibe Coding
"For low-stakes projects and prototypes, why not just let it rip? An LLM can produce code an order of magnitude faster than even skilled humans."
Speed and creativity - these are the core promises of vibe coding that excite developers. Andrej Karpathy's own stance on his newfound technique was notably upbeat in its context. He didn't need AI assistance as an expert, but he found it fun and liberating. The AI could churn out boilerplate and even suggest entire implementations faster than he could type. This meant he could try out "wild new ideas" with unprecedented velocity. Many hackers recognized this potential: vibe coding can feel like having a supercharged junior developer who never sleeps. If the stakes are low, why not indulge in a bit of rapid, AI-fueled hacking to see what's possible?
Indeed, numerous success stories and enthusiastic testimonials have emerged. One Hacker News user recounted building surprisingly useful little utilities entirely via vibe coding, admitting "the fact that it works at all was hugely surprising to me". Another developer, inspired by Karpathy's example, described how:
"it feels like doing art...phrasing things one way and then another, testing, discarding changes, and phrasing them a new way"
- an almost creative flow state of enmeshed human-machine collaboration. That sense of flow is a big draw. Proponents compare it to being "in the zone" or achieving a musical groove with an AI partner. You're not stopping to check every semicolon; you're riding a creative wave, guided by the vibe of seeing the program evolve in real time. It's coding by feel - something previously impossible when a human had to manually handle every detail.
Moreover, vibe coding hints at a future of democratized software creation. By dramatically lowering the barrier to produce working code, it "frees the imagination" of people who have ideas but lack deep programming expertise. As one analysis noted, this could usher in an era of "home-cooked software" - highly personalized apps and tools built by end-users themselves, because the AI helpers enable them to iterate on their idea in plain language. Think about someone with a day job in marketing who has a niche idea for a web app: with vibe coding tools, they might actually be able to prototype it without formally learning programming. Enthusiasts herald this as revolutionary. It's not that software engineering is suddenly trivial, but with AI copilots doing the heavy lifting, more people can participate in creating software. Vibe coding shaves that initial barrier to nearly flat, as one commentator observed - potentially millions of non-coders can build their own custom tools if the interfaces are this natural. Some of those newcomers, in turn, "will get bitten by the programming bug and go on to become" full-fledged developers, meaning vibe coding might even serve as an on-ramp to learning, rather than a replacement for expertise.
Even seasoned engineers find appeal in the vibe coding style, when used judiciously. It enables a kind of exploratory programming at high speed - similar to rapid prototyping or a hackathon mode. You can throw together a concept over a weekend by simply describing your intent and letting the AI fill in the gaps. If something doesn't work, you poke it, tweak the prompt, or ask the AI to refactor, rather than manually tracing through every line. Many have noted that this is essentially an extension of how we already prototype ("write one to throw away"). "Sometimes you just need a place to sit" one developer quipped in defense of quick-and-dirty construction, likening vibe coding to hastily nailing together a makeshift chair - it might be ugly, but it gives you somewhere to sit for now. Another pointed out that:
"vibe coding is just a new name for exploratory programming or good old prototyping".
In other words, hacking together a version 0.1 of something - a time-honored practice - is made faster and more accessible with these AI "co-developers."
Importantly, many who have actually tried vibe coding report that it's not about magic or doing nothing; it's an interactive skill of its own. You have to learn how to speak to the AI, how to guide it when it veers off, and when to step in.
As one early adopter explained, the key is "treating the AI as a collaborator, not a crutch". For instance, a developer named Jon wrote about using an AI agent (DeepSeek + Cursor) to build a chatbot interface for a Raspberry Pi. It still took him over 100 careful prompts and refinements to get to a satisfactory result. But that was far faster than coding it all manually - the AI augmented his productivity by generating lots of code that he then tested and steered. This illustrates an important nuance: vibe coding can augment experienced developers, helping them code faster by handling the rote parts, while the developer focuses on the high-level direction. In fact, some argue that trying this style is one of the best ways for an experienced dev to develop intuition about what today's AI can and cannot do. It's a learning experience: by pushing an LLM to its limits in a project, you quickly see its failure modes and capabilities, arguably becoming a better engineer for it. And when it works, it's exhilarating.
Finally, we have concrete success stories fueling the hype. Perhaps the most cited is an indie hacker, Levelsio, who reportedly "vibe-coded" an entire game and made over $80,000 in revenue from it within weeks. While his large following undoubtedly helped the sales, the fact that a solo creator could spin up a profitable game so quickly with AI assistance set imaginations on fire. Others have showcased substantial projects built largely by prompting an AI. One developer shared links to a search-query DSL and a GUI tool for constructing Unix pipelines - not trivial toys - claiming "all of these were made basically with the same tools and methods that Karpathy is describing as vibe coding". There are even cases of programmers using vibe coding to generate low-level code in systems languages. An enthusiast recounted how he let an AI (Cursor with Claude Sonnet) write 95% of a new DSL in C, including tricky integrations like Lua scripting and database queries - he only intervened to set up memory management and architecture, typing maybe 5% of the code himself. "It was a wild headspace to be in" he noted, but the experiment succeeded. Such anecdotes show that, in capable hands, vibe coding can produce non-trivial, working systems. It's not limited to tweaking UI or writing CSS (a misconception some had); people have built entire websites and complex features through this approach.
In summary, the allure of vibe coding lies in speed, creative flow, and expanded access. It offers a taste of a world where telling a computer what you want is closer to a conversation than a painstaking translation into code. For many developers - especially those working on side projects or experimental ideas - that taste is sweet. "Vybe on" as one enthusiast wrote after extolling how it rekindled his joy in coding by offloading all the tedious bits. But as we'll see next, this freedom and speed come at a cost. The very things that make vibe coding feel like "magic" in the moment can lead to nightmares down the road. The coin of innovation has another side: trade-offs in quality, safety, and understanding.
"It works until it doesn't": Security and maintainability concerns
"Smashing the 'Accept All' button is the digital equivalent of playing Russian roulette with your codebase."
This colorful warning comes from a senior engineer reacting to Karpathy's vibe coding approach. For all its thrill, vibe coding triggers serious alarm bells for experienced developers, especially around security, code quality, and long-term maintainability. The scenario of blindly accepting AI-generated code changes without review is a nightmare in any production context. One commentator didn't mince words:
"This 'vibe coding' horseshit is the programming equivalent of closing your eyes while driving because your Tesla's on Autopilot. It works great until you're wrapped around a tree wondering why the AI didn't see that coming."
In less vivid terms, the concern is that vibe coding encourages a complete suspension of engineering judgment. And that, inevitably, will end in disaster when the "happy path" ends.
Security is one of the most immediate worries. Modern applications have countless pitfalls - from SQL injection and XSS to authentication bugs - that require careful code review and testing to catch. An AI, no matter how advanced, does not (yet) have an intrinsic understanding of your application's security model or business logic. It will blithely write code that looks plausible but may introduce subtle vulnerabilities. Developers have already observed this in practice. One engineer recounts that, in his extensive experience with AI coding assistants, "prompting for security is nowhere near sufficient" - if you vibe code a web app and deploy it without carefully reading through what the AI wrote, you're courting danger. He provides concrete examples: an AI added sensitive API endpoints with no authentication required, modified existing endpoints in ways that bypassed authorization checks, and generated HTML templates with glaring XSS holes. Each of these would be a severe flaw in production. Without a human in the loop doing code review, such issues slip by unnoticed because the AI doesn't truly "understand" the security context - it just regurgitates patterns. "Trusting the LLM" too much is listed as a top anti-pattern in vibe coding; even a cautious developer can be lulled into a false sense of security by a few good outputs, only to have a nasty surprise on code review later.
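To make the XSS point concrete, here is a minimal, hypothetical sketch in Python (the helper names are invented for illustration; this is not the engineer's actual code) showing how an unescaped template differs from a reviewed one:

```python
import html

# Hypothetical sketch: the kind of template rendering an AI assistant might
# emit, next to what a reviewer would insist on before merging.

def render_comment_unsafe(user_input: str) -> str:
    # Plausible-looking AI output: interpolates user input straight into HTML,
    # so a malicious comment would execute as script in a visitor's browser.
    return f"<p class='comment'>{user_input}</p>"

def render_comment_safe(user_input: str) -> str:
    # Reviewed version: escape user-controlled data before embedding it.
    return f"<p class='comment'>{html.escape(user_input)}</p>"

payload = "<script>alert('xss')</script>"
print(render_comment_unsafe(payload))  # the script tag survives intact
print(render_comment_safe(payload))    # escaped to &lt;script&gt;... and rendered as text
```

Both functions "work" in a demo, which is exactly why the unsafe one sails through a vibe-coding loop: nothing visibly breaks until an attacker supplies the payload.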
Beyond security holes, there's the broader issue of code health and maintainability. Traditional software engineering places a premium on code being understandable - not just for vanity, but so that bugs can be fixed and new features integrated down the line. Vibe coding explicitly deemphasizes understanding the code. Karpathy's whole point was forgetting the code exists and focusing only on whether the app behaves as desired. That approach might be "amusing" for a quick experiment, but seasoned engineers find it terrifying as a general practice. "The code grows beyond my usual comprehension, I'd have to really read through it for a while" Karpathy admitted of his vibe-coded project. To many, that sentence is a huge red flag. If you don't understand your own codebase, how can you possibly trust it or maintain it? "When you don't understand your own codebase, you're not a developer anymore. You're just an extremely inefficient person who never learned to code, just using AI" scathingly observes one critic. It may sound harsh, but the point is that code comprehension is not optional in real software projects. If something breaks in production or you need to extend the system, someone has to know how it works. A vibe-coded codebase - one that grew organically from prompts and AI suggestions - can end up like a mysterious black box even to its creator. In the best case, that slows down further development; in the worst case, it's unfixable and you have to throw it away.
Multiple developers have raised this scenario:
vibe coding "works until it doesn't, and then you have this massive codebase you're responsible for that doesn't work, and you don't understand why or know whether it can even be fixed."
The image of being on the hook for a pile of code that you effectively didn't write (and can barely read) is the stuff of engineering nightmares. Imagine a critical bug arises in a vibe-coded system that's now in use - what next? If the original author treated it as a disposable prototype, they might not even be around or have the patience to decipher it. If it's handed to a team, the team faces a steep reverse-engineering effort. In professional environments, this is unacceptable. That's why many experienced voices assert that vibe coding "would not fly in any real programming environment" where code quality and accountability matter. It's fine for a personal side project, but try telling your staff engineers that you merged code you haven't looked at - that's a non-starter.
Another risk vector is hidden bugs and technical debt. Vibe coding often leads to a very reactive style of development: you prompt for a feature, run the code, see an error, feed the error back to the AI, get a fix, proceed, and so on. This whack-a-mole approach means you're always addressing the immediate visible issue, not proactively designing for correctness or robustness. It's easy to imagine how many latent bugs or poor design choices could be snowballing under the surface. One developer likened it to building without knowing where the "load-bearing walls" are - you might inadvertently make structurally unsound decisions.
The AI won't tell you "hey, this approach will be hard to maintain" or "we're introducing a performance bottleneck here." It just writes code to satisfy the prompt at hand. Over time, these quick fixes and circumventions can accumulate into a serious mess. Indeed, early vibe coders report that you hit a productivity wall: "After a short while I realized a lot of the hype was just that, and much of the antipathy was nonsense. [But] I quickly discovered the antipatterns." The same author lists one antipattern as expecting the AI to maintain a coherent long-term plan - it won't. Another antipattern is "working without a plan" yourself - if you just say "code me X" with no design, you often get a tangle that goes nowhere. In other words, even a vibe coding proponent admits that if you approach it naively, you waste time and end up refactoring constantly. A developer who heavily tried vibe coding for two months confessed:
"in the beginning you seem to be getting places, but since you have no structure, you become consumed by the next error or feature that appears... The time you 'save' in the beginning, you'll have to spend later to rewrite the code to do what you intended in the first place."
In effect, you incur technical debt at lightning speed. Without careful structure, the project can devolve into an incoherent patchwork that eventually forces a rewrite - erasing the initial speed advantage.
Perhaps the most damning critique is that vibe coding lacks all the traditional checks that keep software quality high: design review, code review, testing, debugging.
It's coding in a procedural trance, hoping the outcome is correct.
"Vibe coding crap has zero comprehension, no debugging, no code review, no security, no accountability, no real structure" fumes one blogger.
Hyperbole aside, he's pointing out that the normal safety nets are absent. If the AI introduces a subtle bug, and you never write unit tests or read the diff carefully, you won't catch it until something breaks at runtime (if ever). And when you do hit a bug that the AI can't immediately fix, what then? Often the human still has to roll up their sleeves and debug - except now they are debugging code they didn't fully write. In practice, vibe coders say you must intervene at times. "I can definitely out-debug the machine" one experienced developer noted, describing how eventually he steps in to troubleshoot issues the AI gets wrong. AI assistants can also get stuck on a wrong hypothesis about a bug, proposing elaborate fixes for a one-line issue. A human debugger can recognize that and correct course, but only if they re-engage with the code. This oscillation - disengage from code, then dive back in to debug - can whiplash productivity if not managed well, and it's easy to imagine less experienced developers getting completely lost.
In summary, the "move fast and break things" ethos of vibe coding might yield quick results, but it breaks the covenant of software engineering which demands understanding and vigilance. The collective wisdom from seasoned developers can be summed up simply: if you do choose to vibe code, treat the output with extreme skepticism. Review the code before trusting it. Test it as thoroughly as you would test anything hand-written. Otherwise, you're one unchecked buffer or logic error away from a security incident or a crashed system that you have no idea how to fix. As one Hacker News commenter dryly put it:
"It honestly sounds like building a chair by just randomly nailing pieces of uncut wood together until it barely looks like a chair and supports your weight." Yes, you got a "chair" in the end - but who would sit on that in production?
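The "review and test it as you would hand-written code" advice is worth making concrete. Below is a hypothetical sketch (the function names and the bug are invented for illustration, not taken from any quoted developer) of the kind of plausible-looking, subtly wrong code an AI can produce, and the trivial assertion that exposes it:

```python
# Hypothetical illustration: a plausible AI-generated pagination helper with
# a subtle off-by-one bug, plus the reviewed fix. A one-line test catches it.

def paginate_buggy(items, page, per_page):
    # Looks reasonable at a glance, but callers pass 1-indexed page numbers
    # while this slice treats them as 0-indexed: page 1 silently skips items.
    start = page * per_page
    return items[start:start + per_page]

def paginate_fixed(items, page, per_page):
    # Reviewed version: convert the 1-indexed page number before slicing.
    start = (page - 1) * per_page
    return items[start:start + per_page]

items = list(range(10))
assert paginate_fixed(items, 1, 3) == [0, 1, 2]  # first page = first items
assert paginate_buggy(items, 1, 3) == [3, 4, 5]  # the bug: page 1 returns page 2's data
```

Nothing crashes and every page "mostly works," which is precisely the failure mode described above: without a reviewer or a test, the bug surfaces only when a user notices missing rows.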
Prototype tool or production liability? (Personal projects vs. professional software)
"Vibe coding a prototype for fun is easy. Developing a robust software solution with auth, security, best practices... is way harder, and vibe coders can't do that yet."
This quote captures a widely shared sentiment: vibe coding might be acceptable for personal or throwaway projects, but it's not ready for prime time in production. Even Andrej Karpathy was careful to frame his experiment as something he did for a weekend project "that nobody depends on". He implicitly acknowledged that this approach is not (at least currently) suitable when reliability matters. Many supporters of vibe coding echo this: it's a great way to implement ideas and validate them - a "dream for startups" at the prototyping stage - but a different attitude must kick in when you go from demo to production. One engineer commented that he doesn't mind if the next TikTok clone is churned out by AI, "but how about software in nuclear plants... or medical devices?" For high-stakes or long-lived software, the carefree vibe approach is a non-starter.
This split between personal tooling vs. production systems comes up again and again. Proponents often try to defuse critics by saying, "Relax, nobody is suggesting to ship vibe-coded apps to paying customers or critical infrastructure - it's for learning and personal projects." And indeed, a lot of vibe coding's current use is in private experiments, hackathon demos, or niche tools the author alone uses. A commenter on HN pointed out that we shouldn't be too alarmist: "It doesn't have to be the only way someone ever builds software... vibe coding can be a great way to learn and explore. Build one to throw away, as they say."
The idea here is that there's nothing wrong with quickly throwing together a version 1 using an AI, fully expecting that you might discard or rewrite it once you've proven out the idea. In fact, this matches a classic recommendation from Fred Brooks ("plan to throw one away"). The nuance, as another engineer interjected, is that Brooks meant you should try your best on the first implementation and then be willing to throw it out if needed - not that you should intentionally do a half-baked job initially. The danger is if someone treats vibe coding as a license to not even attempt sound engineering on version 1, they might end up anchored to a shoddy foundation because it "sort of works" and then try to patch it rather than rebuild properly. The healthy approach, this person suggests, is perhaps the opposite: first do an honest implementation yourself (so you deeply understand the problem), then maybe let an AI have a go and compare, so you're in a position to judge the output. That way, you leverage AI without becoming completely dependent on its potentially flawed approach from the start.
Within companies and teams, vibe coding raises practical workflow questions. Can a vibe-coded prototype be handed to a team for productionizing? Possibly, but it may negate the initial speed advantage if the team has to painstakingly audit and refactor it. Some foresee that this will become a new kind of hand-off, akin to how data scientists prototype something in Jupyter and engineers then rewrite it properly. The worry, though, is that if management only sees the shiny prototype working, they might underestimate the effort needed to make it production-quality - leading to pressure to deploy something unsafe. There's already cynicism that tech leadership or investors are hyping AI development as a way to cut engineering costs or skip steps. "Yes, vibe coding is part of the great LLM marketing campaign" one developer remarked dryly, expressing skepticism at some of the bold claims.
In a Y Combinator batch, for instance, some startups claimed astounding 10× to 100× productivity boosts from AI coding. Seasoned observers called these out as dubious: a 100× speedup would mean compressing what used to take a whole team-month into a single day, which clearly isn't happening in reality. Even a 10× gain across an organization would be "glaringly obvious" to any external observer, which it has not been. These claims might tempt less technical stakeholders to push engineers toward irresponsible workflows like unchecked vibe coding in hopes of miraculous productivity gains. Thus, a sensible guideline emerges: use vibe coding for exploration and prototypes, but when it comes to production, apply the usual rigor. Or as one blog concluded:
"'Vibe Coding' might get you 80% of the way to a functioning concept. But to produce something reliable, secure, and worth spending money on, you'll need experienced humans to do the hard work that today's models can't."
In other words, the last 20% - which is often 80% of the effort - still requires traditional engineering. I’ve written about this in the 70% problem.
Interestingly, some developers caution that the line between prototype and production can blur quickly, sometimes to the detriment of quality. A personal project that "mostly works" can gain users or importance, and suddenly you have a quasi-production system that was never built with longevity in mind.
One HN user mused that people defend vibe coding by saying it's not meant for production, "but let's not be naive, because it will 100% be used to push actual applications." Once a tool exists, someone will eventually use it beyond its safe zone. Indeed, we're already seeing early instances: indie hackers selling vibe-coded apps to customers, or internal tools vibe-assembled and then used by teams.
The challenge for teams is how to incorporate AI-driven development responsibly. Perhaps one model is to allow vibe coding during a spike or prototyping phase, but then require a full code review and test cycle before anything goes live. That would preserve some of the speed benefit while catching the worst issues. Tool vendors are also aware of this gap - we see emerging features like sandboxed execution for AI-generated code (to prevent security disasters) and integrated test generation to bolster reliability. The ecosystem might evolve such that "vibing" can be done in a constrained environment where the stakes are low (e.g. the AI can't directly deploy to production, and any code must pass tests). In short, the industry is feeling out how to get the best of both worlds: the creativity and speed of vibe coding, and the safety and robustness of traditional methods.
At present, the consensus is conservative:
"Until [AI agents] develop a lot further, 'vibe coders' will not be able to release production-quality software in this world, no matter how exponential the agent is over their own inferior skill set."
That stark statement from one critique underscores that, today, vibe coding is largely limited to proto-software - useful experiments, personal scripts, one-off apps. It's overrated to think this heralds the death of software engineering or that we can ship serious software built in this fashion. We still need skilled humans to tighten the bolts. The prudent approach for now is: Prototype with vibes, but production needs discipline. Or as another put it more bluntly: "Learn to code! Then vibe code." Use the vibes to enhance productivity, not to replace fundamental understanding when it really counts.
Impact on developers and teams: Are we all product managers now?
"Hardly anything feels gatekept by obscure knowledge anymore... If you don't like it, don't use it; it's not for everyone. This is not a replacement for those who code like machines - keep doing what makes you special!"
This exuberant statement captures how vibe coding is changing the self-image and role of developers. For some, it's empowering - suddenly they can do more on their own, without needing to memorize API minutiae or be a wizard in every framework. For others, it's threatening - it suggests a future where a lot of traditional coding skill could be bypassed or devalued. Let's unpack the economic and career implications of vibe coding and how teams might reorganize around these AI "co-developers."
One immediate observation: vibe coding shifts a programmer's day-to-day work more towards specifying what to build rather than crafting the implementation. This has led to quips like, "We are all product managers now." Instead of spending time deeply thinking through algorithms or debugging pointer issues, a vibe coder spends a lot of time describing desired features, testing the app, and deciding what feels right or wrong - arguably tasks that overlap with product design and QA more than classic software engineering. Some celebrate this shift, arguing that it lets developers focus on the higher-level problem solving and user needs rather than boilerplate. Others worry it deskills the profession. If taken to an extreme, a future junior developer might not learn how to write a sorting function or optimize a query; they might just prompt the AI with "make this faster" and trust it. Over time, could this erode the pool of human expertise?
A few experienced voices have expressed concern about skill atrophy.
"My fear is that if I were to start doing this, I would stop engaging in the difficult aspects of building something. And I think the skills I have now would atrophy"
one senior developer confessed. This echoes a broader anxiety: reliance on too much AI assistance might make the next generation of programmers less capable of solving hard problems independently. If you only ever "vibe out" small projects and never struggle through the foundational learning moments (debugging a tricky bug, understanding algorithmic complexity, etc.), you might not develop the deep expertise needed to handle those situations when they inevitably arise. Another commenter wondered:
"What happens if you go from troubleshooting problems 5 times a week to once every few weeks because the AI fixes most issues? Does that diminish your problem-solving muscles over time?"
These are open questions. It's possible that future developers will simply have a different skill profile - maybe more akin to an orchestra conductor or a QA engineer combined with a product thinker. But the transitional period is awkward: today's engineers value the skills honed by hands-on coding, and vibe coding appears to short-circuit that training process. We may need to consciously adjust how we mentor new engineers in an AI-pervasive environment, ensuring they still get exposure to the "hard stuff" and not just ride on AI autopilot.
On the flip side, vibe coding could open the field to more people, which has labor market implications. If indeed "millions of new people" can create software with much less training, the exclusive aura around coding might fade. Some experienced devs have reacted defensively to this. They take pride in programming as "an art of intellectual precision" and balk at terms like "vibe coding" that make it sound like a casual jam session. One commenter said the term itself is "antithetical to everything programming means to me", because it downplays rigor.
There is a hint of cultural clash: the meticulous craftsman vs. the fast-and-loose AI wrangler. If the latter approach gains prominence, will the craftsman feel devalued? Possibly. A particularly pointed remark on HN was: "It's interesting to see some 'engineers' have their pride and feelings hurt here. Turns out software engineering isn't gonna be among the sexiest jobs much longer... the arrogance is erupting in real time." This suggests that some developers see vibe coding as an equalizer - a humbling force for those who rode the tech boom feeling indispensable. When anyone can churn out a basic app, maybe being a programmer won't confer as much status as it did in the 2010s.
However, before declaring the coding profession "done for," it's worth noting that experienced engineers still have unique value that vibe coding doesn't replace. As we saw, complex, reliable systems still need that expertise.
One analogy drawn was the Industrial Revolution:
"There are still people manufacturing custom, high-quality items... but most things that used to be made in workshops are now produced in factories, with orders of magnitude fewer workers."
In software terms, routine CRUD apps or simple websites might become "factory-produced" by AI with far fewer human developers needed. But the bespoke, complex systems (or any novel problem) will still need skilled engineers - albeit possibly fewer of them. This raises tough questions: Will entry-level coding jobs diminish? Will we need as many junior engineers if one senior with AI helpers can do the work of five juniors? It's conceivable that the shape of software teams will change. We might see more demand for roles like "AI navigator" or "prompt engineer" (people who are very good at getting results out of AI), along with "AI auditor" or "software custodian" (people who ensure AI-generated code meets quality standards).
In the Ask HN "Are you vibe coding?" discussion, one person remarked that they use AI as a helper, "sort of like pairing with a junior dev who is genius in some ways and unbelievably dumb in others." That's a great way to frame it (no disrespect to juniors): current AIs can do a ton of grunt work (like a tireless junior), but they lack judgment in other ways (like a junior who makes weird mistakes). So you, the human, act as a mentor/manager to this pseudo-developer. "I think at this point it's no longer coding, it's more 'AI coder management'," another commenter observed. We might literally see job postings in the future for "AI developer manager" - someone whose day-to-day is guiding AI agents to build software, reviewing their output, and integrating the results.
For individual careers, there's also an economic angle: those who embrace these tools early could become dramatically more productive (at least for certain tasks), potentially commanding higher pay or outcompeting peers.
Conversely, those who dismiss AI entirely might find themselves less efficient or left behind if the industry moves this way. It's reminiscent of previous paradigm shifts - e.g., developers who embraced high-level languages vs. those who insisted on assembly, or those who learned automation vs. doing everything manually. But it's tricky, because vibe coding to its extreme isn't advisable in production, so one must find the right balance. The safest bet for developers is to learn to use AI tools responsibly. As one article put it: AI coding assistants
"enable skilled people to create more independently than they ever have. [But] it will not replace those that can solve the hard problems that only experience and intuition can identify."
So a developer who combines experience and AI proficiency will be in the best position. They can leverage the AI for speed on easy parts, and apply their own skills for the hard parts. In contrast, someone who only knows how to prompt the AI without underlying understanding might hit a career ceiling - they'll struggle the moment the AI output isn't perfect (which is often).
Within teams, we might also see a shift in collaboration dynamics. If a single developer can prototype a whole feature by vibing with an AI, does that reduce the need for cross-specialty collaboration? For example, a frontend engineer with AI might single-handedly implement something that used to require backend help, because the AI can fill in the backend code for them. This could blur traditional role boundaries. It could also introduce new ones: maybe a "vibe coordinator" ensures that the code produced by various people+AI pairs still adheres to a coherent architecture (since AI might introduce inconsistent patterns).
Documentation and knowledge sharing also become interesting - if half the code was written by an AI, how do you document the rationale behind certain implementations? Some suggest that maintaining a "project journal" or having the AI generate a running changelog is useful, so that humans later can understand what was done. All these considerations hint that team processes will adapt. Code review, for instance, might explicitly include steps to identify AI-written sections and scrutinize them for known AI failure modes (like insecure defaults or inefficient loops). Education and onboarding might involve teaching newcomers how to effectively co-code with AI (and also how to double-check AI work).
Economically, if vibe coding and AI development broadly do make producing software cheaper, we could see more software created overall (expanding the pie, rather than just reducing developer jobs). That ties back to the "personal software" vision: more niche applications will be worth making because the cost is lower. Companies might start expecting engineers to handle multiple domains at once with AI help (maybe a single person builds an entire small product - front to back - with AI, whereas before you'd hire separate specialists). This could make the superstar generalist developers even more valuable, but it could also mean each project needs fewer total people.
In summary, vibe coding is catalyzing a re-examination of what it means to be a developer. Are we moving from being craftspeople to being curators and conductors of AI output? And if so, how do we ensure the next generation still learns the fundamentals?
The likely outcome is that software engineering will bifurcate: routine app development may become more automated (with relatively less-skilled operators overseeing AI - an evolution of the "citizen developer" concept), while the cutting-edge and critical systems remain in the hands of highly skilled engineers who now wield AI tools as force-multipliers.
Far from killing the programmer, AI might make good programmers even more indispensable, even if the average code monkey work decreases. The long-term health of our software ecosystem will depend on blending the best of both worlds: human expertise and AI assistance. And that requires developers to adapt, not just in skillset but in mindset - recognizing when to vibe and when to scrutinize.
Lost in Translation: debugging, design, and testing challenges
"To understand the code, you have to read it and interpret it - the whole 'just vibe and don't worry about the code' thing goes against that obvious truth."
The ethos of vibe coding - don't sweat the details, follow the "vibe" - clashes head-on with the painstaking activities of debugging, systematic design, and thorough testing. These are the unglamorous chores that turn a hack into a robust piece of software. When developers dive deep into vibe coding, they often report hitting a wall where these traditional skills suddenly become necessary again, but harder to perform because of how the code was produced. Let's examine how vibe coding makes debugging tougher, can lead to poor design decisions, and complicates testing - and why even vibe code enthusiasts end up reintroducing structure to overcome these issues.
One major challenge is debugging code you didn't fully write.
In vibe coding, it's common to end up with large swaths of code that the AI generated while you were in flow. As Lucas Aguiar described, the AI can spew out "a thousand lines of code in a matter of minutes" while you're experimenting. It's basically impossible for a human to absorb and understand that code in real-time as it's generated. Only later, when something doesn't work, do you confront this code for the first time. Many have likened this to trying to fix someone else's code - except that "someone else" was an AI that might not follow consistent logic or could have misunderstood your intent. The result? Time-consuming debugging sessions to decipher what the AI did. Aguiar noted that frustration often leads one to just "explain [the issue] to the model again, without even knowing what has been implemented in the first place".
In other words, when faced with a bug in AI-written code, a vibe coder's instinct (understandably) might be to ask the AI itself to fix it, rather than investigating manually. This can devolve into a loop: the AI tries a fix, maybe doesn't fully get it, introduces another issue, and so on - all while the human still hasn't fully read the code.
It's debugging by trial-and-error proxy, which can be extremely inefficient. One observer dubbed it "vibe tripping and falling on your face" - you keep stumbling as the AI and you chase the errors around in circles. Eventually, many realize they'd save time by stepping out of the loop and reading the darn code. As one HN user advised:
"It seems like a waste not to read code in your project, even if quickly. What's the harm? And think of the upside."
By skimming the AI's output, you might catch obvious mistakes immediately rather than encountering them indirectly through failing tests or runtime errors.
Another issue is that bugs can hide in plain sight if you never inspect the code or write tests. The AI might produce code that functions for the happy path you tried, but has edge-case bugs that won't show up until much later. Traditional TDD or careful thought can catch those early; pure vibe coding likely won't.
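To make that "happy path" failure mode concrete, here is a hypothetical sketch (the function and amounts are invented for illustration, not taken from any of the accounts above): an AI-generated helper that passes the one case a vibe coder tries interactively, while silently losing money on inputs nobody thought to prompt for.

```python
# Hypothetical AI-generated helper: split a bill evenly among diners.
def split_bill(total_cents: int, diners: int) -> list[int]:
    share = total_cents // diners
    return [share] * diners

# The happy path - the one case tried during the vibe session - works:
assert split_bill(3000, 3) == [1000, 1000, 1000]

# But on totals that don't divide evenly, a cent quietly disappears.
# No crash, no error message: the bug hides until an accountant notices.
assert sum(split_bill(1000, 3)) == 999  # one cent short of 1000

# The fix a reviewing human would write: distribute the remainder.
def split_bill_fixed(total_cents: int, diners: int) -> list[int]:
    share, rem = divmod(total_cents, diners)
    return [share + 1] * rem + [share] * (diners - rem)

assert sum(split_bill_fixed(1000, 3)) == 1000
```

Nothing here ever throws an exception, which is exactly the point: the edge-case bug only surfaces if someone deliberately tests beyond the happy path.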
Some vibe coders mitigate this by periodically asking the AI to generate tests or by running automated tests on the fly. In fact, some modern tools integrate test execution such that the AI will run tests and attempt to fix any failures as part of its loop. This is helpful, but not foolproof - an AI can only fix what it recognizes as a failure, and tests are only as good as the scenarios you ask for. If you forgot to test a condition, the AI won't magically cover it. An experienced human tester, however, might think of those conditions. This is a subtle point: AI is reactive - it responds to the prompts and examples you give it. It's not proactively designing for quality. So if a developer in vibe mode forgets to consider, say, performance under load, the AI won't spontaneously optimize for it. Many deep aspects of software quality (scalability, readability, modularity) need forward-thinking design, which vibe coding tends to skip.
The result can be code that works but is brittle or sloppy in ways that aren't immediately apparent. AI can come up with odd solutions (like an absurdly convoluted promise chain) that a human normally wouldn't write, and if you accepted it without question, you're left holding the bag when it fails. A chain of "17 nested promises" might function in simple tests, but it's clearly a ticking time bomb for maintainability. A human who wasn't watching for that kind of thing could let it slip in, whereas a peer code reviewer would have flagged it immediately ("What on earth is this? Refactor it."). Thus, vibe coding can allow questionable design choices to proliferate until they bite you later.
Speaking of design: architectural and design decisions are easy to get wrong when one is improvising with an AI. Traditional development often involves designing data models, choosing algorithms, and planning module boundaries before coding heavily. In vibe coding, you might skip straight to "let's see something working" without that forethought. This can paint you into corners.
For instance, you might prompt the AI to implement features one by one, and only later realize they don't compose well or that you should have used a different data structure. In a normal setting, you'd refactor, but if you've lost track of what the AI has built, refactoring becomes perilous. Some early adopters encountered this "context bankruptcy" - as the AI builds more and more, even it loses track of earlier decisions if they exceed the context window, and the human hasn't maintained a coherent mental model either.
The result is a disjointed system. "No structure" was one of the biggest pitfalls Aguiar cited; being led by whatever the AI writes next is not a recipe for good design. Ironically, after some painful experiences, many vibe coders come full circle and start imposing structure themselves to make the process workable. They write down plans, break the problem into sub-tasks, and even maintain documentation for the AI to reference.
For example, one guide advises storing a project plan file and a project description file in your repo, and keeping a changelog that the AI updates. These act as an artificial "memory" so that as you or the AI come back to the project, there's a reference of what's going on. In essence, they are re-introducing classical software engineering practices (design docs, specs, etc.) into the vibe coding workflow to counteract the chaos. It's a telling development: vibe coding started as "forget the code, just vibe" but to scale beyond trivial projects, you end up needing many of the same checklists and artifacts that structured development uses. The difference is you produce them alongside or in collaboration with an AI.
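As a sketch of what that "artificial memory" might look like in practice (the file names and structure here are assumptions for illustration, not taken from any particular tool): a few lines of Python that assemble the plan, project description, and changelog into a prompt preamble for each new AI session, and append a record after each accepted change.

```python
from pathlib import Path

# Hypothetical file names - any consistent convention would do.
CONTEXT_FILES = ["PLAN.md", "PROJECT.md", "CHANGELOG.md"]

def build_preamble(repo: Path) -> str:
    """Concatenate the project's 'memory' files into a prompt preamble,
    so a fresh AI session starts from the recorded plan and history
    instead of a blank context window."""
    parts = []
    for name in CONTEXT_FILES:
        f = repo / name
        if f.exists():
            parts.append(f"## {name}\n{f.read_text()}")
    return "\n\n".join(parts)

def log_change(repo: Path, entry: str) -> None:
    """Append a one-line record after each accepted AI change, so the
    changelog doubles as documentation for humans and context for the AI."""
    with (repo / "CHANGELOG.md").open("a") as log:
        log.write(f"- {entry}\n")
```

The point is not the specific code but the discipline: the plan and changelog live in the repo under version control, so both the human and the AI can be re-grounded in them at any time.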
Testing is another area of tension. Many vibe-coded projects (especially quick personal ones) likely forego rigorous testing - the human just runs the app and if it does what they asked, they move on. This can miss bugs. Forward-looking vibe coders are starting to leverage AI for testing, which is a promising idea: you can ask an AI to generate unit tests or integration tests fairly easily.
One blogger mentioned using an AI assistant that "is really good at creating unit tests, and while running tests it is fairly good at resolving the failures". This hints at a future where vibe coding doesn't mean no testing; instead, it means tests are also part of the conversation with the AI. The human might say "write tests for this function" and the AI does, then runs them and fixes issues. If this becomes standard, it could mitigate some quality concerns. Still, those tests have to be validated too - an AI might write simplistic tests that all pass but don't prove much. A savvy developer would review tests as carefully as code.
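A contrived illustration of that gap (the function and tests are invented for this example): a shallow, AI-style test that passes even against a broken implementation, next to the boundary checks a reviewing human would insist on.

```python
def normalize_email(addr: str) -> str:
    """Trim whitespace and lowercase an email address."""
    return addr.strip().lower()

def broken_normalize(addr: str) -> str:
    """A buggy variant that forgot to lowercase or trim anything."""
    return addr

# The kind of test an AI assistant often emits: it passes, but it also
# passes against the broken version, so it proves very little.
assert normalize_email("a@b.com") == "a@b.com"
assert broken_normalize("a@b.com") == "a@b.com"  # broken code sails through

# Tests a reviewer would add: exercise the behavior that actually matters.
assert normalize_email("  User@Example.COM ") == "user@example.com"
assert broken_normalize("  User@Example.COM ") != "user@example.com"  # caught
```

A quick way to review AI-written tests is exactly this mental exercise: imagine a plausibly broken implementation and ask whether the tests would notice.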
Interactive debugging is worth mentioning: debugging often involves using tools like debuggers, logging, etc. If you haven't read the code, it's harder to know where to put a breakpoint or what to log. Some vibe coders rely on the AI itself to debug: e.g., copy-pasting an error and stack trace into the prompt and asking for insight. This can work for straightforward issues (the AI might recognize a common mistake). But for complex logic bugs, the AI might not magically intuit the issue from the outside.
The developer might have to instrument the code and see what's happening - essentially, step out of vibe mode into classic debugging mode. By that point, lack of familiarity with the code is a handicap. One experienced user noted a pattern: "Claude (AI) tends to come up with a theory of the problem and get stuck on it, sometimes devising extensive solutions when it was a one-line bug." The human had to intervene, use a debugger to inspect the real state, and then correct the AI or directly fix the line. This suggests that for now, human intuition and debugging skills are still crucial in non-trivial scenarios.
In terms of developer mindset, vibe coding can encourage a short-term focus: fix what's broken, add what's asked, nothing more. This is efficient but can undermine qualities like good design and future-proofing which require thinking beyond the immediate task. Some developers worry that if they get used to this mode, their ability to do deep design could wane.
"If you need to be buried in your subconscious (flow) to achieve meaningful work, maybe you've failed somewhere in design or discipline"
one person opined, drawing a parallel between over-reliance on flow and vibe coding. Their point was that needing to turn your brain off (or let the AI handle it) might indicate a flaw in your approach. It's a contrarian view - many celebrate flow as productivity - but it underscores that deliberate, conscious design work is still valued. There's a balance to strike between vibing out code and stepping back to think critically about it.
Overall, the takeaway is that vibe coding doesn't eliminate the hard parts of software development; it just postpones some of them. Debugging, designing, and testing are still necessary to make software robust. If you skip those practices up front, you'll pay the price later when the mess becomes too big to ignore. Experienced developers in the discourse urge that vibe coding be augmented with at least lightweight versions of planning and review to avoid these pitfalls.
This has happened before: Hype cycles and déjà vu
"Vibe coding... I'd say it's just a new name for exploratory programming or good old prototyping."
To seasoned engineers, the buzz around vibe coding rings familiar bells. Software development has seen waves of hype about higher-level automation and paradigm shifts many times. From CASE tools in the 1980s, to visual programming and WYSIWYG IDEs in the 90s, to low-code platforms more recently - each promised to revolutionize coding and let us build software with drastically less effort. The reality usually fell short, often teaching us that the fundamentals of engineering still mattered. Many commentators are placing vibe coding in this historical context, seeing it as the latest swing of the pendulum between automation and craftsmanship. There's a sense of déjà vu: we've heard bold claims before, and we've also seen the limitations play out.
A common comparison is to Rapid Application Development (RAD) tools and visual builders. In the 90s, tools like Visual Basic and later GUI builders allowed developers to drag-and-drop UI components and generate code, or use wizards to scaffold applications quickly. These were great up to a point - until you needed to go beyond what the wizard generated or debug a complex issue, at which point you often hit a "sharp cliff". One commenter observed that a natural-language WYSIWYG coding tool might have an even sharper cliff than past RAD tools.
The analogy is that vibe coding can give a false sense of security with rapid initial progress (the easy 80% of features), but then you encounter the 20% of cases that require deep understanding or custom logic, and the tool doesn't smoothly handle that. Early adopters of vibe coding have effectively described this phenomenon - things go swimmingly until the AI can't resolve a tricky bug or until the project complexity outgrows the context window. Then progress grinds to a halt, and one faces a lot of painful catch-up work (learning the codebase, refactoring, etc.). This is analogous to how people used to swiftly build an app with, say, Microsoft Access or PowerBuilder, only to later discover it didn't scale or wasn't maintainable, necessitating a ground-up rewrite in a more robust stack.
Another parallel is to the hype around "no-code/low-code" platforms in recent years. These aimed to let non-programmers build apps via graphical interfaces and templates. They achieved some success in niche domains, but didn't eliminate the need for traditional developers for anything beyond standard CRUD apps. In fact, the conclusion from one critical blog could apply equally to no-code tools:
"Agents give the less skilled more capability than they had before. But until they develop their own competent skill set, vibe coders will not be able to release production quality software, no matter how exponential the agent."
This is reminiscent of how no-code allowed business analysts to prototype apps, but when it came to real production systems, engineers had to get involved to ensure reliability and scalability. The current AI coding assistants might be more powerful than earlier no-code platforms (because they can generate arbitrary code under the hood), but they follow a similar hype cycle: early excitement, followed by the realization of limits and the continued importance of software engineering fundamentals.
Some skeptics explicitly call out the marketing hype surrounding AI coding. "Yes, vibe coding is part of the great LLM marketing campaign. I'm old enough to be skeptical of things that are overhyped" wrote one veteran. If you've watched tech trends for a few decades, phrases like "AI will let anyone code" set off alarm bells. It's not that the claims are false - AI really does let us do some things much faster - but the degree of improvement is often overstated in the fever pitch of excitement.
One HN commenter wryly noted that "'Vibe coding' reminds me of the '10x Developer' memes... It sounded like an excuse for not being careful/thoughtful."
The point here is that flashy labels can sometimes mask poor practices. Just as some used the "10x" concept to justify cowboy coding or rudeness ("I'm 10x, I don't have time to explain my code"), there's a fear that vibe coding could be an excuse: "Oh, I was vibe coding, that's why I didn't write tests or docs - it's the new way!" The community is rightly pushing back that hype should not override common sense. If someone claims they built a complex system purely by vibing with an AI and it's perfect, that should be met with healthy skepticism.
There's also a cycle in how developers react: initial dismissal, then over-enthusiasm, then a balanced middle. We're likely in the transition from over-enthusiasm to balance now. Initial dismissal was seen in some comments that wrote off vibe coding as "just laziness" or not real coding. That's an oversimplification - as we discussed, it's not purely laziness; there are legitimate reasons to try this approach. Over-enthusiasm is evident in those who claimed vibe coding is the future and that understanding code might become optional.
But as reality sets in (through experiences like those in previous sections), a more nuanced view emerges: vibe coding is a powerful new tool, but it augments rather than replaces traditional skills. This tempered view is essentially what happened with past technologies. For example, when high-level languages came out, some thought assembly programmers would be obsolete; in reality, developers still needed to understand what the machine was doing, even if they now wrote in C or Java - it was a productivity boost, not a total replacement of thought. Likewise, Cursor/Cline/Copilot/Windsurf didn't replace programmers; they just sped up certain tasks.
The unique thing about vibe coding, perhaps, is the idea of not even reading code. That extreme invites skepticism because it's so contrary to best practices. It's as if a tool encouraged people not to use version control or not to write any tests - experienced folks will balk because we've been down similar roads (and paid the price) in the past.
Another historical echo is the concept of "end-user programming." For decades, people have imagined a world where anyone can program via natural language or simple interfaces (from COBOL's English-like syntax in the 1960s to Siri Shortcuts today). Vibe coding with plain English prompts is the closest we've come to that vision.
It does fulfill, in some measure, the sci-fi idea of telling the computer what you want in your language. But past attempts taught us that human communication is nuanced, and getting exactly what you want often requires precision that casual users struggle with. We see that with AI: you might have to refine a prompt 10 times to pin down what you actually intend. So while vibe coding tools will let more people create software, it won't mean everyone suddenly becomes a competent software creator overnight. We can expect a learning curve and new literacy to develop around prompting and verifying AI outputs. Much like spreadsheets empowered a huge population to "program" calculations (with some errors along the way), AI coding might empower many to build simple apps - and we'll have to live with a new layer of possibly buggy "citizen-developed" software that professionals then audit or integrate.
In the end, the hype cycle will likely land on a middle ground: vibe coding is a useful technique in the toolbox, not a magic wand.
"People are still free to program by hand if they want, just like some people still build engines from scratch - but it's also fair to just want to drive the car and take a trip somewhere."
This analogy to cars hints at how the narrative might settle. There will be those who value the craft of coding (like car enthusiasts who build or tune engines), and there will be those who just want to get software built (like everyday drivers who use a car without knowing the mechanics). Both can coexist. And much like the automotive world, we'll likely establish safety standards and best practices (traffic rules, seatbelts) so that even if you're "just driving" with an AI co-pilot, you don't crash and burn. The history of software suggests that every leap forward in abstraction (assembly to C, C to Python, on-prem servers to cloud, etc.) initially causes debate and skepticism, but eventually we integrate it and move on to the next challenge. AI-assisted development is going through that cycle now. Vibe coding in particular might not stay as a distinct concept; it could simply become one extreme end of AI-assisted dev that we know is risky. The fervor around the term will fade, and it will take its place alongside past trends - remembered both for the excitement it caused and the lessons it taught about the enduring realities of building reliable software.
War Stories: successes, failures, and lessons from Vibe Coding
"Sometimes, the journey is not the destination - build one to throw away. I vibe coded my entire website... but at some point you gotta introduce smart patterns to create controlled chaos."
The debate about vibe coding is not just theoretical. In the few months since the term was coined, developers have been experimenting in the real world, with varying outcomes.
From these stories, a pattern emerges: vibe coding tends to succeed when used by experienced developers on non-critical projects or as a way to boost personal productivity, and it tends to fail (or at least frustrate) when used indiscriminately or by those hoping to avoid understanding altogether. Successful vibe coders treat the AI as a powerful assistant - sometimes even a partner - but maintain a level of oversight or willingness to intervene. They also choose appropriate projects (often greenfield, well-bounded in scope, or not mission-critical). Those who have bad experiences often fell into the trap of over-relying on the AI or applying it in scenarios where its weaknesses mattered (like a complex codebase with constraints the AI didn't grasp).
In terms of lessons: one clear lesson is to start small. Many who enjoyed vibe coding did so with small utilities or components first. This lets you get a feel for the AI's capabilities and quirks in a low-risk way. Another lesson is know when to step in - treat it like working with a junior developer or an intern. If something seems off, don't just keep prompting and hoping it resolves; step in and fix or guide explicitly. The stories also teach that communication with the AI is a skill. Those who succeeded are often good at prompting and iterating (remember, the Raspberry Pi chatbot took 100+ prompts to refine). It's not magic on the first try; you have to work with it.
Finally, a heartening takeaway is that vibe coding can be fun and rewarding when approached with the right mindset. People have felt reinvigorated by this style - enjoying coding again, learning new things from AI suggestions, and breaking out of old thought patterns. One commenter joyfully noted that with the right combo of tools (voice input, a good code model, etc.), "It feels like taming a beast... an enmeshing of man and machine." He described closing his eyes and talking through solutions, then seeing them appear. That almost sci-fi experience is, in itself, a success - it's the kind of thing that can remind us why we love technology. So even if vibe coding isn't suitable for your day job project, trying it on a personal level can be eye-opening and instructive.
Conclusion: striking a balance between Vibes and Rigor
The rise of vibe coding has sparked excitement, backlash, and thoughtful discussion - and in the end, it underscores a timeless principle: tools change, but the fundamentals of software engineering remain.
Large language models have given us an intoxicating new way to create software by "just vibing" - tapping into an AI's vast knowledge to generate code at lightning speed, iterating through natural language prompts. This feels like a radical shift, and in some ways it is: developers can achieve in hours what might have taken days, and even non-developers can put together simple apps by describing their ideas. The genie is out of the bottle; AI-assisted coding (in various forms) is here to stay. Vibe coding, as the extreme end of that spectrum, has shown us the outer limits of what's currently possible - both the magic and the mayhem.
The key takeaway from our exploration is nuance.
Vibe coding is neither the end of programming as we know it, nor a meaningless buzzword for lazy hacks. It is a tool and technique with specific strengths and weaknesses. Used in the right context - a hackathon project, a prototype, a personal script - it can dramatically reduce friction and open creative doors.
Used recklessly in the wrong context - a security-sensitive production system, a team project with multiple collaborators - it can court disaster. Seasoned engineers have rightly pumped the brakes on the hype: "Don't believe people who boldly claim that software engineering is dying" because of vibe coding. Building robust, maintainable, complex software still requires careful thought, collaboration, and expertise. An AI can assist with those tasks, but it cannot replace the human judgment behind them, at least not with today's models.
So where do we go from here? The discourse suggests moving towards a best-of-both-worlds approach. We shouldn't throw out vibe coding entirely; rather, we should domesticate it. That means establishing patterns and tools to make it safer and more effective.
For example, employing "safe vibe coding" practices like sandboxing the execution of AI-written code, having the AI generate tests and documentation as it writes, and setting clear boundaries (e.g. always review critical sections, always enforce version control). Some efforts, like Anthropic's Claude "Artifacts" or Gemini's "Canvas" environment, have demonstrated how sandboxing lets you vibe code without risking your actual system. Similarly, IDEs are beginning to integrate AI in more controlled ways - for instance, having an AI suggest a multi-file refactor but staging it as a Git diff for you to inspect, rather than blindly applying it. As these tools mature, the workflow of vibe coding will likely become more interactive and transparent, addressing many of the current complaints.
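To make the sandboxing idea concrete, here is a minimal sketch in Python. The `run_untrusted` helper is illustrative (not from any of the tools mentioned above), and real sandboxes go much further - containers, seccomp, or a WASM runtime to cut off filesystem and network access - but even a separate, time-limited interpreter process keeps AI-generated code from touching your live session:

```python
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: int = 5) -> subprocess.CompletedProcess:
    """Run AI-generated code in a separate interpreter process.

    A minimal sketch of sandboxing: the snippet executes outside the
    current process, with a hard timeout so runaway loops can't hang you.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    # -I runs Python in isolated mode: it ignores environment variables
    # and the user's site-packages, reducing accidental coupling.
    return subprocess.run(
        [sys.executable, "-I", path],
        capture_output=True, text=True, timeout=timeout,
    )

result = run_untrusted("print(sum(range(10)))")
print(result.stdout.strip())  # prints "45"
```

The point is the boundary, not the mechanism: the AI's code gets its own process, its own clock, and captured output you inspect before trusting.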
Education will also adapt. Just as we teach new programmers about using debuggers or version control, we may teach them how to effectively collaborate with an AI coding assistant. Part of that will be instilling the importance of not surrendering one's understanding.
An apt motto might be: "Use AI to code faster, but never outsource your comprehension."
The moment you accept code you don't understand, you incur a debt that must be paid. Good engineers will still pay that debt, just perhaps at a later stage (e.g. reviewing an AI-generated module in detail before merging). Poor engineers might let it accumulate, and they or their team will pay dearly later. This dynamic isn't new - think of developers who copy-paste Stack Overflow code without understanding it. AI just amplifies it. The community seems to be converging on advice like: Read the diff. Write tests. Keep design in mind. If the AI suggests something, consider why. In essence, treat AI as a junior developer on the team: you welcome their contribution, but you also mentor and verify.
Another likely outcome is specialization. We may see certain domains where vibe coding (or heavy AI generation) becomes the norm, and others where it remains rare. Front-end UI tweaking, small utility scripts, internal tooling - these might lean towards vibe-like workflows because the risk is low and iteration speed is paramount. On the other hand, library development, core algorithms, security-critical code - these will likely remain mostly hand-written or at least hand-audited, because the tolerance for error is zero.
Over time, as AI improves, the boundary will shift, but developers will shift with it, always carving out what needs careful human attention versus what can be automated. A refrain in the discourse: AI can get you 70% of the way to a concept, but the last 30% needs humans. That split might incrementally become 90/10 or 95/5 as AI gets better at the routine stuff. But that last slice - the hard parts - might never fully disappear. Instead, humans will climb to ever higher levels of abstraction, tackling new hard problems, while AI handles more of the grunt work below. This has been the story of programming since its inception, and vibe coding is one more chapter in that evolving abstraction story.
In closing, vibe coding has been a catalyst for discussing what we value in software development. It's forced us to ask: if an AI can churn out code faster than us, what is our role? The answer many gave is heartening: our role is to provide vision, discernment, and deep understanding.
We set the intent and ensure the result truly meets the intent. We make the hard decisions and trade-offs that require context and foresight. We also infuse the process with ethics, responsibility, and empathy for users - things no AI currently possesses. Vibe coding doesn't remove the need for humans; it shifts it. As one commenter eloquently put it,
"Another way of looking at this is that a lot of people have much to gain by letting go of some (not all) low-level control and focusing on higher layers.".
That "letting go" can be scary - it feels like losing part of our craft - but it can also be empowering if done judiciously, freeing us to tackle bigger problems.
The term "vibe coding" itself may fade as the practices around it mature. But the core idea - using AI as an active development partner - is undoubtedly going to become commonplace. The challenge and opportunity for the software community is to integrate this partner in a way that enhances our capabilities without eroding the quality and integrity of our work.
As with any powerful new tool, there will be those who misuse it and those who master it. The hope is that by openly discussing successes and failures (as we have in this essay), we collectively get wiser. Perhaps the ultimate vibe is a harmonious one: human and AI working together, each compensating for the other's weaknesses, to build software faster and better than ever before.
Getting there will require experimentation, discipline, and yes, a bit of vibing - but with eyes open and hands ready to catch the wheel when the AI swerves off course.
In the end, the "vibes" should serve the code, not the other way around. And if we maintain that perspective, vibe coding can indeed rock - without rolling us off a cliff.
Addy, I fully agree with everything you shared. Your article is not only insightful but also brings a much-needed sense of maturity to the conversation around AI-assisted development.
What I’d like to add from my own experience is this: now more than ever, it's crucial that we build a solid mental foundation—clear principles, patterns, and paradigms—for how we approach and collaborate with AI. It’s not just about speed; it’s about developing a mindset that knows how to extract value from these tools while staying grounded in good engineering practices.
Even platforms with great version control and structure can break down if there's no conscious framework guiding how we use them. Personally, I’ve been exploring the use of custom instructions—refining a set of rules that define base premises, forbidden actions, and conversational boundaries. It’s been helping me turn the AI into a real partner instead of a chaotic code generator.
I’d honestly love to see a future piece from you exploring this idea—
“Rules before Vibes: A framework to avoid depressed coding™” 😅
Thanks for pushing the conversation forward with such clarity and depth.
Such a lovely well written and comprehensive article for something we're still wrapping our heads around.
I personally feel this is a pretty significant paradigm shift in the way we do things. Despite being on the conservative side, where I do look down upon vibe-coded, AI-bug-riddled projects, I acknowledge that the change is inevitable.
And a lot of my derision does stem from the existential threat posed by the "new way of doing things", which just puts greater stress on the need to find a middle ground between velocity and discipline.
As much as I don't want to bring out "old man yells at cloud" energy, I do believe there are going to be some messy times ahead till we reach that balance. There's going to be a lot more talk about protocols for balancing scrutiny with vibing. But in the end, vibing by definition - absolved from any form of control - might not exist in its so-called truest sense for anything that even touches production code.