Building with AI — buff or debuff?
My experiences using AI tools to boost personal productivity and build side projects.
Having experimented with generative tools for months, I'm often asked: do they actually change the game for personal productivity and software development, or are they still too fickle to trust? The short, perhaps unsatisfying answer is "both." Here's why.
The buff
For those of us who aren't professional engineers, generative AI looks like a massive unlock — and it is. A Python debugging exercise that once took hours is now near-instantaneous. This productivity boost isn't restricted to software engineering — it extends to news aggregation, inbox summarization, ideation and more, but getting there takes some hard work.
The good news is that many in the community are building great content to guide the way. I've found folks like Matt Pocock, Jake Van Clief, and Jeff Su to be invaluable. Their YouTube tutorials and template resources helped me move these tools from "interesting tech" to "functional utility."
One of my most critical learnings from Matt is the importance of agent alignment. For example, his "grill-me" skill relentlessly questions project details, exposing the architectural assumptions that can lead to unexpected outcomes. By driving alignment early and converting it into phased plans, my success rate for AI-driven development has improved significantly.
Some of the other concepts I've learned from these experts include the use of agent instruction sets (e.g. AGENTS.md), memory (MEMORY.md), skills (SKILLS.md), directory systems, and more. These techniques support more granular control of the AI and can extend from traditional engineering use cases to personal productivity — inbox management, for example.
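As a rough illustration, an agent instruction file might look something like the sketch below. This is a hypothetical example, not any tool's official format; the exact filenames and conventions vary across tools and tutorials.

```markdown
# AGENTS.md (hypothetical sketch)

## Alignment rules
- Before writing any code, restate the goal and list your assumptions.
- Ask clarifying questions until requirements are unambiguous.
- Propose a phased plan and wait for approval before implementing.

## Memory
- Record confirmed decisions in MEMORY.md; never silently contradict them.

## Boundaries
- Do not add dependencies or restructure directories without asking first.
```

The point of a file like this is to front-load the alignment conversation, so the agent's first move is to ask rather than build.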
The debuff
However, without these techniques, it is very easy to inadvertently relinquish control of a project to the AI. If you aren't careful, these tools have an inclination to jump ahead, make assumptions, and build without that critical human alignment. This "instinct" surfaces as unexpected outcomes.
Once this happens, it can be challenging to resolve. When the AI starts guessing at intent, the user, who may lack the requisite knowledge to debug the output, is left watching the machine guess at its own "mistakes," mistakes the AI may be unable to recognize because the underlying misalignment was never stated. At that point, I've found it is often better to start fresh than to keep iterating on top of complex and contradictory context.
This effect doesn't just hurt the project; it eats away at our skill development. The "hour-long debugging session" is often what teaches us how a framework actually works. Agentic software development must prioritize human-in-the-loop (HITL) steps to ensure we stay in the driver's seat, not the passenger seat.
The verdict
From MySpace to custom builds, I've often struggled with the balance between "the dream" and "the time to build it." (More to come on my experience building this specific website.) These tools are a real buff to anyone willing to throw out their old workflows and invest in learning new ones. Earning that buff requires a shift in focus: more time on design, ideation, and QA, and less time on the manual labor of building. The good news is that there is plenty of content to help.
I encourage you to check out Matt's work and see if you find the same benefits in your own process.