#4 Why I Don’t Let AI Write My Code (And You Shouldn’t Either)

A glowing human brain and an advanced AI robot face off, separated by a digital energy barrier.

Anytime I meet someone new and we get to talking about my project, at some point they inevitably say something like, "Dude, just write it with ChatGPT!"

They’ll then excitedly share an example from their own projects—how AI wrote all the code for them in no time, and how they’re completely happy with the results.

What follows is my usual 10-minute explanation of why I don’t let AI write any of my code. Since this conversation happens so often, I figured I’d just make a whole blog post about the topic.

To be clear, I do think AI is an amazing tool that can significantly boost productivity in many areas. I just don’t believe it’s a good tool for writing code for you.

Code That Works vs. Code That Is Optimized

Image: a programmer sits between a chaotic tangle of wires (messy but functional code) and a sleek digital blueprint (optimized code).
Quick and dirty comes with a ton of technical debt you eventually need to pay off.

For many VBA programmers, our first experience with code came from clicking the "Record Macro" button in Excel. The macro recorder generated code automatically, and it worked like magic—until something changed. A row shift, a new data structure, or an unexpected variation, and suddenly, the macro broke or had to be re-recorded.

That’s because recorded macros rely on hardcoded values—fixed ranges, sheet names, and object references. Once we started writing VBA manually, we learned to make our code dynamic, using variables, loops, and logic structures. This shift transformed our macros from fragile, one-time scripts into flexible, reusable solutions.
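To make the contrast concrete, here's a minimal sketch (the sheet name, ranges, and procedure names are made up for illustration): a recorded-style macro with everything hardcoded, next to a hand-written version that finds the data dynamically.

```vba
' Recorded-style macro: hardcoded sheet name and fixed range.
' Breaks the moment rows are added or the sheet is renamed.
Sub Macro1()
    Sheets("Sheet1").Select
    Range("B2:B50").Select
    Selection.Copy
    Range("C2").Select
    ActiveSheet.Paste
End Sub

' Hand-written equivalent: locates the data at run time,
' so it keeps working as the worksheet changes.
Sub CopyColumnDynamic(ws As Worksheet)
    Dim lastRow As Long
    ' Find the last used row in column B instead of hardcoding 50
    lastRow = ws.Cells(ws.Rows.Count, "B").End(xlUp).Row
    ws.Range("B2:B" & lastRow).Copy Destination:=ws.Range("C2")
End Sub
```

The second version also avoids `Select`/`Selection` entirely, working on ranges directly—one of the first habits that separates hand-written VBA from recorder output.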

Do you see where I’m going with this? Just because code works doesn’t mean it’s good code.

The case with AI-generated code is similar, but the flaws are harder to spot. AI can generate code that runs without errors, but it often lacks efficiency, flexibility, and maintainability. At first, it might seem fine, but as your project grows, AI-generated code can become a mess—full of redundancies, poor structure, and hardcoded values that make long-term use a nightmare.

In contrast, optimized code is clean, adaptable, and built for long-term use. It’s efficient, scales well, and follows good design principles—things AI struggles to achieve without human oversight.

That being said, I’ll concede that not every coding task demands perfection. Sometimes you just need a quick-and-dirty prototype to test an idea or throw together a proof of concept as fast as possible. In those cases, yes, AI can save you a ton of time, and I’d actually recommend it. If all you need is something that works for a one-off demo or a short-term fix, AI’s speed can be a lifesaver—just don’t expect that code to hold up under scrutiny or scale gracefully as your project grows.

AI Code Is a Black Box

Image: a glowing black cube surrounded by unreadable code, with a puzzled programmer unable to see inside.
Do you understand how your AI-generated code actually works?

One of the biggest problems with AI-generated code is that you have no idea how it actually works.

When you write code yourself, you understand every function, every variable, and every decision made. You can debug, modify, and optimize it as needed. Sure, if you don’t document your work, you might still scratch your head a little after a six-month break, but you’d still have a far better grasp of your own code than something AI wrote for you.

Relying on code you don’t fully understand breeds blind trust in AI-generated solutions. Just because the code works doesn’t mean it’s efficient, secure, or even doing exactly what you intended. Hidden within it could be performance bottlenecks, security flaws, or unnecessary complexity. And the moment something breaks, you’ll find yourself troubleshooting a tangled mess of AI logic—without ever fully understanding how it got there.

AI makes you Dumb

Image: two brains side by side—one colorful and full of gears (active thinking), one dull and gray with circuit patterns (AI dependency).
Use it or lose it

Beyond the immediate risks of black-box code, there’s a deeper issue: how AI impacts your own abilities as a coder. The more you blindly trust AI, the more you come to rely on it, shifting more and more responsibility away from yourself. And here’s the problem: use it or lose it applies just as much to coding and critical thinking as it does to any other skill.

I worry that heavy reliance on AI will leave newer coders without foundational skills: designing efficient algorithms, structuring applications, or even debugging without AI holding their hand.

At the end of the day, coding isn’t just about typing—it’s about thinking. The hardest part of programming isn’t writing the code; it’s the planning phase:

  • What needs to be done?
  • What’s the best way to do it?
  • Is this even necessary, or is there a better approach?

At the core, programming is about problem-solving, logic, and decision-making—skills that erode when you let AI do all the thinking for you.

AI-generated code is fine for basic tasks like automation, but once you move into designing complex applications or frameworks, it quickly falls apart. The real challenge in coding isn’t typing—it’s planning, structuring, and optimizing. The more you let AI handle that thinking for you, the more you lose the ability to do it yourself.

How I Use AI with ACED BI

I don’t use AI to write any of my code. Not a single line. But that doesn’t mean I ignore AI entirely—it just means I use it the right way: as a coding buddy, not a replacement for my own skills.

Each day, while working on ACED BI, I keep ChatGPT in the loop, explaining what I’m doing, what I’ll do next, and whether it has any opinions on my approach. Instead of asking it to write code, I use it to bounce ideas off of—especially when I have multiple ways to solve a problem.

For example:

  • If I have two different ways to structure an algorithm, I’ll ask AI which one has the smallest memory footprint and would likely be the most performant.
  • If I’m considering refactoring a part of my code, I’ll check whether AI sees potential pitfalls or better alternatives.
  • Sometimes, just explaining my approach to AI helps me spot flaws or rethink my strategy before I even get its response.

AI doesn’t think for me—it just challenges my assumptions, much like having a second set of eyes on my work. Used this way, AI is a valuable tool, but it never replaces deep understanding, experience, and careful decision-making.

Closing Thoughts

AI has its place in coding, but handing over full control is a mistake. Writing code isn’t just about getting it to work—it’s about understanding, optimizing, and maintaining it. That’s why I treat AI as a second opinion, not a replacement for thinking.

Next week, I’ll share how I document my code for ACED BI, ensuring I can always pick up where I left off—even months later—while keeping the project scalable for future growth.