I’ve been using AI a lot lately.
For content. For software development. For all sorts of tasks that used to take me a lot longer.
It has improved drastically over the last few months, especially for software development. It feels borderline magical. It can move fast, fill in gaps, suggest patterns, write boilerplate, refactor old code, and help get me unstuck far quicker than I could on my own.
But there’s a downside if I’m not extremely vigilant, which I haven't always been.
AI is making me lazy and incompetent.
That sounds dramatic, but it's important to notice what's happening.
Not because AI is bad. Not because it’s doing anything that shouldn't be expected. But because I’ve started changing the way I work without changing the way I think.
I’m mostly talking about this through the lens of being a software business owner and software developer, because that’s where I’m feeling it most clearly right now. But I think the same principles apply well beyond software. Coaches using AI for programming, content, communication, planning or business admin are going to run into many of the same traps.
The Real Problem Isn’t AI
AI does a great job.
Just not a perfect one.
When it writes code, it doesn’t always have all the context it needs. The same thing applies when a coach uses AI to draft a training plan, write athlete messages or create content. AI can produce something solid, but it may miss the context that matters most: the athlete’s history, the coach’s philosophy, the nuance of the relationship or the reason a certain decision was made before. Sometimes it can’t possibly have all the context, because some of it only exists in our heads or in systems the AI does not have access to.
The little edge cases. The assumptions. The business rules I forgot to explain. The reason one ugly-looking piece of code exists because six months ago it solved a very specific problem.
So AI fills in the gaps.
Sometimes it fills them in well. Sometimes it guesses wrong.
That’s normal; people do that too.
The issue is that when I write code myself, I move more slowly. And that slower pace actually helps understanding.
As I write, I naturally build up a mental checklist. A coach probably does the same thing when building a program manually or writing feedback to an athlete. Going step by step creates space to notice risks, inconsistencies and things that need double checking. I notice the risky parts. I can feel where things might break. I start forming a library of tests I need to run as I go. That thinking is built into the act of writing.
When AI writes the code, it happens much faster.
I can still see the code. I can compare it against the previous version. I can review the difference. I can understand what changed.
But sometimes I miss things.
And if those things don’t get tested, bugs get through.
That’s happened to me a few times recently.
Partly overconfidence. Partly laziness. Partly incompetence.
None of that is comfortable to admit, but it’s true.
Speed Can Hide Bad Habits
The danger with AI is not just that it makes mistakes.
The danger is that it lets me skip parts of my own process without realizing what I’ve lost.
It compresses work so quickly that it becomes easier to approve things I haven’t properly interrogated. In coaching, that might look like approving a week of programming, a nutrition guide or a piece of marketing content because it looks good at a glance, without fully checking whether it actually fits the athlete, the business or the brand.
It creates the illusion that because I understand roughly what happened, I’ve done enough.
Often I haven’t.
That’s where the laziness creeps in.
Not the obvious kind. Not “I can’t be bothered.” More the subtle kind: trusting that something is probably fine because the output looks good and arrived quickly.
And over time, if I keep doing that, incompetence follows.
Because competence isn’t just knowing what good code looks like after the fact. It’s developing the habits that catch problems before they ship. It’s building judgment. It’s noticing what’s missing. It’s understanding why something works, not just that it appears to.
If I outsource too much of that thinking, I shouldn’t be surprised when those muscles get weaker.
This Is Not Terminal
The good news is I don’t think this is some irreversible decline.
I think it’s just the normal process of adapting to a new technology.
Every big productivity shift changes the workflow around it. The people who benefit most are usually not the ones who simply use the new tool the most. They’re the ones who redesign their process to account for its strengths and weaknesses.
That’s the part I’m still learning.
AI clearly makes me more productive. I’m not interested in pretending otherwise. The upside is too large.
But if I want the upside without the hidden cost, I need to change how I work.
I need better review habits. Better testing habits. Better prompts. Better constraints. Better checkpoints.
In other words, I need a workflow that assumes AI will help me a lot and also make honest mistakes.
Because it will.
AI Is Starting To Feel Like An Employee
More and more, AI feels less like a tool and more like an actual team member.
It has more agency now. It can go off and do work on its own. It can make progress independently. It can hit roadblocks, make judgment calls, choose a direction, and keep moving without me. That’s true whether it’s writing software, drafting blog posts, building workflows or helping a coach create onboarding emails, training resources or community content.
That’s useful.
But it also means we probably need to start managing it the same way we’d manage a real employee.
Not with paranoia. Not with micromanagement.
With healthy oversight.
In good software teams, nobody is above review. Everyone gets their work checked by someone else. That’s not an insult to their ability. It’s not a lack of trust. It’s just an acknowledgement of reality: everyone makes mistakes, no matter how smart or experienced they are.
Code review exists because humans are fallible.
That same principle applies here.
If AI is doing meaningful work, then that work needs review.
Not because AI is uniquely unreliable. But because all contributors are.
Review Is Not The Enemy Of Speed
One of the traps I’ve fallen into is treating review as a cost. As something that slows down the benefits of AI.
But that’s the wrong way to think about it.
Review is what makes speed sustainable.
Without review, AI can absolutely help me move faster in the short term. It can also help me create bugs faster, stack up technical debt faster, and become less sharp over time.
With review, it becomes much more powerful.
And there’s another benefit too: learning.
Some of the best things I’ve learned in software development came from feedback on my code. Someone pointing out a cleaner pattern. A better structure. A hidden edge case. A simpler way to express an idea.
And I’ve learned just as much reviewing other people’s code. Spotting something clever. Seeing a pattern I’d never considered. Understanding a tradeoff I hadn’t appreciated before.
That’s one more reason to treat AI like part of the team.
Not just as something that produces output, but as something whose output can be reviewed, discussed, improved, and learned from.
The Mindset Shift I Need
I don’t think the answer is to use less AI.
I think the answer is to use it more responsibly.
For me, that means a few things.
It means not confusing generated output with completed work.
It means remembering that fast code generation does not remove the need for slow thinking.
It means being explicit about context, because AI cannot read my mind.
It means checking assumptions, not just syntax.
It means building review and testing into the workflow, instead of treating them as optional if the code “looks about right.”
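To make “checking assumptions, not just syntax” concrete, here’s a small, entirely hypothetical sketch. The function name, the coaching scenario, and the original bug are all invented for illustration; the point is that a generated helper can be syntactically flawless while carrying an unstated assumption that only a deliberate test catches.

```python
def average_duration(durations_minutes):
    """Mean session duration in minutes; an empty week averages to 0.

    A first AI draft of this (hypothetical) helper divided by
    len(durations_minutes) unconditionally. The syntax was fine and the
    diff looked clean, but it crashed on an athlete's rest week. The
    guard below is the kind of fix that reviewing assumptions, rather
    than skimming syntax, surfaces before it ships.
    """
    if not durations_minutes:
        return 0.0
    return sum(durations_minutes) / len(durations_minutes)


# The test encodes the assumption explicitly, so it can't be skipped
# just because the generated code "looks about right":
assert average_duration([]) == 0.0       # rest week: no sessions
assert average_duration([30, 60]) == 45.0
```

It’s a toy, but it captures the habit: every assumption the AI might have guessed at becomes a test you actually run, not a thing you trust at a glance.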
Most of all, it means accepting that this is my responsibility.
AI didn’t make me careless. It exposed where I was willing to be careless because the tool was so useful.
That’s a harder thing to admit, but probably a more useful one.
Where I’ve Landed
So yes, AI made me lazy and incompetent.
But only because I’ve been using it with an outdated workflow.
I’m still applying old habits to a new kind of collaborator.
And that’s what AI increasingly is: a collaborator.
A very fast one. A very capable one. A very useful one.
But still one that makes mistakes.
So the path forward isn’t fear. And it isn’t blind trust either.
It’s better process.
Treat AI like a member of the team. Give it direction. Let it do work. Expect it to make honest mistakes. Review what it produces thoroughly. Iterate based on feedback. Whether that’s code, coach education content, athlete communication or business systems, the principle is the same.
That’s not micromanagement; that’s just good management.
And if we all get that right, AI won’t make us lazy and incompetent.
It’ll make us better.