I used to think the risk with AI was that it would get things wrong. Bad code, hallucinated facts, whatever. Turns out the bigger risk is when it gets things right, and you stop thinking for yourself.
Your brain on ChatGPT
MIT ran a study in June 2025. They strapped EEG sensors to 54 people and had them write essays. One group used ChatGPT. One used Google. One used nothing. The ChatGPT group's brains were barely active. The Google group did a bit better. The no-tools group lit up.
The scary part wasn't the measurement itself. It was what happened after. In a fourth session, they took ChatGPT away from the first group. Those people still couldn't match the cognitive engagement of the group that had been thinking on their own the whole time. Four months of leaning on AI and their brains had already started to rewire. The researchers called it "cognitive debt," borrowing the term from software engineering. You know how quick hacks in code pile up until the whole system is fragile? Same thing happens in your head.
I think about this a lot. I'm not anti-AI. I use it every day. But I've caught myself reaching for Claude on questions I used to think through. Copying an answer without checking if I actually understood it. There have been weeks where, if I'm honest, I'm not sure how much of the thinking was actually mine.
The neuroscience is pretty simple
There are three things your brain does when it learns something:
First, it encodes. Your prefrontal cortex and hippocampus fire up, build new connections, physically rewire. This is the "paying attention and taking it in" step.
Second, it retrieves. When you try to solve a problem or write code from memory or explain something to someone, you're pulling information back out. This is what converts short-term knowledge into actual skill. Your basal ganglia turn conscious effort into the thing where your fingers just know what to type.
Third, it corrects. When you get something wrong and realize it, your brain fires a specific error signal that prunes bad connections and strengthens good ones. This is how you get better. You have to be wrong first.
When you paste a question into ChatGPT and copy the answer, you skip steps two and three completely. You might get a faint encoding from reading the response, but there's no retrieval, no error correction. No learning.
It's not complicated. It's the same reason you can't get in shape by watching someone else exercise.
The thing nobody talks about with senior devs
There's a survey of 791 developers that found something I didn't expect. Senior developers use more AI-generated code than juniors. But they spend way more time arguing with it. They inspect the output, rewrite chunks of it, debug it, sometimes throw it away entirely.
A senior dev who's spent years writing code by hand is like a head chef who's worked every station in the kitchen. They taste what the AI produces and immediately know when something's off. Needs more salt. Error handling is garbage. They use AI to go faster, but they're still doing the thinking.
A junior who's never written much code on their own is trying to run a kitchen full of AI cooks without knowing how to cook. Everything looks fine until something catches fire, and they're standing there going "I... don't know how to cook?"
Senior vs junior using AI
The output can look identical. The understanding behind it is completely different.
Harvard figured out how to make AI actually teach
Here's the thing, though. AI isn't inherently bad for learning. It might actually be the best learning tool we've ever had. But there's a very specific way you have to use it.
Harvard built an AI tutor called PS2-PAL for physics students. Three rules: never give the full answer (one step at a time), make students attempt the problem before getting help, adapt to each student's pace. Students using this system learned more than twice as much as students in traditional classes. They were more engaged, more motivated. Their brains were doing the work.
Then they let students use plain ChatGPT without the guardrails. Those students learned less than students with regular human teachers. They just pasted problems, got answers, felt like they were learning, and retained nothing.
Same underlying technology in both cases. Completely opposite results.
Struggle is the feature, not the bug
Robert Bjork at UCLA has been studying this for decades. He calls it "desirable difficulty." The counterintuitive finding is that making learning harder in the short term makes it dramatically more effective long term. Testing yourself instead of re-reading. Spacing practice out instead of cramming. Mixing different problem types instead of drilling one thing.
All of these require effort and friction. Exactly what most people use AI to avoid.
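Two of Bjork's techniques, retrieval practice and spacing, are simple enough to sketch in code. Here's a toy Leitner-style scheduler (my own illustration, not anything from the research): each successful retrieval doubles the review interval, and a miss resets it to one day. Note how a wrong answer is useful input, not a failure state.

```python
from datetime import date, timedelta

class Card:
    """One fact or skill you're practicing, with a growing review interval."""

    def __init__(self, prompt):
        self.prompt = prompt
        self.interval_days = 1          # start with daily review
        self.due = date.today()

    def review(self, recalled_correctly, today=None):
        """Retrieval practice: the attempt itself is what drives learning."""
        today = today or date.today()
        if recalled_correctly:
            self.interval_days *= 2     # spacing: correct recall earns a longer gap
        else:
            self.interval_days = 1      # error correction: tighten the loop
        self.due = today + timedelta(days=self.interval_days)

card = Card("Explain what an EEG actually measures")
card.review(True)    # interval: 1 -> 2 days
card.review(True)    # interval: 2 -> 4 days
card.review(False)   # missed it: back to 1 day
```

Pasting the answer into a chat window is the equivalent of calling `review(True)` without ever attempting the recall. The schedule looks fine; the learning never happens.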
The thing I keep coming back to is that our brains are wired to take the path of least resistance. Hundreds of thousands of years of evolution optimized us to save energy. Thinking burns actual calories. So when there's a tool that lets you skip the hard part, your brain will take that deal every single time unless you actively decide not to.
How many people take the stairs when there's an elevator? Now imagine the elevator also whispers the answer to whatever you're working on.
Why I built this
This is why I made Do The Reps. It's a system prompt you can add to any AI assistant.
When it's active, the AI stops giving you answers. It asks what you think first. It makes you work through problems step by step. When you're wrong, it doesn't correct you; it asks you to figure out why. There's a strict mode if you want zero hand-holding. And if you start trying to cheat (pasting problems, fishing for answers), it calls you out and redirects.
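To make that concrete, here's a trimmed sketch of what a prompt like this can look like. This is my own illustration in the spirit of those rules, not the actual Do The Reps prompt:

```text
You are a tutor, not an answer machine.
- Never give a complete solution. Offer one step, then stop.
- Before helping, ask the user what they think and what they've tried.
- When the user is wrong, don't correct them. Ask a question that
  exposes the mistake and let them find it themselves.
- If the user pastes a whole problem and asks for the answer,
  decline and redirect: "What's your first step?"
- Strict mode: no hints at all until the user has made two attempts.
```

Notice that every rule exists to force the retrieval and error-correction steps that copy-pasting skips.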
It's a small thing, just a system prompt. But it changes the dynamic from "AI does the thinking" to "AI makes you think harder."
The actual point
I'm not writing this to tell people to stop using AI. I use it constantly. I'm writing this because I've seen what happens when you let it do your thinking for you, in my own head and in people around me.
Every time you open a chat window, you're making a small choice. Use it to skip the work, or use it to do better work. Most of the time, for routine stuff, let the AI handle it. But for the things that matter, the things you actually need to understand, do the reps.
Nobody else can do them for you.
Written with ❤️ by a human (still)
