Last month I noticed I couldn't remember how to write a recursive function. Not a hard one. The kind I used to knock out in five minutes. I'd been letting Claude write them for me, and somewhere in the past year, the skill just... left.
I assumed the danger of AI was hallucination. Wrong answers, bad code, made-up facts. The actual danger is the opposite. It gets things right, and you stop thinking.
Your brain on ChatGPT
MIT put EEG sensors on 54 people and had them write essays. One group used ChatGPT. One used Google. One used nothing.
The ChatGPT group's brains were barely firing. The no-tools group lit up like a switchboard.
That alone is interesting. What came next is alarming. In a fourth session, they took ChatGPT away from the first group. Those people still couldn't match the cognitive engagement of the group that had been thinking on their own the whole time. Four months of offloading and their brains had already rewired. The researchers called it "cognitive debt," borrowing the term from software engineering. Quick hacks pile up until the whole system is fragile. Same thing, different substrate.
I recognized myself in that study. Not the measured subjects, the pattern. Reaching for Claude on questions I used to think through. Copying answers without checking whether I understood them. Whole weeks where I'm not sure how much of the thinking was mine.
Three steps, two skipped
Your brain learns in three stages. It encodes: prefrontal cortex and hippocampus fire, build new connections, physically rewire. It retrieves: you pull information back out from memory, and that retrieval is what converts knowledge into skill. It corrects: you get something wrong, your brain fires an error signal, prunes bad connections, strengthens good ones.
Paste a question into ChatGPT, copy the answer. You get a faint encoding from reading the response. No retrieval. No error correction.
Two out of three steps skipped. No learning happened.
You can't get in shape watching someone else lift weights.
The chef and the line cook
A survey of 791 developers found something counterintuitive. Senior developers use more AI-generated code than juniors. But they spend far more time fighting with it. Inspecting output, rewriting sections, debugging, sometimes scrapping the whole thing.
A senior dev who spent years writing code by hand is a head chef who's worked every station. They taste what the AI produces and know immediately when something's off. Needs more salt. Error handling is lazy. Race condition waiting to happen. They use AI to move faster. The thinking is still theirs.
A junior who skipped that apprenticeship is managing a kitchen full of robot cooks without knowing how to cook. Everything looks fine until something catches fire.
Senior vs junior using AI
Identical output. Completely different understanding behind it.
Same technology, opposite results
Harvard built an AI tutor called PS2-PAL for physics students. Three constraints: never give the full answer, make students attempt the problem first, adapt to each student's pace. Students using it learned more than twice as much as those in traditional classes.
Then they gave a different group plain ChatGPT with no guardrails. Those students learned less than students with regular human teachers. They pasted problems, got answers, felt productive, and retained nothing.
Same underlying model. One version doubled learning. The other destroyed it.
The difference was whether the student's brain did the work.
The difficulty is the point
Robert Bjork at UCLA has studied this for decades. He calls it "desirable difficulty." Making learning harder in the short term makes it dramatically more effective long term. Testing yourself instead of re-reading. Spacing practice instead of cramming. Mixing problem types instead of drilling one.
Every one of these techniques requires friction. Friction is exactly what people use AI to eliminate.
Our brains are optimized to conserve energy. Thinking burns real calories. When a tool offers to skip the hard part, your brain takes that deal every time, unless you consciously refuse.
How many people take the stairs when there's an elevator? Now imagine the elevator also whispers the answer to whatever you're working on.
Why I built this
This is why I made Do The Reps. A system prompt you add to any AI assistant.
When it's active, the AI stops handing you answers. It asks what you think first. It walks you through problems one step at a time. When you're wrong, it doesn't correct you, it asks you to figure out why. Strict mode removes all hand-holding. If you try to cheat, paste problems, fish for answers, it calls you out.
One system prompt. But it flips the dynamic from "AI does the thinking" to "AI makes you think harder."
The choice
Every time you open a chat window, you're making a small decision. Use the tool to skip the work, or use it to do better work.
For routine tasks, let the AI handle it. For the things you need to actually understand, the skills you want to keep, the knowledge that matters, do the reps.
Nobody else can do them for you.
Written with ❤️ by a human (still)
