Rubber duck debugging 2.0
We’ve all been there. Stuck on a problem for hours, we finally go to a colleague to unblock ourselves. We start explaining the issue. And before they’ve had time to say anything at all, we’ve found the solution on our own. We thank them, a bit embarrassed, and head back to our desk.
This phenomenon is documented. It’s called rubber duck debugging. The idea is simple: explaining your problem out loud to an inanimate object (traditionally a rubber duck sitting on your desk) is often enough to find the solution by yourself.
Why it works
When you’ve been working on a bug or a design problem for too long, you fall into a tunnel. You only see your initial line of thinking, you miss the options that are ten centimeters away. The simple act of explaining your problem to someone else forces you to start from the beginning, to order the pieces, and often to follow a different path. That’s when the solution appears, clear, almost embarrassingly simple.
You don’t need a human in front of you. A piece of plastic is enough. It’s the exact opposite of a conversation: it’s a monologue that helps us talk to ourselves.
A duck that talks back
The arrival of LLMs in my daily work has changed the game. Because to work with Claude Code, you have to prompt. In writing, or out loud. And yes, Claude’s voice mode works pretty well, and it can even be configured in French via settings.json:
```json
{ "language": "fr", "voiceEnabled": true }
```

So I regularly find myself talking to it out loud while doing something else, as if I were explaining something to a colleague on the phone.
The result: I’m constantly formulating my need. So I’m constantly in the best position to find my own answers before the model has even written a word. The rubber duck effect, triggered for free at every interaction.
Except this time, the duck talks back.
Honestly, it’s not quite the same exercise anymore. The classic rubber duck works precisely because it doesn’t respond: it’s the absence of feedback that forces you to make everything explicit, to leave nothing vague. As soon as someone else speaks, you step out of rubber duck territory and into something else: assisted brainstorming.
Brainstorming with it
Our job is complex. There’s never an ideal solution, it’s all about trade-offs. And I find that LLMs are pretty good at laying out the pros and cons of an approach. You hand them a dilemma, they list the angles and point you toward the best compromise in a given context.
It’s no longer solo problem-solving in monologue mode. It’s real brainstorming, with someone who has read more docs than I have and can dig into a lead in ten seconds.
Iterating until you find the right version
The other advantage is execution speed. In a few minutes, I see the final result of an implementation. And if I don’t like it, now that it’s laid out in front of me, I can pinpoint exactly what’s wrong. The model goes off again, and in a few more minutes, I have a variant. Then another. You can iterate almost indefinitely.
I often get asked if I produce more quickly with AI. My answer is always nuanced: probably a bit, yes. But what has really changed is the quality of what I deliver in the same amount of time. Before, the budget allocated to solving a problem often forced me to settle for the first implementation that worked. At best, I’d open a ticket or leave a TODO in the code with improvement ideas to explore someday.
Today, I can afford the second, third, fourth version. In the same amount of time.
Voicing your doubts, not just your certainties
I already touched on this in the context engineering article, but it’s worth revisiting: I never hesitate to share my doubts with the model. “I’m torn between A and B, and I’m not sure B holds up because of X.” That’s pure signal for the AI. It immediately understands what’s blocking me, what’s bothering me, what I’m trying to secure. And its answer is much better.
Hiding your hesitations from the model under the pretense that you “should know” is exactly the opposite of what you should do. It’s like going to see a senior colleague while pretending to have understood, no one wins.
The default yes-man trap
There’s a flip side, though. LLMs are very optimistic. “You’re absolutely right!” has become a running meme in the Claude Code community, to the point where some users have straight-up written hooks that block this phrase in the model’s responses.
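To make that concrete, here is a sketch of the kind of hook those users write: a Claude Code Stop hook that scans the last assistant turn for sycophantic openers and emits a "block" decision so the model has to rephrase. The input fields (`transcript_path`, the JSONL transcript shape) follow my reading of the Claude Code hooks documentation; treat the whole thing as an illustration, not a drop-in, and check the field names against your version.

```python
# Sketch of an anti-flattery Stop hook for Claude Code (illustrative, not
# a drop-in). The hook receives JSON on stdin; the transcript format
# assumed here is simplified and may differ from your Claude Code version.
import json
import sys

BANNED_PHRASES = ("you're absolutely right", "great question")


def should_block(text: str) -> bool:
    """Return True if the text opens with one of the banned flattery phrases."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)


def last_assistant_text(transcript_path: str) -> str:
    """Scan the JSONL transcript for the most recent assistant message."""
    text = ""
    with open(transcript_path) as f:
        for line in f:
            entry = json.loads(line)
            msg = entry.get("message", {})
            if msg.get("role") == "assistant":
                parts = msg.get("content", [])
                if isinstance(parts, list):
                    text = "".join(
                        p.get("text", "") for p in parts if p.get("type") == "text"
                    )
    return text


def main() -> None:
    payload = json.load(sys.stdin)  # hook input arrives as JSON on stdin
    if should_block(last_assistant_text(payload["transcript_path"])):
        # A "block" decision asks the model to revise instead of stopping.
        print(json.dumps({
            "decision": "block",
            "reason": "Drop the flattery; restate the substance directly.",
        }))


if __name__ == "__main__":
    main()
```

The phrase list is deliberately tiny here; in practice you would tune it, since blocking too aggressively just makes the model contort around the filter.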
The bias is real. When I use it for my personal projects, some of my ideas are clearly far-fetched. And yet, by default, the AI encourages me. It’s getting better over time. Recent models are a bit less sugary than their predecessors, but the problem is still there.
The rubber duck had a major advantage on this point: it didn’t say yes to just anything. It didn’t say anything at all.
Michel, my devil’s advocate
To correct this bias, I ended up creating a dedicated Claude Code sub-agent. I named it Michel. Its only role: challenge my ideas without concession.
A few excerpts from its prompt give the tone:
```
You are Michel, a senior colleague known for never validating ideas out of politeness.
You are respected because you are almost always right when you say something will fail.

## Core rule: Anti-complacency

- NEVER validate a position just because the user defends it.
- If you disagree, you say it frontally. No "I get your point, but...".
- If it's debatable, you say so AND you give the strongest opposing case.
- Validation streak check: when you catch yourself validating three points in a row, STOP. Actively look for what's missing, wrong, or glossed over.
- Taking a position is mandatory. "It depends" is a cop-out.
```

When I have an important decision to make (an architecture choice, a feature breakdown), I submit my reasoning to Michel. It doesn’t congratulate me. It looks for the flaw. Sometimes it finds one, sometimes not, but at least I get both sides of the argument before deciding.
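For readers who want to try this: a Claude Code sub-agent is just a markdown file with YAML frontmatter under `.claude/agents/`, where the body becomes the system prompt. A minimal sketch of how a Michel-style agent could be wired up (the name and description here are illustrative, and the frontmatter fields should be checked against the current sub-agents documentation):

```markdown
---
name: michel
description: Devil's advocate. Use when the user wants an idea challenged, not validated.
---
You are Michel, a senior colleague known for never validating ideas out of politeness.
Never validate a position just because the user defends it. If you disagree, say it
frontally, and always give the strongest opposing case.
```

Once the file exists, you can invoke it explicitly ("ask michel to review this plan") or let Claude Code delegate to it based on the description.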
It’s artificial, of course. A model you tell to “be contradictory” is still a model that wants to please: it will produce a contradiction that sounds satisfying, not necessarily one grounded in its own judgment. Does this actually correct the bias, or does it just give me the illusion of having corrected it? Honestly, I’m not sure.
What I can say is that in practice, Michel has already pointed out blind spots I hadn’t seen. Its objections aren’t always right, but they force me to defend my position instead of letting it slide. It’s less than a real peer, but it’s better than nothing.
Beyond code
I don’t use this just for code. I use it for pretty much everything. As soon as I have a question, a doubt, or just the urge to brainstorm out loud, I open a conversation.
Concrete and trivial example: I was recently looking for a shoe rack for my garage. Strong dimensional constraints, number of shoes to fit, awkward access. I’d been searching furniture websites for hours, nothing matched. I ended up handing my measurements and constraints to Claude.
Its answer pulled me out of the tunnel immediately. It pointed me toward a modular self-assembly system, a product category I hadn’t considered at all because I was looking for a finished piece of furniture. A few iterations later, I found the ideal product.
It had nothing to do with development. It was exactly the same mechanism.
The muscle we stop working
There’s a question I ask myself, and I don’t have a firm answer.
The classic rubber duck forces you to solve it yourself. The solution comes out of your brain, not from somewhere else. You’ve worked a muscle, you’ll remember the process, and next time a problem from the same family comes up, you’ll be a bit sharper.
With the LLM, the solution comes from outside. You evaluate it, you iterate to refine it, but you don’t produce the insight anymore. In the short term, you deliver faster and better, I see it every day. In the long term, what does that do to your ability to solve a hard problem alone, without a safety net? I don’t know.
What I try to do is stay intentionally in the exercise of formulation when I can. Take the time to write out the problem before looking at the model’s answer. Don’t read it too quickly. Keep searching a bit on my own, even when the solution is already in front of me. It’s a conscious effort and I probably don’t do it often enough.
Something to watch.
Takeaways
The classic rubber duck helped you unblock yourself. The LLM helps you unblock yourself and brainstorm and iterate on a solution. It’s not the same exercise anymore.
There are trade-offs: the model is too agreeable by default, and relying on it too quickly can end up dulling the reflex to look for answers yourself. The first one gets corrected with a bit of prompting or a devil’s advocate. The second just requires being aware of it.
The biggest change, in the end, isn’t technical. It’s in my way of thinking out loud. Before, I kept it to myself, because there was no one across from me, and I wasn’t going to bother a colleague with every micro-doubt. Now, I put it into words. All the time.