I Made 12 Markdown Files That Fixed How My AI Thinks. Then I Checked If the Output Was Actually Right.
Source: DEV Community
You know that thing where you ask an AI to review your code and it finds three real problems... then spends the next 400 words telling you what a great job you did? Or when you ask it to pick between two approaches and it gives you "both have merits" like a politician dodging a question?

I kept running into this. Not because the AI was dumb — it clearly knew enough to give me a real answer. It just... wouldn't. Every time I asked for honest feedback, I got a compliment sandwich. Every time I asked it to cut scope, it added three more features "just in case."

So I started experimenting. What if the problem isn't what it knows, but how it behaves? I tried the obvious stuff first: "Be brutally honest." "Don't hold back." "Think critically." None of it worked for more than a few messages before the model slid back into its comfort zone — agreeable, hedging, pattern-matching from the most popular answer instead of actually thinking about my specific situation.

Then I tried something different.