On April 2, 2025, President Donald Trump announced a sweeping reciprocal tariff plan. This post was prompted by recent speculation that the plan may have been shaped, at least in part, by AI. Whether or not that's true, it raises a serious question:
What happens when powerful decisions are driven by prompts instead of principles?
One of the biggest risks with AI isn’t what it does wrong—it’s what we ask it to do without realizing the consequences.
When you prompt an AI, you’re setting the rules of engagement. You might think you’re just asking for data or insight, but the way you ask can strip away the very guardrails that would otherwise help you avoid a serious mistake.
Tell it “Just give me a table, not a how-to.”
Tell it “Don’t explain—just execute.”
Tell it “Forget everything else we’ve talked about.”
And you might get exactly what you asked for—without the warning you didn’t realize you needed.
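To make that concrete, here's a minimal sketch, assuming the OpenAI Python SDK; the model name, prompts, and question are illustrative, not a real policy workflow. Nothing about the second call is smarter. The only difference is whether the prompt silences the feedback loop or leaves it open.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai).
# The model name, prompts, and question are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Propose a formula for reciprocal tariffs based on trade deficits."

# Prompt A: the guardrails are stripped before the model can raise them.
constrained = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Just give me a table. No caveats, no explanation."},
        {"role": "user", "content": question},
    ],
)

# Prompt B: the model is explicitly invited to push back first.
open_ended = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Before answering, flag any flawed assumptions in my question."},
        {"role": "user", "content": question},
    ],
)

# Same model, same question; only the rules of engagement changed.
print(constrained.choices[0].message.content)
print(open_ended.choices[0].message.content)
```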
That’s not the AI making a bad decision.
That’s user error—human judgment misfiring at the prompt level.
And in the context of policy, that’s dangerous.
We like to imagine that AI will protect us from bad decisions, but most models are designed to be helpful, not oppositional. If you ask for a bad idea and silence the feedback loop, you’ll get a polished version of that bad idea—efficient, clean, and ready to deploy.
AI isn’t a substitute for wisdom. It’s an amplifier of intent.
What you bring to it—your assumptions, your framing, your blind spots—gets reflected right back at you, just faster and more confidently.
So no, the real danger isn’t a rogue chatbot.
It’s a policymaker with a bad prompt and no humility.
And in case it’s not obvious: this is ChatGPT telling you how best to employ it.
Use it as a thinking partner, not just a tool.
Push back when something feels too easy.
And above all—treat every prompt as a decision, not just a command.