Clearer Vision, Not Better Prompts
Prompt engineering is the wrong bottleneck. The real limit is how clearly you can describe what you want. Vision engineering — describing the feeling you're after, not just the features — is what separates AI-assisted work that feels generic from AI-assisted work that feels like you.
The Prompt Is Never the Problem
Someone in a design community posted their prompt last month. It was beautifully structured: color modes specified, typography named, components itemized, grid columns defined. The output was technically flawless. It also looked like every other AI-generated tech site. They couldn't understand why. The prompt was perfect.
The prompt was never the problem.
Vision engineering — describing the emotional register and experience you want rather than the technical features that might produce it — produces AI output that feels personal instead of generic. Most people who struggle with AI are frustrated with their prompts. They workshop them. They add structure. They learn prompt engineering techniques. They spend energy optimizing the output layer when the real problem is at the input layer — they don't know what they want.
I've watched people write technically perfect prompts and get technically correct, completely soulless results. And I've seen people type three sentences of messy, feeling-driven description and get something that actually fits their vision.
The difference isn't the prompt. It's the clarity of the vision behind it.
Two Briefs for the Same Website
Here's a real comparison. Two different ways to brief an AI designer on the same project — a personal portfolio website for a creative technologist:
The Feature Brief:
"Build me a portfolio site with a dark background, purple accent colors, animated text headers, a project grid with hover effects, and an about section with a photo."
The Vision Brief:
"I want the site to feel like you've broken into a back-room lab — dark, slightly dangerous, obsessive. The kind of space where someone builds things that shouldn't exist yet. It should feel like the designer knows things you don't."
Both are valid inputs. One produces a dark purple website. The other produces a character.
The feature brief tells the AI what to put on the canvas. The vision brief tells the AI what you want a visitor to feel when they arrive — and AI is exceptionally good at reverse-engineering features from feelings.
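If you want to test the difference yourself, here's a minimal sketch that sends each brief to a chat model. It assumes the OpenAI Node SDK; the model name and the system prompt are illustrative assumptions, not part of either brief.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const featureBrief =
  "Build me a portfolio site with a dark background, purple accent colors, " +
  "animated text headers, a project grid with hover effects, and an about " +
  "section with a photo.";

const visionBrief =
  "I want the site to feel like you've broken into a back-room lab: dark, " +
  "slightly dangerous, obsessive. The kind of space where someone builds " +
  "things that shouldn't exist yet.";

async function draftHomepage(brief: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // assumption: any capable chat model works here
    messages: [
      {
        role: "system",
        content: "You are a web designer. Reply with a single HTML file.",
      },
      { role: "user", content: brief },
    ],
  });
  return response.choices[0].message.content ?? "";
}

// Run both briefs and compare: similar structure, very different character.
console.log(await draftHomepage(featureBrief));
console.log(await draftHomepage(visionBrief));
```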

Why AI Can Translate Feelings Into Features
AI might look like a literal-minded tool, but it's trained on human-generated content — and humans communicate in feelings, metaphors, and experiences constantly. When you say "like a coffee shop," the AI understands:
- Color: warm tones, amber, cream, dark wood
- Typography: readable, approachable, not cold
- Spacing: comfortable breathing room, not cramped
- Interaction: subtle, unhurried, welcoming
You communicated all of that in four words. The AI decoded all of it. That's the power of vision engineering.
The same logic applies to:
- "Feels like a luxury brand" → high contrast, minimal text, generous white space, quality photography
- "Like a hacker's basement" → monospace fonts, glitchy effects, terminal aesthetics
- "Clean like a medical interface" → high readability, reduced color, strong hierarchy
- "Premium and editorial" → serif typography, editorial rhythm, fewer animations
These aren't feature lists. They're emotional coordinates — and AI navigates to them better than it navigates to itemized specs.
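To see what that decoding looks like once it lands in an actual design, here's a hypothetical sketch of "like a coffee shop" translated into design tokens. The type, the names, and the values are illustrative assumptions, not a real design system:

```typescript
// What "like a coffee shop" might decode to once feeling becomes tokens.
type DesignTokens = {
  palette: string[];
  fontStack: string;
  baseSpacing: string;
  motion: string;
};

const coffeeShop: DesignTokens = {
  palette: ["#6f4e37", "#d2b48c", "#fff8f0"], // warm tones: wood, amber, cream
  fontStack: "'Source Serif 4', Georgia, serif", // readable, approachable, not cold
  baseSpacing: "1.5rem", // comfortable breathing room, not cramped
  motion: "all 400ms ease-in-out", // subtle, unhurried transitions
};

console.log(coffeeShop);
```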
Getting Clearer: The Practice
Vision engineering is the practice of describing what you want users to feel rather than what you want the AI to build. Instead of specifying features, you specify emotional outcomes: the atmosphere, the character, the experience. Models reverse-engineer features from feelings more reliably than they interpret detailed feature lists, because feelings provide calibrated direction while feature lists leave most design decisions ambiguous.
Vision engineering is a skill, not a talent. It gets better with practice. Here's a framework:
1. Start With the Visitor
"When someone arrives on this page, what should they feel in the first 3 seconds?"
Stop thinking about what you're building. Think about what they'll experience. This shift forces you into the emotional register that this approach requires.
2. Give It a Reference Point
"It should feel like [X]" is one of the most effective prompts in creative AI work.
- "Like a Bloomberg Terminal" → dense, precise, professional
- "Like old Tumblr" → chaotic, expressive, human
- "Like Linear" → opinionated minimalism, things just work
A single good reference communicates more than a paragraph of specs.
3. Name What You Don't Want
"Not generic. Not like every other tech startup. Not another purple gradient."
Negative constraints force the AI off the default path — the statistical average of everything it's been trained on. Our own site's color scheme suffered from not saying this early enough.
4. Describe the Anti-Example
"It should feel the opposite of [X]"
If you can name something you hate, the AI can move away from it. "The opposite of LinkedIn's corporate desperation" is a legitimate creative brief.
Before giving any creative brief to AI, ask yourself: Could two different AI outputs both satisfy this description? If yes, your vision isn't specific enough. Keep refining until the answer is no.
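One way to make the framework operational is to assemble your four answers into a single brief before handing it to the AI. The `buildVisionBrief` helper and its field names below are a hypothetical sketch, not an established API:

```typescript
// Hypothetical helper: assembles the four framework steps into one brief.
interface VisionInputs {
  firstThreeSeconds: string; // 1. what a visitor should feel on arrival
  referencePoint: string;    // 2. "it should feel like [X]"
  negatives: string[];       // 3. what you don't want
  antiExample: string;       // 4. what it should feel the opposite of
}

function buildVisionBrief(v: VisionInputs): string {
  return [
    `In the first 3 seconds, a visitor should feel: ${v.firstThreeSeconds}.`,
    `It should feel like ${v.referencePoint}.`,
    `Not ${v.negatives.join(". Not ")}.`,
    `It should feel the opposite of ${v.antiExample}.`,
  ].join("\n");
}

console.log(
  buildVisionBrief({
    firstThreeSeconds:
      "they've walked into a back-room lab they weren't supposed to find",
    referencePoint: "a hacker's basement crossed with a gallery opening",
    negatives: ["generic", "like every other tech startup", "another purple gradient"],
    antiExample: "LinkedIn's corporate desperation",
  })
);
```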

When Details Matter
Vision engineering isn't permission to be vague. It's permission to be emotionally specific instead of technically specific. Some details still need precision:
- Colors you definitely need (brand colors, accessibility requirements)
- Technical constraints (mobile-first, specific framework, performance budget)
- Content requirements (what must appear, what must not)
- Brand elements (logo placement, tone of voice)
The difference: these are constraints, not the vision. They channel the execution. The vision guides the feeling. Both are necessary.
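In practice, that means keeping the two inputs separate instead of blending them into one paragraph — hard constraints in one place, the vision in another. A sketch of that split, where every value is an illustrative assumption:

```typescript
// Constraints channel the execution; the vision guides the feeling.
const constraints: string[] = [
  "Primary brand color: #5b21b6 (non-negotiable)",
  "Mobile-first; WCAG AA contrast throughout",
  "Logo top-left, linking to /",
];

const vision =
  "A back-room lab: dark, slightly dangerous, obsessive. " +
  "It should feel like the designer knows things you don't.";

// One way to hand both to a chat model: constraints as hard rules,
// vision as the creative brief.
const messages = [
  { role: "system", content: `Hard constraints:\n${constraints.join("\n")}` },
  { role: "user", content: vision },
] as const;

console.log(messages);
```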
The Meta-Lesson
Here's what makes vision engineering hard: it requires you to know what you want before you ask for it. That sounds obvious, but most people interact with AI in exploration mode — they're using the AI to discover what they want, not to execute a clear vision.
Exploration mode is fine for early ideation. But at some point, you need to commit to a vision and pursue it with clarity. The implementation planning workflow helps formalize this: the plan makes the vision concrete, section by section.
The AI is never the bottleneck here. You are. But the good news is that the bottleneck is movable — it moves every time you get a little clearer about what you actually want.
Feature brief vs. vision brief — the practical difference
| Brief type | What AI receives | What AI produces |
| --- | --- | --- |
| Feature brief | A list of components and properties | Technically correct, personality-free output |
| Vision brief | An emotional target, a character, a feeling | Output calibrated to how it should be experienced |
The best AI output you've ever seen was produced by someone who knew exactly what they were after before they started typing — not someone who wrote a better prompt.
Clearer vision is the upstream fix. Every prompt, every plan, every iteration runs better when the vision is sharp. Start there, and the rest gets easier.
→ Stop Prompting — Start Planning — once the vision is clear, this is how to turn it into an execution plan that AI can build from.
This post is part of our AI Productivity Stack — the full system for building faster, thinking clearer, and shipping better with AI.