Design Shapes AI
An essay by Austin Knight
Summary
AI tools like Cursor, V0, Lovable, and Bolt are making it easier than ever for designers to create functional prototypes and ship live code. While these tools excel at producing good experiences by default, great design requires a designer’s taste, perspective, and craft. AI will get you to a strong baseline quickly. The emerging opportunity is in knowing how to push past that baseline to create something truly distinctive. This unlocks the designer's longstanding dream: to focus purely on craft. Good is automated; great is designed. And the designer is the difference between the two.
Designers today have access to AI tools that can write production-ready code, generate polished interfaces, and scaffold entire apps in minutes. Cursor can turn natural-language prompts into working React components. V0 can output beautiful, functional UIs in seconds, built on Tailwind CSS and shadcn/ui. Lovable can ship full-stack applications without touching a local dev environment. Bolt can generate responsive web apps, deploy them instantly, and even fix its own bugs.
They’re fast, capable, and getting better every month. Every designer should be experimenting with them. Used well, they can remove much of the mechanical friction in early design and prototyping work, allowing you to focus more time on creative direction and decision-making. The role of the designer changes from “builder of the baseline” to “shaper of the exceptional.”
Turning Good into Great
The challenge with these tools, and the gripe you'll hear most often from designers, is that they produce a lot of sameness. This is because most AI-generated interfaces start from a similar baseline and are built using the same design frameworks, like Tailwind CSS and shadcn/ui. These frameworks produce reliable, accessible, and visually safe layouts. But they also tend to output the same formula: a navbar, a hero section, a centered headline, a few neatly separated cards in a grid, and a clean footer. Everything is good, nothing is great.
It’s the same phenomenon we saw with Twitter Bootstrap in the early 2010s. Bootstrap made it fast to build something usable, but it also created a wave of sameness across the web. The output was good by default. But great required pushing past the default.
The most effective way to push AI-generated work from good to great is to give the AI a rich starting point and then iterate with intention. Too often, people start by entering some generic text in the open-ended prompt box that has become such a common pattern in AI tools. They submit the prompt, get a generic output, and then try to steer the AI toward good design from there. A more effective approach is to start with a good design reference as your initial input. Not text; something visual. That could be a CSS theme, a design token set, or a simple Figma component library. Feed that to the AI first, and then prompt it. The AI’s output will only be as strong as the input you give it.
Workflows for Pushing Beyond the Baseline
So, that all sounds nice in theory, but how does it work in practice? Here are a few tried-and-true workflows I've found for moving from “default AI output” to something more distinctive. We'll start with the easiest and work up to the more advanced.
Break the Grid (Easiest)
Most AI-generated UI falls into a familiar, safe structure: stacked sections, neatly contained grids, and isolated blocks with generous padding. The upside is clarity. The downside is sameness.
Breaking the grid does not mean throwing out structure. It means prompting the AI to introduce moments where elements interact, overlap, or flow into each other. Let imagery bleed into adjacent sections, overlap cards with a header image, merge related modules into a single interactive component, or carry a background element through multiple sections to create continuity.
Guardrail tip: Instruct it to vary the rhythm. Change section heights, mix dense areas with open space, and avoid the “stack of identical blocks” effect.
Prompt example
Create a layout where elements flow between sections. Allow the top row of cards to overlap the hero image by 20px, merge charts and tables into one component with linked hover states, and remove any empty space between these elements so they feel integrated.
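To make that prompt concrete, here is a hypothetical sketch of the markup it might produce, using Tailwind utilities (on Tailwind's default 4px spacing scale, -mt-5 equals -20px):

```html
<!-- Cards that break out of their section and overlap the hero. -->
<section class="relative">
  <img src="/hero.jpg" alt="" class="h-72 w-full object-cover" />
  <!-- -mt-5 pulls the card row up 20px over the hero; z-10 keeps the
       cards on top so the overlap reads as intentional. gap-0 removes
       the empty space between cards so they feel integrated. -->
  <div class="relative z-10 -mt-5 grid grid-cols-3 gap-0 px-8">
    <article class="bg-white p-6 shadow">Card one</article>
    <article class="bg-white p-6 shadow">Card two</article>
    <article class="bg-white p-6 shadow">Card three</article>
  </div>
</section>
```

The point is not these specific class names but the structural move: a negative margin plus a stacking context is often all it takes to break the "stack of identical blocks" effect.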
Critique-Then-Patch Loops
AI is surprisingly good at improving its own work when you guide it with specifics. Instead of accepting its first draft, have the AI critique its own work and then apply focused changes.
The key is to prompt for targeted adjustments rather than vague improvements. Tighten vertical spacing so the layout feels intentional, increase contrast between key text elements, simplify toolbars or navigation, reorder content to match expected user flows, or make primary actions more prominent.
Guardrail tip: Don’t let AI stop at functional changes. Use these feedback loops to introduce motion, improve visual hierarchy, and adjust color palettes so they move away from safe defaults. This is just as much about aesthetics as it is function.
Prompt example
Review this design for hierarchy and usability. Suggest 3–5 specific changes, then apply them. For example: reduce decorative elements in the header by 50%, increase heading/body contrast by one scale step, normalize spacing between modules to 48px, and ensure all primary actions are right-aligned.
Prompt Against Your Patterns
Document your brand’s patterns in a STYLE_GUIDE.md or /patterns folder (such as button hierarchy, form spacing, table layout, empty state styling, navigation structure) and reference them directly in prompts.
This keeps AI output consistent across different screens and features, even when you’re generating them weeks apart or in different tools.
Guardrail tip: Patterns should be specific. “Primary buttons are blue” is too vague. Define sizes, states, spacing rules, and even copy tone so AI can replicate them exactly.
Prompt example
Build the FilterBar using our standard pattern from /patterns/FilterBar.md with search on the left, view toggles in the center, bulk actions on the right.
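A pattern file like the /patterns/FilterBar.md referenced above might look something like this (every name and value here is illustrative, not a real spec):

```md
# FilterBar

Layout: search input left, view toggles center, bulk actions right.

- Height: 56px; horizontal padding: 16px; gap between groups: 24px.
- Search: ghost input, expands to 320px on focus.
- View toggles: segmented control, 2-4 options, single selection.
- Bulk actions: secondary buttons; destructive actions require confirmation.
- Empty state: hide bulk actions; keep search and toggles visible.
- Copy tone: verbs first ("Filter by status"), sentence case.
```

Notice that it pins down sizes, states, spacing, and copy tone, per the guardrail tip: specific enough that the AI can replicate the pattern rather than reinterpret it.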
Lightweight Figma to V0
If you don’t have a full design system yet, you can still give V0 strong visual direction by creating just a handful of base components in Figma. This is a low-effort, high-impact way to move beyond the default Tailwind + shadcn/ui look.
Start with components that will have the most visual influence: a hero block, a primary button, a card layout, a table, or a widget. Import these into V0 so it uses them as visual references when generating new screens or flows. You can paste the Figma URL into V0 (it will prompt you to authenticate), import CSS from Figma’s Dev Mode, or attach design exports from Figma. Once generated, use V0’s Design Mode to adjust typography, spacing, colors, and shadows visually, without leaving the code context, so your tweaks stay consistent with tokens.
Guardrail tip: Even without a system, introduce basic tokens for color, type, and spacing so the Figma components and generated output share the same DNA.
Prompt example
Use the attached Figma components as the base visual style. Apply the same typography, spacing, and border radius rules across all generated screens. For new components, match them to the closest Figma equivalent before adding variations.
Figma to Cursor via MCP
Use Figma’s Model Context Protocol (MCP) server to connect to Cursor. This pulls selected components into Cursor with their CSS and structure intact, letting you prompt Cursor to style or refactor generated UI to match your base designs.
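In Cursor, registering Figma's MCP server is typically a one-entry config in .cursor/mcp.json. The local URL below is an assumption based on what Figma's desktop app has exposed by default; the exact endpoint has varied across releases, so check Figma's current documentation:

```json
{
  "mcpServers": {
    "Figma": {
      "url": "http://127.0.0.1:3845/sse"
    }
  }
}
```

Once connected, you can select a frame in Figma and reference it directly in a Cursor prompt.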
Guardrail tip: Store your most important Figma components in a dedicated library file so you always have clean, token-consistent references to pull into Cursor.
Prompt example
Style all cards in this project according to the attached Figma design. Match spacing, font sizes, border radius, and hover state exactly.
Token-First Generation in V0 (Most Advanced)
If you already have a design system, load it into V0 before generating anything:
Tailwind config with your color palette, typography scale, spacing, and radii.
CSS variable theming for light/dark or brand variants.
Motion tokens for easing and duration (e.g., ease-out-160ms).
Component registry using shadcn/ui or your own primitives.
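To make the list above concrete, here is a minimal sketch of a token module in TypeScript. Every name and value is illustrative, including the easing curves; map them onto your actual system:

```typescript
// Illustrative design-token module. All names and values below are
// assumptions standing in for a real system; swap in your own.

type EasingToken = { curve: string; durationMs: number };

export const tokens = {
  color: {
    // Point at CSS variables, not raw hex, so light/dark and brand
    // variants stay a matter of swapping :root values.
    primary: "var(--color-primary)",
    surface: "var(--color-surface)",
  },
  spacing: { sm: 8, md: 16, lg: 48 }, // px
  radius: {
    card: 12,
    // The asymmetric button radii from the prompt scaffold below:
    buttonTopLeft: 10,
    buttonBottomRight: 10,
    buttonOther: 6,
  }, // px
  motion: {
    "ease-out-160": { curve: "cubic-bezier(0.22, 1, 0.36, 1)", durationMs: 160 },
    "ease-in-120": { curve: "cubic-bezier(0.64, 0, 0.78, 0)", durationMs: 120 },
  } as Record<string, EasingToken>,
};

// Emit motion tokens as CSS custom properties for a :root block, so
// generated components can reference --ease-out-160 directly.
export function motionCssVars(motion: Record<string, EasingToken>): string {
  return Object.entries(motion)
    .map(([name, t]) => `--${name}: ${t.curve}; --${name}-duration: ${t.durationMs}ms;`)
    .join("\n");
}

export const cssVars = motionCssVars(tokens.motion);
```

Feeding a file like this to V0 alongside your Tailwind config gives generation a single source of truth for color, spacing, radius, and motion.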
Starting with your system means every generated component already reflects your brand’s DNA.
Guardrail tip: Motion and interaction patterns are part of your brand too. Define them as tokens or patterns so AI-generated components feel cohesive beyond just visuals.
Prompt scaffold
Use our Tailwind config and shadcn component registry. Colors must reference our CSS variables, not Tailwind defaults. Typography follows our scale. Buttons use our asymmetric radii: top-left and bottom-right 10px, others 6px. Motion uses --ease-out-160 for entrances and --ease-in-120 for exits.
The Future Belongs to Taste
AI has automated much of the mechanical work of design. That’s not a threat. It’s an opportunity. The value of a designer isn’t in manually coding a navbar or aligning a grid; it’s in shaping the vision, defining the experience, and making the calls that move a product from usable to memorable.
With AI handling the baseline, the designer’s role becomes even more critical. The real leverage comes from taste: knowing when to follow conventions, when to break them, and how to make a product feel uniquely yours.
Great designers will use AI the way a great chef uses a prep cook: to handle the setup so they can focus on the composition and finishing. They’ll start with strong inputs (tokens, components, patterns) and direct the AI toward outcomes that feel distinct and purposeful. They’ll resist the temptation to ship the first pass, knowing that the value they add is in the second, third, and fourth.
In the age of AI, the tools are powerful, but it’s still the designer who decides what “great” looks like. And that’s the point: design shapes AI.