Language is the new canvas
Building a system where one word drives layout, color, type, and sound
In a world where AI can generate layouts, components, and UI in seconds, I’ve been exploring what stays uniquely human in the design process. What’s the role of taste, judgment, and system thinking when a single prompt can generate an entire interface?
I recently built Verbal, a prototype language-to-UI engine that transforms language into structured color, layout, type, and tone.
TL;DR
Built Verbal, a tool that turns a single word into a generative moodboard (color, type, shape, sound).
Used GPT‑3.5 with structured prompt logic + a visual system built in Tailwind, ShadCN, and V0.
Explores how to encode taste and structure into language-driven, programmable UI.
Live demo: alantippins.com/moodboard-app (Use the sample buttons, or generate a custom board with your own OpenAI API key.)
Source code: github.com/alantippins/moodboard-app
From screens to systems
In a post-handoff world where AI can generate layout and code, design is still about solving problems. But the medium we use to solve them is changing rapidly.
Instead of focusing on static screens or prototypes, we're designing systems that interpret intent. Verbal is one of my experiments in this shift. What happens when language becomes the spec?
Quick specs
Framework: Next.js + Tailwind + ShadCN
Tools used: Lovable → Figma → V0 → Windsurf
Model: GPT‑3.5 (structured JSON output)
AI Layers: Color, type, shape, sound
System Design: Semantic tokens + OKLCH logic
Part 1: Scaffolding the system
Defining the Interaction
I started in Lovable as an initial sketchpad, not for visuals but to test the interactions and structure of the project. I prompted it to generate a README that defined feature sets, tone logic, and output rules.
Here's the prompt I started with:
I'd like to build Verbal, a small creative tool that transforms a single word into a generative micro moodboard—color, type, sound, and shape—for rapid inspiration. Here's my initial readme:
You type a word like “euphoric”...
Verbal returns:
- Color palette
- Font pairing
- Ambient sound
- Shape or motif
All generated based on tone-mapping logic.
Can you create it and help me phase it out? Don't code yet.
From there, I added examples like "stone" with example palettes, type suggestions, shapes and sounds. Prompting Lovable to help generate the initial structure gave me fast feedback on the idea, the system components, and the tone logic.
Prototyping the layout
It worked well enough to validate the idea and explore interaction patterns. Lovable helped me sketch the logic fast. I then redesigned it from scratch in Figma to fully control layout and spacing.
The layout was intentionally minimal: a text input, a few example buttons with shape icons, and a strict grid for the output. Each part followed a system: every element sits in a responsive grid, type pairings render in styled blocks, and colors are applied semantically based on the palette.
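For a sense of scale, the whole shell is just a handful of elements. Here's a rough sketch of that structure in Tailwind-flavored TSX; the class names and component shape are illustrative, not the actual markup in the repo:

```tsx
// Sketch of the minimal layout: input, example buttons, and a strict
// responsive grid for the output tiles. Illustrative only.
export function MoodboardShell({ swatches }: { swatches: string[] }) {
  return (
    <main className="mx-auto max-w-3xl space-y-6 p-6">
      <input
        className="w-full rounded-md border px-4 py-2"
        placeholder="Type a word like euphoric"
      />
      <div className="flex gap-2">
        {["stone", "euphoric", "dusty peach"].map((word) => (
          <button key={word} className="rounded-full border px-3 py-1 text-sm">
            {word}
          </button>
        ))}
      </div>
      <section className="grid grid-cols-1 gap-4 sm:grid-cols-2">
        {swatches.map((color) => (
          <div key={color} className="h-24 rounded-lg" style={{ backgroundColor: color }} />
        ))}
      </section>
    </main>
  );
}
```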

Tool Evaluation and code export
I tested importing my Figma files into Bolt, Lovable, and V0. For this project, V0 preserved my layout the closest. It mostly mapped the right spacing and component structure. I’ve had projects where Bolt or Lovable outperformed, so tool performance can be project-dependent. But for this, V0 delivered.
Windsurf scaffolding and setup
I then brought the V0 export into Windsurf for more control. Windsurf is closer to a traditional IDE, where I could edit Tailwind classes directly and also just ask questions and prompt the model. I structured everything like a design system: semantic colors, layout grid, shape logic, type pairings. When spacing or hierarchy felt off, I’d screenshot the layout, paste it into Windsurf, and prompt the model to fix it. This manual loop of screenshotting, prompting, and adjusting hints at a missing layer in current tooling: semantic layout feedback loops. Eventually, I want a tool where Figma, prompt logic, and code converge.
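One concrete piece of that design-system structure: the semantic color roles can live as Tailwind tokens backed by CSS variables, so a generated palette restyles the whole board at once. A minimal sketch, assuming variable-backed tokens (the names are illustrative, not the exact ones in the repo):

```ts
// tailwind.config.ts (sketch): semantic color tokens wired to CSS variables
// so a generated palette can restyle the whole board at runtime.
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./app/**/*.{ts,tsx}", "./components/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        board: {
          background: "var(--board-background)",
          accent: "var(--board-accent)",
          heading: "var(--board-heading)",
          text: "var(--board-text)",
        },
      },
    },
  },
};

export default config;
```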
Building out and refining the static examples
Lastly, before I wired up any AI, I hardcoded a few sample boards like "Dusty peach" to make sure the structure was in place.
I made sure each color and item on the board mapped to a semantic role, i.e. background, accent, headingColor, and textColor, alongside a swatches[] array. These values get applied across the layout to fonts, SVGs, and audio tiles. This was preparation for layering generative AI on top, and it felt a lot like turning patterns into components in Figma.
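Here's roughly what one of those hardcoded boards looks like as data. The semantic color fields match the roles above; the font and audio fields are illustrative extras, and the hex values are placeholders, not the actual "Dusty peach" palette:

```ts
// Sketch of the hardcoded board shape. Semantic color roles come straight
// from the layout; other fields and all values here are placeholders.
type Moodboard = {
  name: string;
  background: string;   // page / card background
  accent: string;       // shape fills, highlights
  headingColor: string; // heading text
  textColor: string;    // body text
  swatches: string[];   // full palette, rendered as tiles
  headingFont: string;  // Google Fonts name
  bodyFont: string;
  audioTone: string;    // descriptor used to pick the ambient sound
};

const dustyPeach: Moodboard = {
  name: "Dusty peach",
  background: "#f6e3d7",
  accent: "#e2a184",
  headingColor: "#5b3a2e",
  textColor: "#7a564a",
  swatches: ["#f6e3d7", "#eec3ab", "#e2a184", "#b97a5e", "#5b3a2e"],
  headingFont: "Fraunces",
  bodyFont: "Work Sans",
  audioTone: "warm ambient hum",
};
```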
Part 2: Wiring language to visual output
Once the UI was solid, the next challenge was turning a single word or phrase into structured design output using OpenAI.
I referenced the actual moodboard.tsx layout file and asked Windsurf:
“Help me build a prompt that could generate this kind of layout dynamically based on a word. Leverage the example palettes and use moodboard.tsx."
That became the foundation for the prompt logic. There was a lot of trial and error, but eventually I got it working. I mapped each "component" explicitly so every piece of the moodboard had a clear slot. This was one of the final OpenAI prompts sent to the API:
“You are a designer AI. Given a word, return a JSON with name, semantic colors, font pairings, and an audio tone. Use OKLCH-consistent palettes and real Google Fonts.”
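The request itself is straightforward. A minimal sketch of how it could look with the OpenAI Node SDK; the helper name, JSON parsing, and response_format setting are my assumptions, not necessarily how the repo wires it up:

```ts
import OpenAI from "openai";

// API key could come from the environment or from the user, as in the demo.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function generateBoard(word: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    // Nudges the model toward strict JSON (an assumption; the repo may
    // rely on prompt wording alone).
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "You are a designer AI. Given a word, return a JSON with name, " +
          "semantic colors, font pairings, and an audio tone. " +
          "Use OKLCH-consistent palettes and real Google Fonts.",
      },
      { role: "user", content: word },
    ],
  });

  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```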
Once the basics were in place, I went through each component step by step to make sure the generation made sense for each.
I validated each output field by comparing it back to the tone of the original word. If a word returned dull, muddy, or inaccessible colors, the system failed. This manual audit helped tune the prompts and reinforced how tightly the structure should reflect the input word’s tone.
Color Logic with OKLCH
Then I jumped into fine-tuning the color logic. I started with hex values but moved to OKLCH because I wanted more control over contrast and perceptual consistency. Separating lightness, chroma, and hue gave me more predictable, accessible results across generative themes. Contrast is enforced by selecting readable combinations based on luminance rather than falling back to white or black.
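As a sketch of that selection rule, assuming OKLCH lightness values are already on hand (the type and function names here are hypothetical):

```ts
// Sketch: given OKLCH-ish colors, pick the swatch with the largest
// lightness difference from the background as the readable text color.
// Assumes lightness `l` is in [0, 1]; expects a non-empty swatch list.
type OklchColor = { l: number; c: number; h: number };

function pickReadableText(background: OklchColor, swatches: OklchColor[]): OklchColor {
  return swatches.reduce((best, candidate) =>
    Math.abs(candidate.l - background.l) > Math.abs(best.l - background.l)
      ? candidate
      : best
  );
}
```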
Note: I did start with HEX and currently convert to OKLCH in the app; that’s something I need to refactor later.
Type pairings and shapes
Font pairings are pulled from Google Fonts. I curated combinations that offer enough variation without breaking consistency, and pairings are rendered with palette-driven colors and the same layout logic. I've explored this a bit in another experiment: https://typepairs.com/.
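Conceptually, the pairing logic boils down to constraining the model to a curated pool. The pairings below are examples of that idea, not the actual list behind typepairs.com or the repo:

```ts
// Sketch: a curated pool of Google Fonts pairings the prompt can choose from.
// Constraining the model to a known pool keeps every pairing renderable.
const fontPairings = [
  { heading: "Playfair Display", body: "Inter" },
  { heading: "Space Grotesk", body: "IBM Plex Sans" },
  { heading: "Fraunces", body: "Work Sans" },
] as const;

function isKnownPairing(heading: string, body: string): boolean {
  return fontPairings.some((p) => p.heading === heading && p.body === body);
}
```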
Shape generation uses SVGs built on a 3x3 grid. I gave GPT examples of combinations using triangles, squares, and circles. Shapes are layered, centered, and follow alignment rules. The results are variable but predictable enough to feel intentional.
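A stripped-down sketch of that 3x3 placement logic; the primitive and cell naming here is hypothetical, and the repo's real SVG code differs:

```ts
// Sketch: place simple primitives on a 3x3 grid inside a 300x300 viewBox.
// Cell indices run 0..8, left to right, top to bottom.
type Primitive = { kind: "circle" | "square" | "triangle"; cell: number; fill: string };

function cellCenter(cell: number, size = 100) {
  const col = cell % 3;
  const row = Math.floor(cell / 3);
  return { cx: col * size + size / 2, cy: row * size + size / 2 };
}

function renderShape(p: Primitive): string {
  const { cx, cy } = cellCenter(p.cell);
  const r = 40; // primitive size within a 100x100 cell
  if (p.kind === "circle") {
    return `<circle cx="${cx}" cy="${cy}" r="${r}" fill="${p.fill}" />`;
  }
  if (p.kind === "square") {
    return `<rect x="${cx - r}" y="${cy - r}" width="${2 * r}" height="${2 * r}" fill="${p.fill}" />`;
  }
  return `<polygon points="${cx},${cy - r} ${cx + r},${cy + r} ${cx - r},${cy + r}" fill="${p.fill}" />`;
}

function renderMotif(primitives: Primitive[]): string {
  return `<svg viewBox="0 0 300 300">${primitives.map(renderShape).join("")}</svg>`;
}
```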
Even in this phase, decisions like semantic color roles or SVGs on a 3x3 grid were less about fine-tuning specific screens and more about teaching the system how to work. That’s the core of where I think design is headed.
Prompting and Model Behavior
Initially I used GPT‑4, which was overkill for this use case. I switched to GPT‑3.5, which ended up being faster at a fraction of the cost.
Prompts are tightly scoped and return structured JSON with roles, values, and rationale. I tested widely across themes to avoid generic outputs or noise. I also checked contrast and mood alignment on the fly, often prompting Windsurf with layout screenshots and asking for adjustments.
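One way to keep that JSON contract honest is to validate the response before rendering. A small sketch using zod (an assumption on my part; the repo may handle this differently), with field names mirroring the semantic roles described earlier:

```ts
import { z } from "zod";

// Sketch: validate the model's JSON before rendering anything.
// Field names mirror the semantic roles above; the exact schema may differ.
const MoodboardSchema = z.object({
  name: z.string(),
  background: z.string(),
  accent: z.string(),
  headingColor: z.string(),
  textColor: z.string(),
  swatches: z.array(z.string()).min(4),
  headingFont: z.string(),
  bodyFont: z.string(),
  audioTone: z.string(),
});

export function parseBoard(raw: string) {
  // Throws on malformed or incomplete JSON, surfacing prompt regressions early.
  return MoodboardSchema.parse(JSON.parse(raw));
}
```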
Why This Matters (to me)
Product design has always been about solving problems. It was never just about the tool. Figma is a tool. So is AI.
What’s changed is that the system-thinking side of design can be programmable.
Verbal is a proof of concept that language can drive real-time, reusable, systemized aesthetics. Can you turn a single word or phrase into consistent, generative output without sacrificing intent?
Unlike static token systems like Tailwind or Radix Themes, this prototype tests how language can drive semantic theming in real time. This system pattern could extend to brand kits, generative theming engines, or input-based onboarding flows. The broader opportunity is infrastructure: a language-to-UI engine that adapts brand, mood, or intent across surfaces. Today it builds moodboards. Tomorrow, it could assemble entire onboarding flows or product themes based on tone or persona.
In this shift, we’re starting to design the systems that build the UIs. This doesn’t replace designers; it raises the bar for clarity and input quality. If the structure is sound and the semantics are clear, the UI can write itself.
Language is the new canvas.
Have ideas and want to contribute? Want to fork or remix it?