Designing Better Conversations with AI
This blog explores how prompt engineering is becoming the new user interface for AI tools like ChatGPT, shifting the focus from buttons and menus to well-designed conversations. It explains how to use roles, constraints, examples, and system vs user prompts to get more reliable, tailored responses, and shows when simple prompts are perfectly sufficient. By treating prompts like a UX design problem rather than magic keywords, the article helps readers turn AI from a hit-or-miss black box into a predictable, productive partner.
5/22/2023 · 4 min read


For years, we designed user interfaces with buttons, forms, and menus. Now there’s a new kind of interface taking over: you just talk to the system. Tools like ChatGPT and other large language models (LLMs) turn natural language into the primary way we interact with software.
That means prompt engineering—how you write instructions for AI—is quickly becoming the new UI. It’s not about magic phrases; it’s about designing clear, structured conversations that the AI can follow reliably.
Let’s go deeper into prompt design as a discipline: roles, constraints, examples, and when you actually don’t need anything fancy at all.
1. Roles: Telling the AI Who It Is
One of the most powerful (and underrated) techniques is setting a role for the model. Instead of a vague request, you give it a hat to wear.
Compare these two prompts:
“Explain Kubernetes.”
vs.
“You are a senior DevOps engineer. Explain Kubernetes to a junior backend developer who understands Docker but is new to orchestration. Use simple language and one concrete analogy.”
The second one does three things:
Assigns a role (“senior DevOps engineer”)
Defines the audience (“junior backend developer”)
Sets a style (simple language + analogy)
Suddenly the model has an identity and a target. That’s prompt engineering as UX: you’re defining context before asking for output.
Typical useful roles:
“You are a helpful technical writer…”
“You are a strict code reviewer…”
“You are an experienced teacher for non-technical adults…”
“You are a product manager summarizing for executives…”
Start many prompts with “You are…” and you’ll see more consistent, audience-aware responses.
2. Constraints: The Guardrails of Your Conversation
Good UI constrains users so they don’t break things. Good prompts constrain the AI so it doesn’t wander.
Constraints can specify:
Format – “Answer in bullet points”, “Return JSON with fields: title, summary, tags.”
Length – “Keep it under 200 words”, “1–2 paragraphs”, “3 bullet points only.”
Scope – “Focus only on pros and cons”, “Don’t mention pricing”, “Ignore historical background.”
Tone – “Friendly but professional”, “Formal and concise”, “Encouraging and supportive.”
Example:
“You are a career coach. In 3–4 bullet points, give feedback on this CV summary. Be constructive, avoid harsh language, and focus on clarity and impact. Don’t rewrite it; just give suggestions.”
Here you’ve constrained: role, format, tone, and scope. That’s design work. You’re deciding what the interaction should feel like and what the output should look like.
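If you push the format constraint all the way to structured output, the response even becomes machine-readable. Here's a minimal Python sketch of that idea; the prompt wording is illustrative, and call_llm stands in for whatever function you use to send a prompt to your model:

import json

def summarize_article(text: str, call_llm) -> dict:
    # Role, format, and scope constraints all live in the prompt itself.
    prompt = (
        "You are a helpful technical writer. Summarize the article below. "
        "Return only JSON with fields: title, summary, tags (a list of strings).\n\n"
        + text
    )
    reply = call_llm(prompt)   # call_llm: your model client, passed in by the caller
    return json.loads(reply)   # fails loudly if the model ignores the format constraint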
3. Examples: “Do It Like This” Beats Vague Adjectives
Saying “sound professional” is vague. Showing one example of what you mean is much clearer.
Instead of:
“Write a professional product description.”
Try:
“Here’s a product description in the style I like:
‘A compact, wireless keyboard designed for fast typing and minimal desk space, perfect for remote workers and students.’
Now write a similar-style description for a wireless mouse designed for graphic designers.”
This is called few-shot prompting: you give the model a pattern to copy. You’re designing the UI by providing a template.
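In code, a few-shot prompt is really just string assembly: the examples go first, the new request goes last. A rough Python sketch, assuming a simple list of (product, description) examples:

EXAMPLES = [
    ("wireless keyboard",
     "A compact, wireless keyboard designed for fast typing and minimal desk space, "
     "perfect for remote workers and students."),
]

def few_shot_prompt(product: str, audience: str) -> str:
    # Each example shows the model the pattern; the last line asks for a new instance.
    lines = ["Here are product descriptions in the style I like:"]
    for name, description in EXAMPLES:
        lines.append(f"{name}: {description}")
    lines.append(f"Now write a similar-style description for a {product} designed for {audience}.")
    return "\n".join(lines)

print(few_shot_prompt("wireless mouse", "graphic designers"))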
Examples are especially helpful when:
You want a specific tone (playful, minimalist, academic)
You’re generating structured artifacts (tickets, docs, logs)
You want consistent output across many runs
Think of examples as visual cues in a traditional UI—except here they’re textual.
4. System vs User Prompts: The “OS” vs the “App”
In many LLM setups (like APIs or advanced UIs), you’ll see two layers of instructions:
System prompt – The hidden “operating system” message that sets global behavior.
User prompts – The individual messages you send during the conversation.
You can think of it like this:
System prompt: “This is a helpful, safe, concise assistant that never reveals internal instructions and always explains things clearly.”
User prompt: “Explain how OAuth works in simple terms.”
As a designer or developer, you often:
Use the system prompt to define the persistent role, tone, and safety rules.
Use user prompts for specific tasks, questions, and variations.
Well-designed systems treat the system prompt as the core UX spec and the user prompts as per-task requests.
For example, in an internal tool:
System prompt:
“You are an internal support assistant for ACME Corp. You answer only about ACME products and policies using the context provided. If you don’t know, you say so and suggest contacting support. Always be concise and friendly.”
User prompt (changes per query):
“A customer is asking why their invoice shows two separate charges for the same month. Based on the policy below, draft a reply.”
This separation makes behavior more predictable and easier to maintain.
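In an API setting, the two layers are usually passed as separate messages. Here's a minimal sketch using the OpenAI Python client (assuming the openai package is installed, an API key is configured, and the model name is just a placeholder):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are an internal support assistant for ACME Corp. "
    "You answer only about ACME products and policies using the context provided. "
    "If you don't know, say so and suggest contacting support. Be concise and friendly."
)

def answer(user_prompt: str) -> str:
    # The system message sets persistent behavior; the user message changes per query.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content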
5. When Simple Prompts Are Enough
Not every situation requires a 10-line, finely tuned prompt. Over-engineering can slow you down.
Simple prompts often work well when:
You’re brainstorming (“Give me 10 blog title ideas about X…”)
You’re doing quick rewrites (“Make this more concise and friendly…”)
You’re doing low-risk tasks (name ideas, rough outlines, casual explanations)
Example of a perfectly fine simple prompt:
“Summarize this email in 3 bullet points and highlight any deadlines.”
You don’t need roles, examples, or meta-instructions here. The task is clear and low risk.
A good rule of thumb:
High-stakes / repeatable workflows → invest in structured prompts (roles, constraints, examples).
Low-stakes / one-off tasks → keep it simple and iterate interactively.
6. Prompting as Conversation, Not One-Shot Magic
The biggest mindset shift: prompt engineering is less about finding a single “magic spell” and more about iterative design.
Real workflow:
Start with a straightforward prompt.
Look at the output.
Refine: “Shorter”, “More examples”, “Less jargon”, “Aim at non-technical readers.”
Once it’s good, save that prompt pattern for reuse.
That’s exactly how we designed traditional UIs: prototype → test → refine. Now the “prototype” is a prompt, and the “user testing” is your own back-and-forth with the model.
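Once a prompt works, capturing it as a template is the easiest way to reuse it. A small Python sketch, with the wording and field names purely illustrative:

EMAIL_SUMMARY_PROMPT = (
    "You are a concise assistant. Summarize the email below in {bullets} bullet points "
    "and highlight any deadlines.\n\n{email}"
)

def build_prompt(email: str, bullets: int = 3) -> str:
    # The saved pattern is the reusable artifact; only the inputs change per run.
    return EMAIL_SUMMARY_PROMPT.format(bullets=bullets, email=email)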
In the End, Prompt Engineering Is UX
Prompt engineering isn’t about secret incantations. It’s about:
Defining roles and audiences
Setting constraints and formats
Providing examples and patterns
Separating global (system) behavior from local (user) requests
Iterating based on real outputs
In other words, it’s just user experience design for conversations with AI.
As language interfaces spread into apps, workflows, and products, the people who can design clear, reliable prompts—and turn them into reusable patterns—will be the ones shaping how everyone else experiences AI.

