# AI UX Trends — and How I Actually Designed for Them
Lately, I’ve been diving into how AI is showing up in the UX world — not just as a flashy tool, but as a real design challenge. Working in a university setting, I’ve seen firsthand how non-technical users interact with tech that’s... well, kind of technical. That tension — between power and usability — is exactly what excites me about designing for AI.
Here’s what I’ve been learning (and loving) lately:
## What’s trending right now in AI UX
### 1. Multi-Agent AI Systems
AI isn’t just “one chatbot” anymore. Now we have multiple agents working together — like an HR bot, a finance bot, and an IT support bot, each handling their own domain. I had to rethink what a "conversation" meant when you're switching contexts mid-task.
💡 How I Designed for It
- I started by assigning roles to each agent: “HR Assistant,” “Finance Helper,” etc. Each had a distinct tone, function, and boundary — like characters in a play.
- To visualize the system, I used swimlane diagrams where each agent had a lane, and I plotted how and when they'd interact with the user or with each other. This helped make the orchestration of different tasks (and transitions between agents) more tangible.
- In the UI, I separated responses by agent using subtle visual cues — like a small label on the response card (e.g., “Reply from HR Assistant”) — and sometimes grouped interactions into agent-based tabs to reduce cognitive load.
- I also added buttons to seamlessly pass the conversation to another agent. For example: “Ask the Finance Assistant how this affects your paycheck” → click to continue that thread.
In short, I designed the agents to feel like distinct team members the user could consult, rather than one generic assistant trying to do it all.
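The routing-and-handoff pattern above can be sketched in a few lines. This is a minimal, illustrative sketch, assuming keyword-based routing; the agent names, topic lists, and `route`/`labeled_reply` helpers are all hypothetical placeholders, not a real framework or the product's actual logic.

```python
import re

# Hypothetical agent registry: each agent gets a role label and a domain.
AGENTS = {
    "hr": {"label": "HR Assistant", "topics": {"leave", "benefits", "policy"}},
    "finance": {"label": "Finance Helper", "topics": {"paycheck", "reimbursement", "tax"}},
    "it": {"label": "IT Support", "topics": {"password", "vpn", "laptop"}},
}

def route(message: str, current_agent: str = "hr") -> str:
    """Pick the agent whose domain keywords match; otherwise stay with the current one."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    for key, agent in AGENTS.items():
        if words & agent["topics"]:
            return key
    return current_agent  # no keyword hit: keep the current thread

def labeled_reply(agent_key: str, text: str) -> str:
    """Prefix each response card with its agent, so context switches stay visible."""
    return f'[{AGENTS[agent_key]["label"]}] {text}'

# A handoff button click becomes a routed message:
agent = route("Ask the Finance Assistant how this affects your paycheck")
print(labeled_reply(agent, "Here is what changes on your payslip..."))
```

A real system would route on intent classification rather than keywords, but the UX idea is the same: the routing decision is explicit, and the reply is always labeled with the agent who made it.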
### 2. Conversational UX Is the Whole Experience
In AI-first products like Perplexity or Rewind, the entire user journey happens inside a conversation. There’s no static “next page” — it’s fluid, branching, and often unpredictable.
💡 How I Designed for It
- I started with hand-drawn conversation trees, not just flows. I imagined not just what the user should say, but what they might say — even if it's vague or emotional. For example:
  - “What’s the weather like today?” → “It’s cold! Want me to help pack for your day?”
  - “Ugh, never mind” → “No worries. I’ll be here if you need anything.”
- I wrote sample replies and fallback messages by hand. Instead of robotic error messages like “I cannot parse your input,” I tried responses like “Hmm, I didn’t quite catch that. Want to try rephrasing it?”
- Most importantly, I tested for messy or weird inputs, not just perfect flows — things like “Why do I even need to do this?” or “This makes no sense” — and made sure the AI responded with empathy, clarity, or a helpful suggestion.
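The fallback behavior described above boils down to one rule: every unmatched input gets an empathetic re-prompt, never a parser error. Here's a minimal sketch, assuming simple keyword matching; the intents and copy are illustrative, not from a real product.

```python
# Hypothetical intent table: (trigger keyword, hand-written response).
INTENTS = {
    "weather": ("weather", "It's cold! Want me to help pack for your day?"),
    "cancel": ("never mind", "No worries. I'll be here if you need anything."),
}

# The empathetic default, instead of "I cannot parse your input."
FALLBACK = "Hmm, I didn't quite catch that. Want to try rephrasing it?"

def reply(message: str) -> str:
    """Match a known intent; otherwise fall back to a friendly re-prompt."""
    text = message.lower()
    for keyword, response in INTENTS.values():
        if keyword in text:
            return response
    return FALLBACK

print(reply("What's the weather like today?"))
print(reply("Ugh, never mind"))
print(reply("This makes no sense"))  # hits the fallback
```

The design point isn't the matching logic (a real assistant would use an LLM or classifier); it's that the fallback copy is written with the same care as the happy path.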
### 3. No-Code Customization Is Expected Now
With tools like Custom GPTs or Gemini in Google Workspace, users expect to shape their AI’s behavior. But if the interface feels too technical, they’ll back out fast.
💡 How I Designed for It
- I applied progressive disclosure. Start with simple presets, and offer advanced options only when the user signals interest (e.g., clicking “Show More”).
- I rewrote UI labels in natural language — “Make it sound more friendly” instead of “Tone: Informal” — so users felt like they were giving a request, not adjusting settings.
- I included prompt buttons like:
  - “Summarize this in 3 bullet points”
  - “Rephrase as an email”
  - “Make it sound more confident”

  These worked way better in usability tests than exposing raw parameters.
The key was making customization feel like a conversation, not configuration.
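Under the hood, a prompt button is just a natural-language label mapped to a fuller instruction the model actually receives. A minimal sketch, with entirely hypothetical labels and prompt text:

```python
# Hypothetical button-to-prompt mapping: the user sees the key,
# the model sees the value.
PROMPT_BUTTONS = {
    "Summarize this in 3 bullet points":
        "Summarize the following text as exactly three concise bullet points.",
    "Rephrase as an email":
        "Rewrite the following text as a short, polite workplace email.",
    "Make it sound more confident":
        "Rewrite the following text in a more confident, assertive tone.",
}

def build_request(button_label: str, user_text: str) -> str:
    """Expand a button click into the full prompt sent to the model."""
    instruction = PROMPT_BUTTONS[button_label]
    return f"{instruction}\n\n---\n{user_text}"

print(build_request("Rephrase as an email", "meeting moved to 3pm, tell the team"))
```

This is why the buttons beat raw parameters in testing: the user states an outcome in their own words, and the mapping to model instructions stays invisible.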
## Tools I’ve been testing out as a designer
- Figma + GPT plug-ins – To speed up placeholder content and test microcopy.
- NotebookLM – I used this to prep research notes before stakeholder meetings. It's like having a smart assistant that actually reads the documents.
- Diagram’s Genius – Still early days, but this one feels like the future of AI-assisted wireframing.
## Underrated tricks that helped me
- Prompt libraries = your new design system
You don’t need to start from scratch every time. Save well-crafted prompts just like you’d save Figma components. You’ll thank yourself later.
- Give your AI features a voice
When I designed a micro-assistant for internal use, giving it a clear role (like “your research sidekick”) helped define tone, behavior, and edge cases.
- Design for misunderstanding
We always test success flows. But with AI? The weird inputs are where UX really shows its strength. Try giving your AI vague, messy, or emotional prompts — then fix what feels broken.
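The “prompt libraries as design system” trick from the list above can be as simple as parameterized templates saved under stable names — reused the way you'd reuse a Figma component. A small sketch, assuming Python's standard `string.Template`; the library names and template text are hypothetical.

```python
from string import Template

# Hypothetical prompt library: named, parameterized templates
# saved once and reused, like design-system components.
PROMPT_LIBRARY = {
    "microcopy_error": Template(
        "Write a friendly, non-technical error message for: $situation"
    ),
    "research_summary": Template(
        "Summarize these notes for a stakeholder meeting in $count bullets:\n$notes"
    ),
}

def render(name: str, **params: str) -> str:
    """Fill a saved template with this use case's details."""
    return PROMPT_LIBRARY[name].substitute(**params)

print(render("microcopy_error", situation="file upload failed"))
```

The payoff is the same as with components: consistent voice across features, and one place to edit when a prompt needs refining.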
## Closing Thoughts
You don’t have to be an AI engineer to design great AI experiences — but you do need to understand the logic, language, and limits of these systems. For me, the most exciting part of this space is that we’re not just building tools — we’re shaping new relationships between people and machines.
