Why AI demands a new kind of design practice

For over 15 years, I’ve worked with companies across the globe, from fast-scaling startups and smart-city governments to legacy enterprises trying to reinvent themselves. I’ve seen products pivot in a single meeting. Watched trends come and go. Been part of digital experiences that helped brands raise millions, and others that quietly disappeared despite having a great product in hand.

But nothing has shifted the ground under our feet the way AI has.

This isn’t just a new feature. It’s a new interface, a new behavior model, and a new design problem. It challenges our assumptions. Our tools. Even our language.

Design teams are spending hours with legal experts — not debating colors or copy, but trying to understand what users should know, consent to, or even expect from what engineers have built.

That’s when it hit me: we’ve entered a new era. One where software doesn’t just respond; it interprets. And as designers, we need to catch up and build new practices.

The old playbook: clarity, consistency, control


For the past two decades, we’ve mastered a design grammar built on clarity. Dropdowns, buttons, breadcrumbs. Everything is visible. Everything is explainable.

We celebrated predictability. You tapped a button; something happened. Maybe not always the right thing, but something you could explain to QA 🙂

Take Google Maps, a triumph of conventional UX. You type in a location, and it gives you options. You pick a route. Everything is traceable, logical, and hierarchical.

That interface model scaled beautifully.

But then came a new generation of interfaces.

When AI showed up


At first, it was subtle. Gmail started finishing our sentences. Spotify created better playlists than we ever could, and still does. Amazon knows what we’ll reorder before we do.

Then came the flood:

ChatGPT. Midjourney. Copilot. Runway. ElevenLabs. And the list keeps growing.

Suddenly, interfaces weren’t just reactive; they were generative.
Users weren’t selecting from options; they were typing intent.
And the systems weren’t delivering answers; they were making educated guesses.

I’ve watched dev teams rely on GitHub Copilot to refactor their codebases. Helpful? Absolutely. Transparent? Not even close. One developer told me, “It’s faster, but I have no idea why it chose this approach.”

Designers wrestled with similar uncertainty. Midjourney or ChatGPT produced stunning images, but the controls were buried in Discord channels and undocumented syntax. Creative direction became trial and error.

What we began to see was a strange paradox:

More power. Less clarity.

More potential. Less control.

That’s when it became obvious: this isn’t just a new toolset. It’s a design challenge.

We’ve been here before (sort of)


If you’ve been around long enough, you remember the mobile shift. Suddenly, screens shrank. Taps replaced clicks. Context became king. Clients wanted it; we all adapted. We rethought affordances. We designed for gestures, location, and latency.

AI is a bigger shift.

Because it is no longer just about screen sizes. It changes the contract between user and system.

With traditional UX, the system says: “Tell me what to do, and I’ll do it.” 

Traditionally, the system was rule-based software: clear inputs, expected outputs. Think of a banking app: tap a button, see your balance. Predictable.

With AI, the system says: “Tell me what you want, and I’ll try to figure it out.”

It means users can no longer assume that their input will produce the same output, or even a fully explainable one. The trust, expectations, and feedback loops are all different, and that shift demands a new design mindset.

It’s subtle. But it’s huge.
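To make the shift concrete, here’s a minimal sketch of the two contracts. It’s illustrative TypeScript with made-up names, not any real product’s API: the first function honors the old contract, the second imitates the new one.

```typescript
// Old contract: same input, same output, every time.
function getBalance(ledger: Map<string, number>, accountId: string): number {
  return ledger.get(accountId) ?? 0; // deterministic, testable, explainable
}

// New contract: same input, a *plausible* output that can differ per call.
// (A stand-in for a model call; with temperature > 0, repeated runs diverge.)
async function interpretIntent(prompt: string): Promise<string> {
  const guesses = [
    "Transfer $50 to savings?",
    "Show your recent transfers?",
  ];
  return guesses[Math.floor(Math.random() * guesses.length)];
}
```

You can unit-test the first function. The second one you can only evaluate, monitor, and design around.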

What breaks when AI enters the room


Let’s talk about trust.

One of the first things I noticed with AI-powered tools was how fast people moved from excitement to uncertainty. A financial advisor I know asked ChatGPT to summarize complex insurance policies, only to find that it “hallucinated” clauses. A designer I mentored spent hours trying to replicate an image style in Midjourney, only to realize she had no control over its randomness.

These aren’t UX bugs. They’re trust issues. When users don’t know what’s happening behind the curtain, when there’s no feedback, no undo, no rationale, even the most advanced system starts to feel unreliable.

And that’s dangerous. Because people start making real decisions based on outputs they can’t verify.

As design leaders, we can’t treat this as just a usability issue. It’s an ethical one. It’s a human one.

The emerging patterns and gaps


There are glimmers of good AI design out there.

Notion has started integrating AI in a contextual, restrained way, suggesting edits without taking over the writing experience.

Figma’s new “Make Design” feature offers intelligent suggestions based on content blocks, while keeping the designer in the loop. It doesn’t try to be creative; it tries to be helpful. (I don’t really like it yet, though; it still feels like a toy for now.)

But many tools and advanced systems still feel like they bolted on AI without rethinking interaction design.

There’s no taxonomy of actions or commands to depend on. No model for debugging outputs. No space for human judgment to override machine assumptions. At least, not yet.

We’re still treating AI like a feature when it’s really an actor, a collaborator with its own behaviors, errors, and moods.

Designing for AI: what it really means


Designing for AI isn’t about making the model better. That’s the engineers’ job, I know 🙂

Our job is to make the system comprehensible. Navigable. Forgiving.

It means thinking in new ways:

  • How do we expose uncertainty without overwhelming users? (One sketch follows this list.)
  • How do we create interfaces that adapt without becoming unpredictable?
  • How do we build trust when the answer might be right, or confidently wrong?
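There are no settled patterns here yet, but as a thought experiment, here’s one hypothetical shape for that first question. The interface and names below are illustrative, not from any shipping product: a reply object that carries its own uncertainty, so the UI can calibrate how the answer is presented.

```typescript
// Hypothetical: an AI reply that exposes its own uncertainty to the UI layer.
interface AssistantReply {
  text: string;        // what the model produced
  confidence: number;  // 0..1, however the product chooses to estimate it
  sources: string[];   // provenance the user can actually verify
  reversible: boolean; // can the user undo whatever this reply triggers?
}

// The interface hedges instead of hiding the guesswork.
function present(reply: AssistantReply): string {
  const caveat =
    reply.confidence < 0.6
      ? ` (low confidence; check: ${reply.sources.join(", ") || "no sources"})`
      : "";
  return reply.text + caveat;
}
```

None of this makes the model smarter. It just gives human judgment somewhere to live in the interface.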

I really don’t have many answers right now. We need new design principles, but we also need new instincts.

In the AI era, good design is no longer about control. It’s about calibration.

What’s next and why it matters


The tools are evolving fast. The interfaces? Not fast enough. For me, it’s already getting boring: look at all those chat UIs, how many variations have you seen?

If we don’t build better mental models, we risk leaving behind the people who adapt more slowly. Or worse, watching them hand their work over to machines they don’t understand.

That’s why we’re launching “Design for AI” not as a trend, but as a practice. A space to rethink our roles. Share learnings. Call out blind spots. And build a new kind of design muscle.

Because the future of AI won’t be decided by tech alone.

It’ll be shaped by how people use it, or refuse to.

If you’re a designer, product owner, or developer…


Ask yourself:

  • Would you trust your product if it made up a number on its own?
  • Would you ship a feature that couldn’t explain its decisions?
  • Would you hand off your customer’s choices to an invisible model?

If not, welcome. You’re already thinking like someone who’s ready to design for AI.

We’re just getting started.

Let’s build better systems. Ask better questions. And design a future where intelligence is not just artificial but approachable.

Subscribe to our journey