Design methods in the age of AI
I’ve been designing + building products for 20 years. One AI project changed everything I thought I knew.
It was 5 years ago. The brief: an AI assistant for financial advisors. “Easy,” I thought. I brought the playbook: understand users, map needs, prototype, iterate.
Within weeks, every method had failed.
User-centred design has given us incredible tools: journeys, personas, usability testing. It created a shared language for innovation and put users at the centre of product development.
But it also gave us something dangerous: the illusion that good process guarantees good outcomes.
Where design methods break:
🔴 They treat all problems as design problems. Not every challenge needs a workshop. Some need engineering breakthroughs. Some need business model innovation. Some need regulatory change. When your only tool is empathy, everything looks like a user experience problem.
🔴 They assume user needs reveal future possibilities. Advisors thought they wanted better dashboards, not “AI that predicts my clients’ needs and anxiety levels”. Revolutionary products create needs people didn’t know they had.
🔴 They confuse good process with good results. Following the method perfectly doesn’t guarantee you’re solving the right problem. Great design comes from insight, not adherence to frameworks.
What building AI systems has taught me:
🤔 The old tools need rethinking. User research couldn’t predict interactions with something that evolves. Journey maps couldn’t map AI that creates new paths. Prototypes couldn’t capture systems that learn and change.
🤔 The real design challenge isn’t the interface - it’s the intelligence architecture. Should the system interrupt or wait? Learn from the user or protect their privacy? Optimise for efficiency or explainability? These aren’t UX decisions. They’re ethical and technical decisions that determine trust, dependency, and agency.
🤔 And critically: AI systems create feedback loops that change user behaviour over time. Traditional design assumes static user needs. AI design requires predicting how your solution will reshape the problem space.
We’re designing systems that could shape human behaviour for generations. User research and workshops aren’t enough anymore.
We need a new playbook.
What I’ve learnt:
🟢 Ask “should we?” before “how might we?”. Consider consequences, not just possibilities. What data does this use? How does it learn? What could break?
🟢 Develop systems thinking. Your decisions ripple through complex networks of technology, behaviour, and culture.
🟢 Design for responsibility, not just iteration. Every design choice becomes a values statement when scaled through AI.
🟢 Question the AI narrative. Not every problem needs an AI solution. Some need better human processes.
🟢 Partner deeply with engineers and data scientists. The best AI experiences emerge from true collaboration, not handoffs.
The craft evolves. The responsibility remains the same.