The Problem With Most UX Case Studies
I have read a lot of designer portfolios. Not casually. Systematically, as someone who has been on both sides of the hiring table and who spent months studying what separates the portfolios that get responses from the ones that don’t.
The pattern I keep seeing: designers document their process instead of telling a story. They list the methods they used. They show the wireframes they made. They present the final screens. And then they wonder why they aren’t hearing back.
Here is what I believe: a case study that describes what you did is fundamentally different from a case study that shows what you understood. Hiring managers at the Principal and Director level are not looking for evidence that you completed steps. They are looking for evidence that you saw things others missed.
This post is about how I structured my four case studies, what I cut, what I kept, and what I learned from writing them.

The Three Lenses Every Reader Applies
When I was building displayedux.com, I kept coming back to a framework that shaped every word I wrote for the case studies. Every reader applies three filters simultaneously:
- Business lens: Does this designer understand how their work connects to revenue, retention, or operational efficiency? Do they know what a good outcome looks like beyond aesthetics?
- Process lens: Can this designer navigate complexity? Do they show evidence of research, iteration, decision-making under constraint, and cross-functional collaboration?
- Craft lens: Is the work well-executed? Does the visual communication show sophistication and intentionality?
Most designer portfolios satisfy only the craft lens. The ones that get callbacks satisfy all three, with the business lens leading.
This is not intuitive. Designers are trained to care about how things look and how they work. We are not always trained to connect those decisions to the number that appears in the quarterly review. Writing case studies that lead with business outcomes required me to reframe how I thought about my own work.
What I Led With in Each Case Study
Each of my four case studies opens with an outcome, not a problem statement. Here is what that actually looks like in practice.
Self-Service Customer Portal: 67 days to 35
The business outcome was a 48% reduction in enterprise onboarding time. That is the headline. Not “I redesigned the onboarding experience.” Not “I conducted user research and built a self-service portal.” The number comes first, because the number is why anyone at TeleSign cared about this project.
The supporting context: $500K to $2M+ daily transaction revenue during the same period. 85% of customers completing onboarding without CS intervention. Zero compliance violations across 120+ countries with wildly different regulatory requirements. These numbers did not appear at the end of the case study as a results section. They appeared at the top, in a stats grid, before the reader hit a single word of narrative.

Fraud Prevention Suite: 21 billion transactions protected
The scale of this one is the hook. Most designers have never worked on a system processing 21 billion annual transactions. Most have never designed an interface that non-technical fraud analysts use to make split-second decisions about whether a phone number is legitimate. The number establishes stakes before I explain anything about what I did.
Universal College Application: 47% vs. 20–35% industry average
This is the clearest before/after comparison in my portfolio. The industry standard completion rate for college applications ranges from 20% to 35%. We hit 47%. That is a comparison that is immediately meaningful to anyone who understands conversion. I did not need to explain why completion rate matters. The reader already knows.

What I Cut
The editing process was harder than the writing. Here is a specific list of things I removed from each case study draft.
- All method lists. Phrases like “I conducted user interviews, created wireframes, iterated based on feedback” describe tasks, not thinking. Every working designer did those things. They are not evidence of anything.
- Every wireframe that did not show a pivotal decision. Wireframes belong in a case study only when they illustrate a direction you took or a direction you abandoned. A wireframe shown simply to prove you made wireframes adds nothing.
- Outcome language without attribution. “Improved user satisfaction” is not an outcome. “Post-onboarding NPS increased from 2.8 to 4.2 out of 5” is an outcome. I went through every sentence that claimed an improvement and either attached a number or cut it.
- Any section that did not answer: what did you understand that others didn’t? This was the hardest cut. Sections that described what happened are easy to write. Sections that explain what you saw that shaped the decision are the ones that matter.
The Constraint Section Is Not Optional
One of the most consistent differences I see between junior and senior designer portfolios: senior designers describe constraints. Junior designers present solutions as if constraints did not exist.
For the Self-Service Portal, the constraints were real and significant. The project ran during the pandemic, across US and European time zones, with a distributed team I had never met in person. The budget was limited to a specific story point allocation. Every feature required ruthless prioritization against that ceiling.
These constraints are not qualifications. They are not apologies. They are the conditions under which the work happened, and they explain decisions that would otherwise look arbitrary. Why did we launch with a simplified phone number flow instead of the full one? Because we had 8 story points left and needed to ship. That is not a failure. That is product design.
A hiring manager reading this knows what real projects look like. They have been in rooms where engineering said the feature they designed could not ship on schedule. They have made the same tradeoffs. When you name your constraints honestly, you signal that you have been in those rooms too.
The Reflection Section Is Where Seniority Shows
Junior designers end case studies with outcomes. Senior designers end them with questions.
For the TeleSign portal, the reflection I wrote is specific: given what I know now about the SMB market we unlocked, I would have advocated for a separate onboarding track for SMB from the start rather than retrofitting the enterprise flow. The enterprise track was built for complexity that most SMB customers never needed. We served them well enough, but we served them with a tool that was never really theirs.
That is not a confession of failure. It is evidence of the kind of retrospective thinking that prevents the same mistake in the next project. A VP of Design reading that section knows exactly what it means. They have made the same call.
The 10-Second Test
Hiring managers spend an average of 6 to 10 seconds on the initial portfolio scan. In that window, they are answering one question: is this person worth 20 more minutes?
Every case study card on my work index page shows a specific outcome in the thumbnail title. Not the project name. Not the method. The result:
- Enterprise Onboarding Redesign: 67 Days to 35
- ML-Powered Fraud Prevention for 21 Billion Annual Transactions
- 6 Channels, 1 Interface: 22.2% CTR on RCS vs. 3% for SMS
- One Essay, Every College: 47% Completion vs. 20–35% Industry
Each of those titles answers the business lens question before the reader clicks. The number is the invitation. The story is what follows.
Four Case Studies Is Enough
I have 18 years of work. I could fill a portfolio with 20 projects. I chose four. This was a deliberate editorial decision, not a limitation.
A portfolio of eight to twelve shallow projects signals an inability to prioritize. It says: I don't know which of my projects matters most, so here is all of it. Four deeply executed case studies say: I know what matters, I know why, and I can make that case clearly.
Editing is itself a design skill. A portfolio that cannot edit itself does not inspire confidence that the designer will edit their work on the job.
