AI Design Project

Designing with AI

Abstract:

This page represents my perspective, at this particular moment, on how AI design tools are changing the work of everyone involved in shipping software. These are not settled conclusions. The short version of where I currently stand: these tools require more experience to use well, not less. I look at what that means for product managers who trust instinct over user research, for engineers whose relationship with collaboration is about to matter more than ever, and for designers who are entering the field with tool skills but not craft. The argument running through all three is the same: velocity is not a substitute for judgment, and judgment takes years to build.


So, what's all this about AI being able to "design"?

For this project, I used the online AI design tool Lovable.

I created an application that looks for typical SOC 2 violations on websites and hosted (SaaS) applications. The prompts or "conversations" that I used to create the application are available to download.
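The application's actual rules live in the downloadable prompts, but to give a flavor of the category of check involved, here is a minimal Python sketch that flags HTTP response headers whose absence commonly surfaces in SOC 2 readiness reviews. The header list, function name, and sample data are my own illustration, not the app's real rule set.

```python
# Illustrative sketch: flag response headers whose absence often comes up
# in SOC 2 readiness reviews. Header list and names are hypothetical.

# Headers an auditor would typically expect on a public SaaS endpoint.
EXPECTED_HEADERS = {
    "strict-transport-security": "enforce HTTPS (HSTS)",
    "content-security-policy": "restrict script and resource origins",
    "x-content-type-options": "disable MIME sniffing",
    "x-frame-options": "mitigate clickjacking",
}

def audit_headers(headers: dict) -> list[str]:
    """Return one finding string per expected header that is missing."""
    present = {name.lower() for name in headers}
    return [
        f"missing {name}: {why}"
        for name, why in EXPECTED_HEADERS.items()
        if name not in present
    ]

if __name__ == "__main__":
    # Simulated response headers from a scanned site.
    sample = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
    for finding in audit_headers(sample):
        print(finding)
```

A real scanner would fetch live responses and check many more controls (cookie flags, TLS configuration, access logging), but the shape of the work is the same: compare what a site actually serves against what an auditor expects to see.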

So, what's my perspective? How we design software is shifting under our feet, and this kind of tool is the reason why. If you read my prompts, you'll notice that I'm careful to frame each one with experience-backed detail. I do this because my time with a competing AI design tool taught me that these tools need carefully framed guidance to get things even close to "right."

Tools like Lovable are a useful lens for understanding how software design is changing. The assumption that AI lowers the bar for building software is only half true. What I found is that these tools require more experience to use well, not less. Framing a prompt with the right level of specificity, anticipating where the tool will go wrong, and knowing when to override its suggestions all draw on exactly the kind of judgment that comes from years of doing the work the slow way.

On product management, user research, and the limits of intuition

Speed doesn't validate direction. A wrong idea built quickly is still a wrong idea.

There is a school of product thinking, popularized by writers like Marty Cagan, that places enormous confidence in the product team's ability to know what to build. The "faster horses" argument, often attributed to Henry Ford, gets trotted out as a reason to trust internal instinct over user input. The logic is seductive: users can't articulate what they don't yet know they want, so why ask them?

The problem is that this framing conflates two very different kinds of user conversation. Asking users what they want is, indeed, a limited exercise. But that's not what skilled researchers and designers do. We sit with users and ask them to show us how they work, what they reach for, where they hesitate, and what they work around. We watch what they do rather than taking at face value what they say. The insights that come from that kind of structured, carefully facilitated conversation are not faster horses. They are things the product team would never have thought to ask about.

AI tools make this distinction more important, not less. When a PM can generate a working prototype in an afternoon, the temptation to skip the user conversation and ship the intuition grows considerably. But speed doesn't validate direction. A wrong idea built quickly is still a wrong idea. If anything, the acceleration that AI brings to the build side of the process should push PMs to invest more deliberately in the research side, not less. The teams that figure that out will have a meaningful advantage over the ones that mistake velocity for insight.

On the engineers who make the work better, and the ones who don't

The quality of what an AI tool produces will reflect the quality of the thinking brought to it.

The best engineers I have worked with share a particular quality: they come to design conversations genuinely curious about the problem rather than defensive about their domain. They listen for the reasoning behind a design decision, offer technical insight that sharpens the solution, and leave the session having moved the work forward in ways neither party could have managed alone. Those working sessions are some of the most productive of my career, and I have come away from more than a few of them having learned something I didn't expect to.

Not every engineer approaches collaboration that way. Some arrive with a posture that is harder to work with, shaped by years of "designing" and building without meaningful design input. The resistance isn't always overt, but it shows up in the room: a reluctance to treat the conversation as genuinely open, a tendency to present constraints as conclusions rather than starting points, an undercurrent of "I already know how to do this." That dynamic is worth naming not to criticize engineers broadly, but because it points to something real about how the disciplines have historically been siloed, and why breaking those silos down takes deliberate effort on all sides.

AI changes this dynamic in an interesting way. The practice of pair programming, in which engineers review and challenge each other's work in real time, has always been about catching what a single perspective misses. That principle is worth expanding. The collaborative whiteboard sessions that designers, engineers, and product managers currently use to ideate and explore are likely to evolve into something more generative: a room where the collective experience of the team manifests directly through the prompts they craft together. The quality of what an AI tool produces will reflect the quality of the thinking brought to it. That makes the cross-disciplinary working session not a nice-to-have but a core part of the process, and it puts a premium on teams that already know how to think together out loud.

A perspective on design craft, and what AI exposes about its absence

AI amplifies judgment. It also amplifies the absence of it.

Design organizations have spent the last several years investing heavily in design systems, and for good reason. A well-maintained design system is one of the clearest expressions of a team's craft: consistent, considered, and scalable. But design systems are a foundation, not a finish line. They are one piece of a larger picture that also includes the color, voice, and tone guidance that typically originates in marketing, and that designers don't always treat with the respect those decisions deserve.

There is an irony worth naming here. Design systems represent some of the most deliberate, painstaking work a design organization can produce: documented decisions about spacing, color, typography, interaction patterns, and component behavior, all maintained with a level of care that most product work never receives. Teams spend years building and refining them. And yet a near-future AI design tool, given a clear description of a design system as a reference constraint, will likely reproduce its output with reasonable fidelity in minutes. The bespoke, wholly-owned design system, the one that took three years and a dedicated team to build, becomes a prompt parameter. That doesn't make the work of building one worthless today. But it does raise a question that design organizations should be asking now: if the system itself becomes a commodity, what is the durable value? The answer, I'd argue, is the judgment that built it. The decisions about what to include, what to leave out, and why. That knowledge doesn't live in the file. It lives in the people.

The design field also has a supply problem. A significant number of people entering design today arrive via three-month bootcamp programs that teach tools but not craft. They know Figma. They don't yet know what they don't know, and that gap is consequential. It's a version of the same hubris I described in the PM section: the confidence of someone who can operate the instrument without yet understanding the music. Tools come and go. Figma will eventually be replaced by something else. What endures is depth of knowledge about the discipline itself: the dedication to simplicity, the understanding of why a decision works, the judgment that only comes from years of doing the work badly before doing it well.

There's a story, often attributed to Picasso, about a woman who asks him to sketch her portrait. He does it in moments and names a substantial price. She objects: it only took a minute. He replies that it took forty years. The point isn't that speed is suspect. The point is that what looks effortless is usually the product of a long, unglamorous accumulation of judgment.

This is why AI design tools are most consequential, and most risky, in the hands of junior designers. A tool that can generate a working interface in minutes will feel like a shortcut to someone who doesn't yet have the experience to evaluate what it produces. Some organizations will lean into this deliberately, preferring to hire junior designers at lower cost and rely on AI to close the gap. That is a reasonable short-term calculation and a poor long-term one.

The pivot, though, is this: experienced designers who understand the craft deeply are exactly the people best positioned to work with these tools well. Knowing what to ask for, recognizing when the output is wrong, understanding the constraints that make a solution viable rather than merely possible. These are not skills a bootcamp education produces. They are the accumulated product of years of attention to the discipline. The designers who have done that work will find that AI amplifies their judgment rather than replacing it. The ones who haven't will find that it amplifies their gaps instead. That distinction is going to matter more, not less, as these tools become more capable and more widely adopted.

"Someone at IDEO once put it well: your job in any collaborative room is to be thinking about what no one else is thinking about."

Below are some images of the SOC 2 scanning application I created using Lovable.