cpdeol
Working in public

AI is my leverage, not my replacement

A transparent look at where AI accelerates my work, where it falls short, and what stays irreducibly human.

How I think about it

Accelerate the loop; own the direction

AI-native delivery is a different operating model: shorter feedback loops, stricter risk controls, and measurable value at each boundary. I use AI to compress time inside a direction I have already argued for — not to skip the argument.

The prompt is a design artifact

How you ask shapes what you get. I refine prompts the way I refine flows: constraints, examples, success criteria, and explicit failure modes. Reusable prompts become shared team leverage, not private magic spells.
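The checklist above can be sketched as a tiny artifact. This is a hypothetical illustration, not any real tool's API: the `PromptSpec` shape and `renderPrompt` helper are invented names that just mirror the four parts named in the paragraph.

```typescript
// Hypothetical sketch: a reusable prompt treated as a design artifact.
// Field names (constraints, examples, successCriteria, failureModes)
// mirror the checklist in the text; none of this is a real tool's API.
interface PromptSpec {
  task: string;
  constraints: string[];     // hard boundaries the model must respect
  examples: string[];        // worked examples that anchor the output shape
  successCriteria: string[]; // what "good" looks like, stated up front
  failureModes: string[];    // known ways the output goes wrong
}

function renderPrompt(spec: PromptSpec): string {
  // Serialize the spec into a single prompt string a teammate can reuse.
  const section = (title: string, items: string[]) =>
    `${title}:\n${items.map((i) => `- ${i}`).join("\n")}`;
  return [
    `Task: ${spec.task}`,
    section("Constraints", spec.constraints),
    section("Examples", spec.examples),
    section("Success criteria", spec.successCriteria),
    section("Known failure modes (avoid these)", spec.failureModes),
  ].join("\n\n");
}

// Example: an illustrative research-synthesis prompt, versioned with the team.
const themeClustering: PromptSpec = {
  task: "Cluster these interview excerpts into 3-5 themes.",
  constraints: ["Quote sources verbatim", "Flag low-confidence clusters"],
  examples: ["Theme: Onboarding friction - 'I gave up on day two'"],
  successCriteria: ["Each theme cites at least two participants"],
  failureModes: ["Overfitting a theme to one loud anecdote"],
};

const prompt = renderPrompt(themeClustering);
```

Checking a spec into the team's repo is what turns it from a private magic spell into shared leverage: anyone can read the constraints, argue with the success criteria, and improve the failure-mode list.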

AI-native is not AI-dependent

There are moments — ethics calls, brand-sensitive visuals, fragile stakeholder dynamics — where the best tool is silence and a whiteboard. Knowing when not to reach for a model is as important as knowing when to.

How it shows up across the arc

This is the shape of a typical engagement — not a feature tour. Tools change; the rhythm of pairing acceleration with judgment does not.

Delivery terms used on this page

Discover
Understand users, constraints, and the problem worth solving.
Define
Align goals, scope, and success criteria before building.
Design
Shape the solution across experience, architecture, and decisions.
Deliver
Ship reliable increments with fast feedback and clear ownership.
Adopt
Enable teams to change behavior and use the solution well.
Value
Measure outcomes and compound gains through iteration.

01·Discover

Before pixels or PRDs, I stay close to the problem: interviews, desk research, and the messy signals that do not yet form a story. The goal is not more transcripts — it is a defensible read on what people need and what the business can credibly ship.

Where AI helps

I use models to cluster themes across long interview sets, draft follow-up question banks, and pressure-test whether my synthesis is overfitting to a loud anecdote. That frees time for listening again with fresh ears instead of drowning in note-taking.

  • Claude
  • Granola
  • Notion AI
  • Otter.ai

What stays human

Choosing who to talk to, what to ask, and which tensions matter is judgment work. AI cannot replace the trust you earn in a room or the instinct to dig when an answer feels too tidy.

02·Define

Good delivery starts with a problem statement everyone can argue with productively. I treat framing as a design activity: boundaries, assumptions, success signals, and the risks we are willing to name out loud.

Where AI helps

I generate variants of the problem statement, red-team assumptions, and draft decision memos in different tones — crisp exec summary versus engineering detail. The point is speed through alternatives, not outsourcing the stance I take in the meeting.

  • Claude
  • ChatGPT
  • Cursor

What stays human

Stakeholder alignment and picking the hypothesis portfolio are human calls. AI helps me think in parallel; it does not decide what is worth funding.

03·Design

This is where exploration should feel abundant: flows, narratives, and rough IA before anything is precious. I want enough divergence to surprise the team, then a disciplined path to convergence.

Where AI helps

I use AI to sketch narrative arcs, list edge cases, and compare mental models across personas. In Figma, AI-assisted layout exploration speeds early directions — always as disposable scaffolding, never as the final visual language.

  • Claude
  • Figma
  • Figma AI
  • ChatGPT

What stays human

Taste, brand coherence, and the courage to kill weak directions stay with the designer. AI broadens the menu; I still own the edit.

04·Deliver

I like prototypes that earn their keep: clickable enough to learn, thin enough to throw away. When the risk is interaction or narrative, I bias toward something people can react to — not a slide that pretends to be a product.

Where AI helps

Cursor is where AI pays off most for me: shipping throwaway UI in React, wiring stub data, and iterating with designers and PMs in the same artifact. The loop tightens from days to hours, with guardrails on scope so we do not prototype fiction.

  • Cursor
  • Claude
  • GitHub Copilot
  • Figma
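The stub-data part of that loop looks something like this. Everything here is an invented illustration (the `Ticket` type, `fetchTickets`, `renderBoard`): the point is that the prototype talks to the same interface the real backend would eventually implement, so the UI can be thrown away or kept without rewiring.

```typescript
// Hypothetical sketch: stub data behind the interface a real API would
// eventually implement. Names (Ticket, fetchTickets) are illustrative.
interface Ticket {
  id: string;
  title: string;
  status: "open" | "in_review" | "done";
}

// Swap this for a real API call later; the UI never knows the difference.
function fetchTickets(): Ticket[] {
  return [
    { id: "T-1", title: "Draft rollout plan", status: "open" },
    { id: "T-2", title: "Review pricing copy", status: "in_review" },
    { id: "T-3", title: "Ship beta invite flow", status: "done" },
  ];
}

// A throwaway text "render" that proves the flow end to end; in the real
// prototype this would be a React component consuming the same data.
function renderBoard(): string {
  return fetchTickets()
    .map((t) => `[${t.status}] ${t.title}`)
    .join("\n");
}
```

Because the stub is one function, scope guardrails are easy to enforce: anything not representable in the stub's types is, by definition, out of scope for this build.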

What stays human

Defining what we are trying to learn from each build — and when to stop coding and go talk to a human — is still the craft. Speed without a learning plan is just busywork.

05·Adopt

Adoption is where intent becomes behavior. I treat enablement as product work: clear rollout paths, role-based messaging, and support plans that make the change usable under real operating pressure.

Where AI helps

I use AI to draft training outlines, implementation runbooks, and support macros tailored to each audience, then sharpen manually for accountability and tone. It is strong at consistency across many touchpoints once decisions are set.

  • Claude
  • Cursor
  • Notion AI
  • Linear

What stays human

Change leadership, stakeholder trust, and what we explicitly ask teams to do differently remain human responsibilities. AI can structure the materials, but not carry the commitment.

06·Value

Value is where delivery earns the next investment. I focus on outcome evidence: what changed, what did not, and where we should compound gains in the next cycle.

Where AI helps

I use models to summarize outcome trends, draft executive readouts, and generate follow-on experiment options from KPI movement. AI accelerates synthesis so we can spend more time deciding the next bet.

  • Claude
  • Notion AI
  • Linear
  • Looker Studio
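The KPI-movement step above can be made concrete with a small sketch. The metric names, the 10% threshold, and the `KpiSnapshot` shape are all assumptions for illustration; the idea is just to mechanically surface which moves deserve a sentence in the readout before AI drafts it.

```typescript
// Hypothetical sketch: flag KPI moves worth a line in an executive readout.
// Metric names and the 10% threshold are illustrative assumptions.
interface KpiSnapshot {
  name: string;
  baseline: number;
  current: number;
}

function summarize(kpis: KpiSnapshot[]): string[] {
  return kpis
    // Compute relative movement against the baseline.
    .map((k) => ({ ...k, delta: (k.current - k.baseline) / k.baseline }))
    // Keep only moves larger than 10% in either direction.
    .filter((k) => Math.abs(k.delta) >= 0.1)
    .map((k) => `${k.name}: ${(k.delta * 100).toFixed(0)}% vs baseline`);
}

const readoutLines = summarize([
  { name: "Activation rate", baseline: 0.4, current: 0.46 },
  { name: "Support tickets/week", baseline: 120, current: 118 },
]);
// Only the activation move clears the 10% bar.
```

The filter is deliberately dumb: it decides what gets summarized, not what gets funded. Interpreting why activation moved, and whether to double down, stays a human call, as the next section argues.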

What stays human

Interpreting tradeoffs, choosing what to fund next, and owning the narrative with leadership are human calls. Metrics inform judgment; they do not replace it.

For the operating model behind AI-native programs — hypotheses, guardrails, and evidence — read the longer essay. For the full map of how I partner with teams, start from What I bring.