The Problem After The "Why"
In our previous newsletter, we talked about the question most failed AI projects skip: Why?
But here's what usually happens next: Someone finally answers the "why" question. They know why they're implementing AI. They're excited. They have clarity.
And then they immediately jump to building.
Big mistake. 🥴
Because between "why" and "how" sits another critical question most people rush past:
WHAT exactly are you building?

Uncover the solution at: https://workshop.purposedriven.ai/
The WHAT Gap
Here's the pattern we see constantly:
✅ WHY: "Our team spends 5 hours/week searching for documents during client calls"
❌ WHAT: "We need an AI chatbot" (too vague)
❌ HOW: [starts building immediately with ChatGPT] (too fast)
The problem? They skipped defining WHAT the solution actually needs to do.
Not what category it falls into ("chatbot," "assistant," "automation").
But what specific capabilities it needs, what knowledge it requires, and what success actually looks like in practice.
Answer This First 👇
When you say "I need AI for my business," what do you actually mean?
What specific task should it perform?
What information does it need to access?
How will you know if it's working?
What could go wrong if you don't define these things?
Most teams can't answer these questions. And that's exactly why their projects fail.
Real Example: Amazon's Hiring AI
Amazon built an AI recruiting system to screen resumes. The goal was clear: automate hiring to find top talent faster.
What they knew:
WHY: Automate the search for top talent (started in 2014)
HOW: Machine learning trained on 10 years of historical resume data
What they didn't define:
WHAT biases existed in "historical data"
WHAT fairness criteria it needed to meet
WHAT audit processes should be in place
The result?
According to a Reuters investigation:
"In effect, Amazon's system taught itself that male candidates were preferable. It penalized resumes that included the word 'women's,' as in 'women's chess club captain.' And it downgraded graduates of two all-women's colleges, according to people familiar with the matter." (Source)
Amazon tried to fix it by editing the program to be neutral to specific terms. But that didn't solve the fundamental problem. They couldn't guarantee the machine wouldn't find other ways to discriminate.
They disbanded the team in 2017. 🫣
Questions Worth Asking
Before Amazon built anything, what if they had asked:
What does "successful hiring" actually mean?
What are all the ways this could go wrong?
What data are we using, and what biases might it hold?
What safeguards need to be in place?
How will we measure fairness, not just speed?
They knew WHY (speed up hiring) and HOW (machine learning). But they never properly defined WHAT success looked like beyond "faster decisions."
What questions aren't you asking about your AI project?
Exercise: The 3-Circle Solution Definition
Before you build anything, define your solution in three parts:

See more: https://workshop.purposedriven.ai/
1. Input or "feed"
What information does the AI need to know? Or, as we like to put it: what do you need to "feed" the AI?
2. Process or "steps"
What steps does the AI need to take? What does it need to do, and in what order?
3. Output or "result"
What is the result of the AI's work? What kind of deliverable or output are you looking to end up with?
Take 5 minutes right now. Pick one AI idea you've been thinking about.
Draw three circles. Fill them in.
Can you answer all three clearly? Or are you realizing you've been too vague?
Want Help Defining Your WHAT?
We've created a free assessment that helps you think through exactly what your AI solution needs to do, know, and deliver.
It's the same process we use in Day 2 of our Co-Build Sprint, but you can do it on your own, right now.
And if you want to see how the full WHY → WHAT → HOW process works: https://workshop.purposedriven.ai/
Coming Next
Next time, we'll tackle the HOW. The actual building part. See you soon!
Until then,
Maaria & JosuΓ©