"Just because a system is smarter, doesn’t mean that we can rely on it for everything."
That's how Íñigo Cavestany Villegas, who leads IBM's embeddable AI ecosystem in Spain, crystallizes one of the most critical questions facing anyone building with AI today: just because we can automate something, does that mean we should?
While much of the AI conversation centers on efficiency gains and cost savings, Íñigo offers something far more nuanced: a framework for thinking about AI as a partner rather than a replacement, and a sobering reminder of the responsibilities that come with putting your name on AI-powered systems.
If you're building AI solutions for your business, training others on AI implementation, or simply trying to navigate the overwhelming landscape of AI possibilities, this conversation offers a masterclass in intentional technology adoption.
The Two Categories You Need to Understand
Before we dive deeper, Íñigo clarifies a fundamental distinction most people miss:
"There's two categories of Gen AI powered models that we are using right now. One is the AI assistant that just as you perfectly phrased and how the name stands for is an AI that is assisting us on tasks. And then we have the AI agent that is actually trained to reason and act on behalf of us on specific tasks."
AI Assistants need your input to produce output. Think: "Claude, create a picture of a butterfly." You give the order, it responds.
AI Agents work autonomously on your behalf. They don't wait for orders. They're trained to complete specific tasks without constant direction.
Íñigo shares a brilliant analogy from one of his students: think of AI like dogs. An obedient pet who sits when you say "sit" is like an AI assistant. But a service dog trained to guide someone safely through a city? That's an AI agent. Autonomous, trained for a specific purpose, acting on your behalf without needing constant commands.
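To make the distinction concrete for anyone building with these tools, here's a minimal sketch in Python. The model call and the single tool are stubs invented for illustration, not any specific vendor's API: an assistant waits for your prompt and returns one response, while an agent loops, choosing actions toward a goal without asking you at each step.

```python
# A toy illustration of the assistant vs. agent patterns described above.
# call_model stands in for whatever LLM you use; here it is a stub so the
# example runs on its own.

def call_model(prompt: str) -> str:
    """Hypothetical model call; swap in your provider's client."""
    return f"[model response to: {prompt[:40]}...]"

def assistant(prompt: str) -> str:
    """Assistant: you give the order, it responds, and you stay in the loop."""
    return call_model(prompt)

def agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    """Agent: given a goal, it decides what to do next and acts on your behalf."""
    notes = [f"Goal: {goal}"]
    for step in range(max_steps):
        decision = call_model("Pick a tool or say DONE.\n" + "\n".join(notes))
        if "DONE" in decision:
            break
        tool_name = next(iter(tools))       # toy tool choice; a real agent reasons here
        result = tools[tool_name](goal)     # it acts without waiting for your command
        notes.append(f"Step {step}: used {tool_name} -> {result}")
    return "\n".join(notes)

print(assistant("Create a picture of a butterfly"))
print(agent("Plan a safe walking route across the city",
            tools={"plan_route": lambda g: "route drafted"}))
```

Note the step limit in the agent loop: it's a small example of exactly the kind of built-in safeguard the later sections argue for.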
Understanding this distinction isn't academic. It fundamentally changes how you approach building, implementing, and taking responsibility for AI systems.
From 1% to 100%: The Democratization No One Talks About
"We passed from having 1% of the world capable of building tech to 100% of the world, although most people haven't realized it yet, that they are actually software developers all of the sudden."
This is one of the most profound shifts Íñigo identifies in the AI revolution. For decades, building software required knowing programming languages, understanding complex systems, and having technical training. Now? Anyone who can write in their native language can create functional AI applications.
The problem? Most people haven't realized they now hold this power. And with great power comes, well, you know where this is going.
The Responsibility You Didn't Know You Signed Up For
Here's where Íñigo drops one of the most important truths in the entire conversation:
"Whenever your AI agent commits a crime, if it's under your name, you are the criminal."
Read that again. Let it sink in.
When you build an AI system, train an AI agent, or implement an AI solution in your business and put your name on it, it's yours. Not the platform's. Not the AI's. Yours.
As Íñigo explains: "Even if you have a human intern helping you do something, the moment you sign something, it's yours. So the same happens to AI."
This is especially critical given what else Íñigo observes: "We have made everyone a software developer, but we don't know what is in the back end, in the grounds of that that we are developing."
Think about that. You can now build powerful AI systems without understanding what's happening under the hood. It's like giving everyone the ability to perform surgery without medical training. The tools are accessible, but the knowledge of when and how to use them responsibly? That's lagging dangerously behind.
The "Reverse Engineering" Framework for Responsible AI
So how do you navigate this responsibility minefield? Íñigo offers a counterintuitive but brilliant approach:
"When it comes to AI, I think a great way is to reverse the process... Instead of thinking, 'I want this AI agent to increase sales by 50%,' I would ask myself, where are the worst things that could happen by implementing this AI agent?"
This is the opposite of how we're typically trained to think.
Usually, we:
Define our goal
Plan activities to reach that goal
Execute the plan
Deal with problems as they arise
Íñigo suggests reversing that order and starting with the problems:
Identify the worst possible outcomes
Build safeguards against those outcomes
Only then pursue your positive goals
"By reversing things, we start thinking more on all the potential problems and negative impacts that our technology could have," he explains. "If you think that an AI that is very unethical would destroy your business, start by building the principles of that AI. And until you don't determine that it doesn't have a probability or at least a relevant one of causing that pain, don't launch it."
The Problem with "Black Box" AI
Íñigo highlights a troubling trend: just as technology was becoming more open-source and transparent, AI is moving in the opposite direction.
"When it comes to AI, sadly, there's this race where nobody wants to be imitated, copied, et cetera. And a lot of the models and platforms we use are black boxes that we have no clue what's behind them."
This creates a dangerous paradox: the tools that give us the most power are also the ones we understand the least. And when those tools are making decisions on our behalf, speaking in our voice, or representing our values? That lack of transparency becomes a serious ethical concern.
"How explainable, traceable, transparent it is" should be one of your primary evaluation criteria when choosing AI tools, Íñigo argues. But how many of us are actually asking these questions before we adopt the latest AI platform?
Teaching the Next Generation: An Exercise in Digital Detox
As a professor teaching technology and entrepreneurship, Íñigo has a fascinating first assignment for his students:
"My first assignment tends to be, hey, why don't you stay one whole day or at least eight hours in a day without any technology?"
Why start a tech class by avoiding technology? Because of what Íñigo identifies as society's "huge challenge when it comes to retaining anyone's attention."
"By simply telling them, hey, why don't you check the analytics on your phone and see how many hours you use a day in your phone, that can be scary for some, but for many others, they will not be really impressed by it," he explains. "But when they actually go through... the first day and say, today I'm not going to do anything with tech for eight hours... a lot of people actually tell me that that's probably one of the best exercises that we do in my entire class."
The goal isn't to reject technology. It's to build awareness of our relationship with it. As Íñigo puts it: "It makes them reflect on something that they didn't stop a minute to think about."
This exercise mirrors the broader question facing all of us building with AI: Are we using technology intentionally, or has it started using us?
The Assistant That Knows When to Step Back
Íñigo practices what he preaches. He's built an AI assistant version of himself for his students, but with very specific boundaries:
"I tend to have the assistant for very objective answers... Whenever it comes to more open-ended reflection-based questions, I train the assistant to redirect those to me."
The AI handles the repetitive, objective questions: When is the exam? What's the grading rubric? What format should the project take?
But questions about student performance, personal struggles, or subjective feedback? Those go to the human professor.
"I don't want to classify my students in the assistant in a way that they feel just like numbers and that I don't put enough attention or care into them."
This is amplification, not automation. Using AI to handle the mundane so humans can focus on what requires genuine human insight, care, and connection.
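As a rough sketch of how such a boundary could be wired up (the FAQ entries, the keyword-based classifier, and the redirect wording are illustrative assumptions, not Íñigo's actual assistant): answer only the objective questions, and hand everything open-ended or personal back to the human.

```python
# Rough sketch of an assistant that answers objective questions itself and
# redirects open-ended, personal, or subjective ones to the human professor.
# The FAQ entries and the keyword-based classifier are illustrative placeholders.

OBJECTIVE_FAQ = {
    "when is the exam": "The exam is on the date listed in the syllabus.",
    "what is the grading rubric": "See the rubric posted on the course page.",
    "what format should the project take": "A short written report plus a demo.",
}

SUBJECTIVE_MARKERS = ("how am i doing", "struggling", "feedback on my", "should i", "i feel")

def answer(question: str) -> str:
    q = question.lower().strip(" ?")
    if any(marker in q for marker in SUBJECTIVE_MARKERS):
        # Open-ended or personal: the human stays in the loop.
        return "That's a conversation worth having in person; please email me or come to office hours."
    if q in OBJECTIVE_FAQ:
        return OBJECTIVE_FAQ[q]
    # Anything the assistant isn't sure about is also escalated, not guessed.
    return "I'm not certain about that one; I'll forward it to the professor."

print(answer("When is the exam?"))
print(answer("I feel I'm struggling with the project, how am I doing?"))
```

The design choice worth copying isn't the keyword list; it's that uncertainty defaults to escalation rather than to a guess.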
AI as a Tool Requires a Problem First
One of Íñigo's key teaching principles: don't start with the technology, start with the problem.
"Technology is just a mean. If you don't bring technology to solve a specific task or to solve a problem, a challenge that someone has, if I just show them, hey, this is how you create an AI assistant, nothing will ring a bell at first."
Instead, he asks students to identify something they actually care about: finding an internship, learning to cook, preparing for difficult conversations. Then he shows them how AI can help with that specific challenge.
"Once they have that experience that is based on a process they value, is when they will be motivated and curious enough to build their own assistance for other things that I didn't even think about and that are valuable to them."
This is the antithesis of the "AI for AI's sake" approach that dominates so much of the current discourse. If you can't articulate the specific problem you're solving, you're not ready to build the solution.
The Physical World Is Next (And We're Not Ready)
While most people still think of AI as something that lives in screens and apps, Íñigo offers a sobering reminder:
"If you go to tech events or even certain innovative cities, regions of the world, you will see [robots] and find them doing some tasks... Once we have given access to the AI models to record and be part of our surroundings, we are essentially teaching them how our world looks like."
This isn't science fiction. It's happening now. AI isn't just learning from digital data anymore, it's learning from physical reality. And once that genie is out of the bottle?
"We better hope that whatever we are building doesn't go through that checklist of what are the worst things that this robot could do in my office, for example."
The Cost of "Smarter"
Íñigo highlights an often-overlooked reality of AI development: most AI-driven startups aren't profitable. Why?
"The more you use their applications, the more resources they spend... The kind of consumption you have when dealing with scalability in AI, it's massive."
This has implications beyond business models. It affects:
Environmental impact: Massive energy consumption and GPU requirements
Accuracy: Broader models trained on everything are less precise than specialized models
Sustainability: The current approach isn't economically or environmentally sustainable long-term
The solution Íñigo advocates? "Smaller models and very trained models that deliver assistance or agents that are very prepared for one specific task."
It's better for performance, better for energy usage, better for accuracy, and better for return on investment. But it requires us to resist the temptation to build AI that does everything and instead focus on AI that does one thing exceptionally well.
Will Humans Become the Luxury?
Íñigo poses a thought-provoking question about our AI-saturated future:
"Maybe it will be luxurious and special to have humans in customer service in the near future, because we will be so bored of dealing with all these digital channels."
Think about the last time you ordered fast food from a touchscreen versus a human cashier who smiled and asked how your day was going. Which felt better?
"Sometimes I feel it feels better, right? When we speak to a human and say, good morning, I have this coffee? Give you a smile and have a great day. Probably the robot will also do that, but will it feel the same for us?"
This isn't just about nostalgia. It's about recognizing what makes us human and what we might lose if we automate everything that can be automated without asking if it should be.
The Checklist You Need Before You Build 👇
Based on everything Íñigo shares, here's a framework for anyone building with AI:
Before You Start:
What specific problem am I solving? (If you can't articulate this clearly, stop.)
What are the worst possible outcomes? (Make your list comprehensive and honest.)
What safeguards will prevent those outcomes? (Build these before you build functionality.)
As You Build:
How explainable and transparent is my AI? (Black boxes are risky.)
What's training this AI? (Understand what knowledge it's drawing from.)
Where's the human in the loop? (Some decisions need human judgment.)
Before You Launch:
Have I tested for unintended uses? (People will use your AI in ways you never imagined.)
Am I prepared to take full responsibility? (Your name on it = your responsibility.)
Does this amplify human capability or replace human connection? (Choose amplification.)
After You Launch:
How am I monitoring actual usage? (Anonymous chat logs reveal the truth.)
When should this AI defer to a human? (Build in humility and escalation.)
What skills might atrophy if people rely on this? (Be honest about trade-offs.)
The Bottom Line
As Íñigo reminds us at the end of our conversation:
"Technology will be as good as the people who built it. So that's what we have to keep in mind."
The AI revolution isn't just about technological capability. It's about human responsibility, intentionality, and wisdom. We've been given unprecedented power to build systems that can amplify our expertise, extend our reach, and solve problems at scale.
But that power comes with a price: we must step up. We must ask harder questions. We must design systems that enhance rather than diminish what makes us human.
"We are gonna be working along with digital labor. It's already going on many times without us even noticing. And now it's a matter of being more human in understanding what are the implications, what are the benefits, and especially what are the things that we want to be leading without intervention."
The future of AI isn't about choosing between humans or machines. It's about designing partnerships where each does what it does best. Where AI handles the repetitive and scalable, and humans bring judgment, creativity, empathy, and ethical consideration.
The question isn't whether AI will reshape how we work. It's whether we'll reshape it intentionally or let it reshape us by default.
What's your answer?
Want to explore these ideas further? Watch the full conversation between Maaria Tiensivu and Íñigo Cavestany Villegas on the Purpose Driven AI podcast.
