As Madalina Buzdugan, a sustainability strategist working at the intersection of sustainability, strategy, and AI, recently shared in our podcast:
"For a lot of my peers right now, there are some question marks. You know, like, what, how do we wanna use it? Do we wanna use it? Does it contribute to our sustainability goals? Does it not?"
If that resonates with you, you're not alone.
While some of us are racing to adopt the latest AI tools to stay competitive, others are quietly wrestling with a more fundamental question: What's the actual cost of all this innovation? And more importantly, what can we do about it without falling behind?
Madalina has spent the last year and a half deeply researching how AI intersects with sustainability. From setting up sustainability frameworks in EV charging companies to understanding the energy footprint of our daily ChatGPT queries, she's done the homework so we don't have to (though we probably should).
Her insights reveal a landscape that's more nuanced than the usual "AI is destroying the planet" vs. "AI will save the planet" binary. And if you're someone who uses AI in your work, builds AI products, or simply wants to make more informed choices, this conversation offers a roadmap for navigating the complexity without the guilt trip.
The Numbers That Should Wake Us Up
Let's start with some context about sustainability itself. When Madalina explains sustainability to her parents (who still think she studied economics), she keeps it simple:
"How can we use at the very least as many resources from the environment as we put back out there."
In an ideal world, we'd actually regenerate more than we take. But here's the scary reality: On a global level, only 6 to 9% of our waste actually gets recycled.
Read that again. Not 60%. Not even 16%. Six to nine percent.
We're operating in what's called a "linear economy" AKA extract, consume, dispose, repeat. And while movements toward circular economy and regenerative practices are growing, the urgency is real. We don't have unlimited time to fix the planetary boundaries we've already crossed.
So what does this have to do with AI?
Everything. Because AI is following the same pattern at breakneck speed, and most of us have no idea.
The Two Sides of AI Sustainability (That Most People Mix Up)
Before we go further, Madalina makes a crucial distinction that changes everything:
Sustainable AI = How we build and use AI ethically and efficiently
This is about the foundations: energy efficiency, data protection, governance frameworks, and ensuring we're not just building AI because we can, but because we should.
Think of it through an ESG lens:
Environmental: How much energy does this AI use? Can we optimize it?
Social: Are we protecting people's data? Is this AI built to serve people, not just generate outputs?
Governance: Do we have ethics controls, testing, validation, and standards in place?
AI for Sustainability = Using AI as a tool to achieve climate and social goals
This is about application: deploying AI to solve actual environmental and social problems.
Madalina shares a beautiful example: In the Amazon Forest, an organization programmed brick phones with AI to capture the sound of illegal logging in real-time, then alert local rangers. That's AI directly tackling an environmental crisis.
Another example: In drought-affected parts of India, AI-powered weather prediction helps farmers optimize crop rotation and irrigation, enabling communities to feed themselves more effectively.
One is about building AI responsibly. The other is about using AI purposefully.
Most discussions conflate these two, which is why the conversation gets muddy. Understanding the difference? That's your first step toward making better choices.
What You Can Actually Do (Starting Today)
Here's where most sustainability conversations lose people: they feel overwhelming, abstract, and often preachy. Madalina takes a different approach. She's clear from the start:
"The responsibility is less on the individual than it is on the big players on the AI market."
Your awareness is already at work just by reading this. But that doesn't mean your choices don't matter. Here's how to think about it:
1. Choose Smaller Models When Possible
Big AI models have billions of parameters your query travels through. Most of us don't need billions. Most of us will get perfectly good answers with millions.
Think of it like switching from incandescent bulbs to LED lights. Same result, way less energy.
The practical alternatives Madalina recommends, such as Green GPT, aren't as well-known as ChatGPT, but that's kind of the point.
2. Get Smarter With Your Prompts
Can you batch your requests? Can you be more specific so you don't need 10 back-and-forth queries that each consume computational power?
As Madalina puts it: "Can you spend a bit of extra time educating yourself on how to prompt efficiently so you don't have to send 10 requests in a row and get this beautiful ChatGPT essay for every single question that you have?"
Prompt engineering isn't just a hot job skill anymore; it's an environmental consideration.
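The batching idea can be sketched as a tiny helper that folds several related questions into one specific request instead of many back-and-forth queries. This is just an illustration of the habit, not a tool from the episode; the function name and prompt template are my own:

```python
def batch_prompt(topic, questions, context=""):
    """Fold several related questions into a single, well-scoped
    request, instead of sending each one as a separate query."""
    lines = [f"Topic: {topic}."]
    if context:
        lines.append(f"Context: {context}")
    lines.append("Please answer all of the following in one response:")
    lines += [f"{i}. {q}" for i, q in enumerate(questions, 1)]
    return "\n".join(lines)

prompt = batch_prompt(
    "sustainable AI",
    ["What is a small language model?",
     "When is a small model good enough?",
     "How do I measure my own usage?"],
    context="I'm evaluating AI tools for a small team.",
)
# One request now carries three questions instead of three round trips.
```

The payoff is the same as being specific up front: fewer queries, less compute, and answers that arrive with shared context.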
3. Track Your AI Consumption
Madalina compares this to individual carbon footprints: "It's information that is useful to have." Knowing your impact doesn't make you responsible for fixing the climate crisis, but it helps you make informed decisions.
Easy tool: AI Impact Tracker is a browser extension that shows your daily and lifetime energy consumption across major generative AI platforms. At the end of 2025, you could see: "I've used this much energy, equivalent to X car rides or Y phone charges."
It grounds your choices in reality.
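A back-of-the-envelope version of this kind of tracking fits in a few lines of Python. The per-query and per-charge energy figures below are placeholder assumptions for illustration only; real values vary widely by model, hardware, and data centre:

```python
# Rough per-unit figures. These are illustrative assumptions, not
# measurements; real values vary by model, hardware, and data centre.
WH_PER_QUERY = 3.0          # assumed energy per chat query, watt-hours
WH_PER_PHONE_CHARGE = 15.0  # assumed energy for one full phone charge

def usage_summary(queries_per_day, days):
    """Estimate total energy for a period of AI usage and express it
    in everyday terms, similar to what a browser tracker reports."""
    total_wh = queries_per_day * days * WH_PER_QUERY
    return {
        "total_wh": total_wh,
        "phone_charges": total_wh / WH_PER_PHONE_CHARGE,
    }

summary = usage_summary(queries_per_day=20, days=365)
# A year at 20 queries/day, under these assumptions, is the energy
# of roughly 1,460 phone charges.
```

The point isn't precision; it's exactly the grounding Madalina describes: turning invisible consumption into a number you can react to.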
4. Ask Vendors About Their Footprint
When you're in a demo with a new AI vendor, ask: "What's your carbon footprint? Do you measure it? Can I get that information?"
Madalina advocates for transparency the same way we have nutrition labels on food: "Why wouldn't we label our AI usage?"
She's even started disclosing her own AI usage at the end of LinkedIn posts. People ask, "Aren't you afraid they'll think less of you?" Her response? "If we take responsibility over our own actions, that also means we take responsibility over our own consumption."
The Travel Comparison That Actually Makes Sense
Remember how 10 years ago, jumping on a €20 flight across Europe wasn't a big deal? Now many of us think twice. Some refuse to fly. Others travel more intentionally.
Madalina sees AI following a similar trajectory: "Sharing the knowledge about sustainable AI usage, even on a peer-to-peer level, can really shift the conversation."
It's not about moral superiority. It's about awareness shifting behavior over time. Just like flying went from purely aspirational to something we consider more carefully (at least in some circles), AI usage is likely heading in the same direction.
The question is: do you want to be ahead of that curve or behind it?
Why Size Actually Matters (In Models)
Let's get specific about why smaller models are better:
Big models (like what OpenAI offers) send your query through billions of parameters. The computational power required is massive, which means massive energy consumption, massive GPU requirements, and often massive environmental impact.
Smaller models are:
More energy-efficient (fewer parameters = less processing)
Often open-source (better transparency)
Less likely to use your data to train future models (ethical win)
More specialized (better accuracy for specific tasks)
As Madalina explains: "Most of us for the searches and needs that we have, we don't need that many checks to happen in the bigger architecture."
It's like using a sledgehammer to hang a picture frame. Sure, it works. But a regular hammer would do the job with far less collateral damage.
When NOT to Use AI (Yes, Really)
This might be the most important section.
Madalina encourages us to build a decision framework: When do you actually NEED AI, and when is it just habit or hype?
Consider NOT using AI for:
Text formatting
Basic calculations
Calendar organization
File organization
Simple automations that don't require AI input
Consider using AI when:
The task is genuinely complex and creative
You need to connect multiple tools and knowledge sources
You've tried simpler solutions and they failed
It's a technical problem beyond other available solutions
As Madalina wisely notes: "If it's something that's complex and creative and you really need to connect a variety of tools together, all right. Set it up, see if it gives you what it needs to give you. But if you just need to do text formatting... maybe that's just an automation that you can set up. Maybe you don't need AI input there."
The goal: Intentionality over automation.
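The two lists above can be folded into a rough rule of thumb. This is my own sketch of the decision framework, not a tool Madalina describes; the task categories and flags are assumptions:

```python
def needs_ai(task):
    """Rough decision rule: reach for AI only when the task is
    genuinely complex and creative, spans multiple tools or sources,
    or has already resisted simpler solutions."""
    simple_tasks = {
        "text formatting", "basic calculations",
        "calendar organization", "file organization",
    }
    if task["name"].lower() in simple_tasks:
        return False  # a plain automation will do
    return (task.get("complex_and_creative", False)
            or task.get("connects_multiple_sources", False)
            or task.get("simpler_solutions_failed", False))
```

Even as pseudologic, writing the rule down forces the question the framework is really about: is this habit, or is this need?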
The Prompt Paradox (Long vs. Short)
When asked whether it's better to use one long, detailed prompt or multiple shorter prompts, Madalina gives an answer you won't like but need to hear:
"It depends."
It depends on the model. It depends on the mode (research, deep thinking, etc.). It depends on what you're asking.
But here's her general guidance:
Start by changing your first behavior: Don't shoot three-word queries into the AI void expecting magic. Instead, build a rule for yourself: "Every query has to have at least five sentences."
Define what you want clearly. Give context. Be specific.
Then track your consumption for a while using tools like AI Impact Tracker or Code Carbon (for developers). See if being more thoughtful upfront actually lowers your overall energy use.
Madalina is clear-eyed about the challenge: "When we use a variety of tools (Lovable for vibe coding, Gamma for slides, something to transcribe meetings, maybe an agent for LinkedIn content), it gets tricky to add up that impact."
Which brings us to...
The Communities Doing This Work
If tracking all this feels overwhelming, you're not alone. That's why Madalina recommends joining organizations actually advocating for sustainable AI:
Check out AI Energy Score (a dashboard by Hugging Face showing energy ratings for models)
Look for local Sustainable AI Collective groups or NGOs and alliances working on responsible AI in your region
These communities are asking the hard questions: "What is the impact? How are we measuring it?" They're building frameworks, certifications, and standards so individual users don't have to become experts.
As Madalina says: "As long as we bring the conversation back to the table... then we stay anchored in the conversation that matters."
What's Coming (And Why You Should Care)
We're entering a new regulatory era. Europe has the EU AI Act. The US is working on similar frameworks. With regulation comes certification.
Madalina predicts: "Some certifications that have to do with sustainability for AI companies and products will rise up. That could be another thing to start looking for: solutions and products that have some sort of sustainability certification."
In other words, the "organic label" of AI is coming. Companies with ESG frameworks baked into their AI foundations will differentiate themselves. Those without will face increasing scrutiny.
If you're building AI products, this matters now. If you're choosing AI tools, it will matter very soon.
The Individual vs. System Trap (And How to Escape It)
One of the hardest things about sustainability conversations is the tension between individual action and systemic change.
Madalina is refreshingly pragmatic: "The biggest polluters in the world are companies and corporations and governments."
Your individual carbon footprint matters less than what happens at scale. But that doesn't mean your choices are meaningless.
Remember the flying example? Cultural shifts happen peer-to-peer. When you make different choices and talk about why, you influence your circle. They influence theirs. Norms shift.
It's not either/or. It's both/and.
You make the best choices you can with the information and resources available to you. And you advocate for systemic change. You don't have to choose one.
The Disclosure Practice Worth Copying
Madalina does something most people are afraid to: she discloses her AI usage publicly.
At the end of LinkedIn posts where she's used AI, she adds a simple note: "Created with Green GPT" or "Used ChatGPT for this."
The response? Some people ask if she's worried about judgment. Her answer is perfect:
"If we take responsibility over our own actions, that also means we take responsibility over our own consumption. We have foods that we put nutrition labels on. Why wouldn't we label our AI usage? In the name of transparency, which is also one of the principles of responsible AI."
This simple practice:
Models accountability
Normalizes transparency
Invites conversation
Shifts expectations
Imagine if everyone building with AI had to disclose: "This was created using [X model] which consumes [Y energy]."
The conversation would change overnight.
The Question That Should Guide Everything
If you take one thing from this entire article, let it be Madalina's closing question:
"I want AI to do my chores and my laundry and my dishes, so that I can do the work that I care about: my art and my friendships and my hobbies. Not the other way around. I don't want AI to do my work and the things that I'm passionate about so that I can do more chores."
This crystallizes everything.
If you're building AI just because that's "the direction right now," Madalina asks you to add intention behind the build.
If your intention is triple-digit growth year over year, fine. Own that choice. Go fast, go hard, accept the impact.
But for many of us, that's not actually what we want to build. And it's important to know there are communities and spaces where AI is a tool we use to support people, restore ecosystems, and create a little bit of extra safe space for those around us.
As Madalina beautifully puts it: "That is also an option. That is also a way to have impact so that we can do more art and spend more time with people and allow ourselves to live life a little bit more."
Your Sustainable AI Checklist
Based on everything Madalina shares, here's your practical framework:
Before You Use AI:
Do I actually need AI for this task, or is it just habit?
Can I solve this with a simpler automation or tool?
What's the complexity level? (Simple tasks rarely need big models)
When Choosing AI Tools:
What model size am I using? (Can I go smaller?)
Is this model powered by renewable energy?
Does this tool have built-in sustainability features or certifications?
What's the vendor's carbon footprint? (Ask them directly)
As You Build Your Prompts:
Am I being specific enough to avoid multiple rounds of queries?
Can I batch related questions into one request?
Am I tracking my consumption to understand my actual impact?
After Implementation:
Can I disclose my AI usage transparently?
Am I sharing knowledge about sustainable AI with my peers?
Have I joined communities working on these problems at scale?
Ongoing:
What am I offloading to AI, and am I okay if that skill atrophies?
Is this AI helping me do work I care about, or replacing work I care about?
The Bottom Line
Sustainable AI isn't about perfection. It's about intention.
As Madalina reminds us: "If we're simply building with AI because that's the direction right now, I would simply ask people to add the intention behind the build."
The AI revolution is here. The question isn't whether to engage with it; that ship has sailed. The question is HOW we engage with it.
With awareness? With transparency? With consideration for its actual costs and impacts? With intentionality about what we're trying to achieve?
Or do we just keep shooting one-word queries into the void, hoping for magic, while the computational carbon piles up in ways we never see?
The choice is ours. The tools are available. The communities are forming. The frameworks are being built.
But it starts with each of us asking: What am I using AI for, and is it actually worth what it costs, not just in money, but in energy, in resources, in the kind of world I want to help create?
As Madalina says: "There's a lot of communities and a lot of spaces where AI is the tool we use to build solutions, to support people, to restore ecosystems, to maybe create a little bit of an extra safe space for people around us. That is also an option."
It's not just an option. For many of us, it might be the only option that actually makes sense.
Want to dive deeper? Watch the full conversation on the Purpose Driven AI podcast, or connect with Madalina Buzdugan on LinkedIn.