Podcast

Episode 68 - A dive into every GenAI rabbit hole with Wilco Burggraaf and Mark Butcher

November 11, 2025 - 5 minute read
GreenIO Blog - Episode 68 - A dive into every GenAI rabbit hole with Wilco Burggraaf and Mark Butcher
Gaël Duez, resident Green IO podcast host, invites two prominent thought leaders to debate the hot topic of GenAI and sustainability. In episode 68, Mark Butcher, an expert on green ops, is joined by Wilco Burggraaf, a green software champion, putting to the test Green IO's aim of creating a safe, diplomatic space where responsible technologists can debate, question, argue, agree to disagree - but always do so with respect.

Wrap Up Article

GenAI - a powerful tool in the right hands

For Wilco, GenAI has brought tangible benefits, particularly in helping to structure individual thoughts and knowledge, leading to clearer, more precise written text. In Mark's own words, GenAI has brought 'pain, worry and revenue'. GenAI is seen as a double-edged sword: the technology itself is not inherently bad or villainous, though the application of AI by humans is more than questionable. There are obvious environmental impacts too, through increased resource use and pollution, yet there are wider impacts on the very fabric of our societies.


Adoption of AI - in whose interest?

Beyond environmental effects, there is a growing worry that algorithms are infiltrating how we think, and therefore impacting our own agency. Another example of AI's wider impacts is its knock-on effect on job opportunities for young graduates. As tasks that were carried out by humans are now being delivered by AI, skills and knowledge bases are lost, or not even formed in the first place, as entry-level jobs succumb to machines. This is in strong contrast to those who have already gained significant experience, knowledge and skills, for whom using AI tools can lead to self-improvement, not just a job done well (or not, as the case may be). Declining graduate recruitment also has ramifications for the number of people contributing to taxation systems, leading to less support for ageing populations, particularly in Western democratic societies.

Leaning heavily on AI tools certainly improves the revenues of hyper-scalers and those in the data center industry, but, some very good use cases of AI / GenAI aside, is humanity actually profiting from this technology? Or, as Wilco points out, is the small, but very wealthy, Big Tech elite monetizing knowledge and productivity through GenAI? We must ask ourselves who will have access to these tools in the future, and whether tech giants are forging a wealth extraction path to serve their own interests.

Environmental footprint of GenAI

Obviously, alongside minerals and water, energy consumption is a hot topic, and while in recent years there has been a shift where certain hyper-scalers are becoming energy producers, most of the energy requirements are outsourced. This leads to difficulties in obtaining and measuring the right kind of data to produce reliable and meaningful metrics to help operators increase efficiency and deliver impact. Grid networks need to be stabilized too, and offsetting is still an important part of the reporting equation - procuring 'green electricity' doesn't mean that the energy actually used in-situ is directly generated from renewables. The rapid uptake of GenAI is also leading to a spate of data center construction projects. As use increases, capacity has to match (or perhaps it is the other way round?). That said, current data released by hyper-scalers suggests that water and power consumption is perhaps not quite as high as we might think.

Big Tech & statistics

Once again the question of transparency and metrics comes up, and recent reports on AI and energy efficiency / sustainability (Google Gemini, Mistral, Oracle, Alibaba etc.) show that, from an operational perspective, the methodologies used are a step in the right direction and useful for building a reporting framework. Still, there is a feeling that the numbers fed into it are not the right ones. Careful reading of reports is the watchword, as much is omitted, for example training phases and related scope 3 emissions. Essentially, companies are choosing the data to spin a specific story, and as Mark puts it bluntly, they have 'manipulated the data and it's what they've willfully excluded from the data is what causes the problems'.

Data center capacity & regulations

Legislation has a part to play in accounting / reporting, and also on local planning laws. For example, data centers in the UK are designated as ‘critical national infrastructure’ which often means local planning regulations don’t apply. Local communities have little room for maneuver in objecting to plans or in imposing stricter criteria on any proposed developments. Governments like the UK are also locking themselves into long term deals with Amazon or Microsoft, instead of perhaps encouraging home grown service providers.

The power per rack has also increased, and so certain data centers are being underused as they do not meet current standards and demands, whereas others are at maximum capacity. So there is still a conundrum to be solved for businesses whether outsourcing compute is the best option, or whether it should really be in-house. Different factors come into play, including sovereignty, cybersecurity, sustainability, plus CapEx vs OpEx and, of course, the resulting balance sheet at each year end. 

Steps in the right direction

Sharing data, working together and being transparent are critical to providing robust methodologies to track the environmental impacts of AI/GenAI, and to ensuring sustainability is embedded into all business decisions. This means revising current strategy, governance, and even best practices. Mark is a firm believer in using risk to drive progress in large enterprises. The fact that many companies are currently resetting net zero targets means there is now a certain degree of understanding on climate change, sustainability and reporting on impact. Greenwashing is in the spotlight, and companies are very aware of the potential damage to their bank accounts and reputation if they don't walk the walk. The upcoming generation now entering the workplace has a different set of ethics and values too, which is beginning to drive change. We just need to ensure that all sectors are on-boarding AI/GenAI for the right reasons, and that users actually master such a powerful tool for the good of business, society and the planet.

Full Transcript

Gaël Duez: 

Hello everyone, welcome to Green IO! I'm Gaël Duez and in this podcast, we empower responsible technologists to build a greener digital world, one bite at a time. Twice a month on a Tuesday, guests from across the globe share insights, tools and alternative approaches, enabling people within the tech sector and beyond to boost digital sustainability.


Today is a first on the Green IO podcast. A first which I have been reluctant to host for the past three years. And this first is a debate. Now, don't get me wrong. I welcome the scientific discussion. Actually, I open every Green IO conference with a reminder that these events have been built on three pillars, which are respect people, respect planet earth and respect science. And I'm proud to have created via these conferences places which I call a diplomatic safe place, where every responsible technologist can debate, question, argue, but always respectfully. A place where we can safely agree to disagree. But I wasn't comfortable doing it on this podcast. Maybe because I'm not confident enough in my self-taught journalist skills. Maybe because our social feeds are plagued with false controversy and cheap punchlines created for the sole purpose of clickbait. Maybe because I simply didn't have an opportunity to have two guests whom I would trust to have such a healthy debate. And this time has come. Thanks to Generative AI. I created these two guests via a series of prompts and optimized them for my audience. No, I'm joking. This time has come because of GenAI and its environmental footprint. Over the last two months, several studies were published about it, including one by Mistral AI and another, quite famous now, by Google about a median Gemini request. And two of the people whom I admire the most in the green IT field started to debate the opportunity and the conclusions of the Gemini prompt study. These thought leaders are Mark Butcher, my favorite old man yelling at the cloud, as he described himself in a famous talk, who is a renowned English expert on green ops, relentlessly tracking the other costs of the main hyperscaler builds for his clients and helping them to reduce their carbon emissions as well, but with numbers that he has calculated for them. And God knows he loves doing this math.
And the second thought leader is Wilco Burggraaf, a Dutch green software champion, who is also a serial writer on the energy consumption of our code and a keen expert on, well, pretty much everything related to computing, as he loves to embrace code at very low level, sometimes in assembly language, and scale infrastructure at high level. So welcome both of you. It's an honor to have you both on the show, and thanks for trusting me to host this discussion between you two. It means a lot to me.


Wilco Burggraaf: 

Yeah, great to be here.


Mark Butcher: 

Looking forward to it. Thank you for having me.


Gaël Duez: 

You're welcome. One of my favorite podcast hosts, Julien Devorex, who launched the podcast Sysmic, loves to kickstart his long discussions with his guests with an interesting question: could you tell me about the glasses you wear to see the world? A nice way to remind his audience that we always speak from somewhere, and that our understanding of a situation is rooted in our own history and belief system.


And for once I'm going to borrow his approach to help our listeners to better understand where you folks speak from when it comes to GenAI. So here's my first question. What has GenAI brought to your world so far?


Wilco Burggraaf: 

So for me, of course, I started using GenAI before I knew about green software and green IT, right? And I've always been someone who's very passionate about IT in general, also green IT, but about IT since I was very small. And I'm also open about it: I'm an autistic person, neurodivergent. And for me, the issue always was that my brain thinks abstractly way quicker than how I write or speak. So from the moment that I started using GenAI and kind of dumped all the thoughts I had in my head right into this prompt, and there came more structure out of it, I was like, hey, wait, this thing is distilling the things that I'm thinking, and I can also then use it to structure how I think, and how critically I think.


That, how do you say it? It almost, yeah, relaunched the way I did my job. And I kind of already had, I think, a career where I did some really crazy stuff, from building, as a junior developer, an application that eventually calculated the real estate value of 1 million of the 8 million objects in the Netherlands, to developing a complete new programming language for .NET, XSharp.


But now I could also do way more stuff than I already could. So for me, it was really the accelerator, where I think that it's now often over-promised, by the way, to be honest, the extent of it and that kind of stuff. But for myself, I really had that experience. And this really also helped me in writing, because I know what the topic is, I know all the details, but I sometimes have problems with the structure of how I set up certain paragraphs or certain sentences. And if I completely wrote them myself and then read them back, traditionally it didn't read that well. So everything I write, by the way, is me. I just let AI help me a little bit in structuring and how I set it up. And this is also the reason why I think I can create so much content, and I hope also that it's very useful content, because that is for me the biggest reason why I do it. But yeah, that is, I think, a high-level overview of this.


Gaël Duez: 

Thanks a lot for sharing, because that's not always easy to state. And yeah, it's exactly what my question was about: what GenAI has influenced, has created in your world. And well, that's pretty big, what you've just said. So thanks a lot for this. Mark, what about you?


Wilco Burggraaf: 

Thank you.


Mark Butcher: 

I think you can probably guess the route I'm going to go down. So what it's really brought for me is pain, worry and revenue.


Mark Butcher: 

Yeah, so there's a small chance that I'm a pessimist in life. And it's the way that AI is being applied. Whether you call it AI, machine learning, whatever you call it, it's the way that human nature is using this technology. It's pretty nefarious, because there are some awesome good use cases for it. The technology is absolutely not the problem. It's always humans who are the problem. And when I say the worry side of things, I try and take a wider viewpoint rather than just how this is burning energy and how people are using it, and how suddenly large companies who pretend they have no money have suddenly got billions of dollars to throw at this. It's amazing how they can't pay their staff well, but they can still find billions of dollars to throw into a new technology that's unproven. But for me, the worry actually is the damage we're doing to ourselves as people, and also the damage we're doing to the next couple of generations. So what I see with AI at the moment is it's not really about AI cutting jobs as such. It's more about AI limiting the availability of jobs. I'm already seeing this if you look into spaces such as the graduate recruitment space, where our kids are now really struggling to find jobs because entry-level jobs, new jobs, are disappearing. So it's not that large organizations are wholesale cutting hundreds of thousands of people, because they're provably not. But what they are doing is slowing their net new recruitment. So whereas a graduate recruitment scheme might have been taking on 10,000 graduates in a large enterprise, now it's only taking on 50 or 100. So we're effectively throwing an entire generation under the bus. And we're expecting them to go and find fictitious new roles in sectors that don't yet exist. Maybe those sectors will arrive in one or two generations' time. But at the moment, they don't.


So what are these people going to do? They've gone to university, they've been educated, they spent a lot of money on it, and now suddenly there's nothing at the end for them. There is no upside. And that's why we're seeing a generation who've changed their thinking about loyalty with employers, because they now know there is no loyalty. They know that they can be cast aside at a moment's notice if a machine application or service comes along that does what they do one cent cheaper per hour.


For me, and you've probably heard me say this quite a lot, I see it as being effectively, it's a wealth extraction exercise. It's moving more of the revenue away from the people who should be earning it. It's moving it into the hands of the 0.01 % who really don't need it, because they don't need another $10 billion, another $20 billion. It doesn't change anything in their lives. So whilst I always talk about the whole environmental impact, it's the wider system impact that worries me.


And that's before you get onto, you know, when Wilco was talking there about using AI to structure writing. He's one of the people who I would trust to use it, because what he's not doing is AI-generated slop. He's taking well thought out arguments and just using it to structure them in a way that makes sense for an English-speaking audience, which is valuable when you're not a native English speaker. But yeah, if you look at LinkedIn, you see every day now,


Wilco Burggraaf: 

Yeah.


Mark Butcher: 

The amount of garbage, nonsense and crap that's out there, and you can smell it straight away. The moment I see a post, I can almost smell that cadence. You just see it in the post and you go, come on, big guy, have you not written this yourself? Have you not got a single opinion yourself? And for me, the worry is that, again, going back to the slightly nefarious nature of the big tech sector, we're letting an unknown algorithm control our thoughts, views and beliefs.


And gradually it's shaping how we think. It's giving us facts that don't really exist. Just look at the recent example of the infamous Deloitte report in Australia, where it cited completely made up references, made up documents, made up case studies. And no one at Deloitte really seems to think that's a major problem. They actually think that's okay to be doing. And it's shocking. So my real biggest problem is humans. By our very nature, we're a little lazy, and we're letting a machine now think for us; in increasing numbers, we are handing over our thoughts to a machine when we don't know who's behind it.


Wilco Burggraaf: 

Yeah, I could add something to that, because I really believe that if you would, how do you say it, summarize it in the smallest way, almost everything around AI is like get rich quick, almost literally a scheme: do something very quickly without actually doing the work. And I think this is also creating inequality in a lot of ways. If you now go to school, do it like how it has always been done, and then use GenAI, you're getting the answers without the mental challenge, without doing the work. And also, by expecting everybody to use GenAI from the first second when they start in the workforce, you're not actually learning the basics. You can use GenAI for that, but I think a lot gets skipped. And because of that, there is a very big gap emerging between people who already had a lot of skills and can then use GenAI to become better, and a lot of people who just use GenAI but do not improve themselves. And that inequality, in a lot of ways, is I think very bad. And the other thing that you mentioned, Mark, also goes to the business cases. I think even one of the biggest ones is using GenAI for code generation.


The whole setup now is that it simulates a developer, while we know, because there have been 20 or 30 years of code generation tooling and models, that we could already do this before GenAI was here. And the whole thing now is that how we get it delivered is in a way where especially more money is made with it, because you need a lot of hardware for the coder simulation, and it costs a lot of, how do you say it, services. There are way more efficient ways to do it, almost more low-code kinds of solutions, but that is not profitable for the people who are now delivering it. So in a lot of ways, I'm agreeing with you on that, I think.


Gaël Duez: 

What is interesting is that your points of view, to keep with the metaphor of the glasses, are from very different perspectives. Because Wilco, you really started from your own perspective, I would say a micro perspective. Like, you know, for me, it has helped a lot because the tools are powerful if used in a smart way. And Mark, you came from a very macro perspective. Like, if we don't pay attention, I'm really scared, because we are sacrificing one or two generations for the sake of more profits, rather than: hey, we found an interesting tool, let's think a bit before scaling it about how we could leverage it for the greater good.


Mark Butcher: 

Yeah, I'd also step it out even one step further, which is an entirely society-level, selfish kind of point of view, which is: we need a tax base, right, to pay for all the things in society. Particularly in the West, we are a rapidly aging society, so we need more tax revenues to pay for the things that are needed to support society. And the problem we have is this whole wealth extraction path. So let's assume that an organization massively adopts AI and hugely reduces the overall headcount over time.


That means there's money not going to the taxpayers. Those taxpayers are the ones who actually contribute a huge amount of the tax base. Because we all know the 0.01% who this is actually going to do not pay the same marginal rate of tax. They pay a large amount of tax, but that's disguised by the fact that they're literally worth billions, or nearly trillions in some cases now. Their marginal rate of tax isn't the rate that most normal people pay. It's normally in the region of three to five percent.


So all we're doing is hugely reducing the future tax take in nation states. And that means: how are we now going to pay to support society? And we hear this nirvana being spoken about all the time, which is a universal basic income. The argument that gets thrown at me every single time is that it doesn't matter, because AI is going to enable us to have this lovely way of living in the future. We're going to have time to sit around in the fields and write poems, because we will have a universal basic income, because the billionaires will gift us that. No, they won't.


That's never going to happen. It's never, ever going to happen, because if it was going to happen, they would already be doing it. If it was possible and they were willing to do it, it would already be here. We know that we could already be solving world hunger if we wanted to.


Wilco Burggraaf: 

Yeah, I would even make it worse. They are now kind of monetizing knowledge and productivity through GenAI. It's something that a lot of people, I think, don't see coming yet, because a lot of people think GenAI is free. Of course, you are then the product. A lot of people think the free versions of GenAI are good enough now, but at some point, they're going to eventually decide what you have access to and what not.


Mark Butcher: 

Yes.


Wilco Burggraaf: 

We thought Google already did this, but GenAI will be next level in this.


Gaël Duez: 

Sure. We can also see some similarity with the discovery of the Internet technology, with a lot of mistakes being made and safety not being applied. I mean, think about cybersecurity: now it's pretty obvious for everyone, but it was like an open bar 20 years ago. And this is the same approach: we discovered a technology, we scaled it, and then suddenly we realized, oops, we made some mistakes. But I got you, Mark, with the money transfer and then the wealth transfer and all the issues. I think the scale makes it even scarier. Now it could also be argued that with a few laws or regulations or open source alternatives, things could go in a much nicer direction, I would say, but that's beyond the scope of this podcast. So, okay, I got both your points of view. I think we could debate these high-level stakes, and they're real, and thanks both of you for sharing them. Now, for many technologies, for many people working with tech or in tech, the first approach to...


Wilco Burggraaf: 

Yeah.


Gaël Duez: 

What is a responsible use of AI comes around its environmental footprint, before, Mark, as you say, the wealth transfer. So I'd like us to focus on the environmental footprint for a moment, and especially on the discussion you had when the Gemini report went out. Because over the summer and the back to school period, we've seen Google's report about the Gemini prompt. We also have the Mistral AI LCA. And a bit more recently, this very interesting research paper by Ms. Falk and quite a lot of other contributors on the LCA of an Nvidia chip. And I think it was very interesting to see how much debate it created and discussion it sparked. And what I found interesting is that, and this is really why I wanted this first question, for the audience to understand where you speak from, and actually how much you already have in common and how much you value a more sustainable and more respectful world.


But you didn't agree that much about the value of the Gemini report. So maybe once again, Wilco, would you like to start and tell us why you were, I would say, optimistic, or cautiously optimistic, about the fact that they produced this report?


Wilco Burggraaf: 

Yeah, so maybe I should first take it from a high level. I think, by the way, the question we eventually end up at is: why do all these hyperscalers and other companies build so many data centers? That kind of has the biggest impact. And a lot of people, and also companies, have tried to bring that down to almost the usage, right? The prompting, that kind of side.


So eventually I will end up, I think, where I will say: there are reasons why we build so many data centers, and I think that is a bad thing. But let me go into the details a little bit, also some numbers. So I'm not even looking at any of these reports, just at my own session with a GenAI engine. I always say GenAI, the workload of inference, is very predictable. It's between 70 and 80% of the TPU or GPU, so getting as quickly as possible from the first prompt to value. That is almost a good start, but then going to the numbers. From that perspective, if you understand the hardware that is behind it and you're not looking at the reports, just really from a bottom-up perspective, you would take a range between 500 and 1500 watts. That is a ballpark range, a really big range. And if you look at it from that perspective and ask how many watt hours that is per second, then you eventually end up between 0.14 watt hours and 0.42 watt hours, sorry for the numbers. So that is kind of a range. And then if I look at the Netherlands: a few years ago we had, on average, 220 grams of CO2 per kilowatt hour.


So that is around, per second, 0.03 grams to 0.09 grams of CO2. And then you have the water. And I'm the most uncertain about the water perspective. The main reason is that I think the science there is still difficult. Like with data proxies and networking, it is a shared, complex resource we use.


But also there, if I had to do a ballpark number, just on that 500 to 1500 watts, including PUE and everything, it would be between 0.25 milliliters and 0.79 milliliters. So if I then extrapolate that to the following concept: you have like 11 million units sold in 2024 and 2025 by Nvidia, so TPU-related units.


If those units in a certain setup would run for a whole year, say 2025, that would be around 33 terawatt hours. So even if we increase this year to 600 terawatt hours, I believe it's maybe not the numbers that you often hear, that we go to 700 or 1200 terawatt hours in just four or five years. What I'm aiming at is: this is lower than you may be expecting. So what is then the reason that they build so many data centers, especially in the USA? I think the main reason is that AI is often used as an excuse to get licenses and to build data centers, and there are also a lot of traditional default racks with CPUs and other services that they try to sell on top of this.
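[Editor's note: Wilco's back-of-envelope arithmetic above can be sketched in a few lines of Python. This is only an illustrative reconstruction of the assumptions he states (a 500-1500 W draw per accelerator, one second of inference, a Dutch grid factor of 220 gCO2/kWh); the 350 W fleet-average draw used in the annual estimate is our own assumption, chosen to land near the ~33 terawatt hours he quotes, not a figure he gives.]

```python
# Illustrative sketch of the ballpark numbers discussed above.
# Assumptions (not authoritative): 500-1500 W per accelerator during
# inference, NL grid intensity of 220 gCO2/kWh, and an assumed 350 W
# fleet-average draw for the annual fleet estimate.

WATTS_LOW, WATTS_HIGH = 500, 1500   # ballpark accelerator power draw
GRID_G_PER_KWH = 220                # Dutch grid, grams of CO2 per kWh

def wh_per_second(watts: float) -> float:
    """Energy used in one second of inference, in watt-hours."""
    return watts / 3600.0

def co2_grams(watt_hours: float) -> float:
    """Grams of CO2 for a given energy amount on the assumed grid."""
    return watt_hours / 1000.0 * GRID_G_PER_KWH

for w in (WATTS_LOW, WATTS_HIGH):
    e = wh_per_second(w)
    print(f"{w} W -> {e:.2f} Wh/s, {co2_grams(e):.2f} g CO2/s")
# 500 W  -> 0.14 Wh/s, 0.03 g CO2/s
# 1500 W -> 0.42 Wh/s, 0.09 g CO2/s

# Fleet estimate: ~11 million accelerators running a full year.
UNITS = 11_000_000
AVG_WATTS = 350            # assumed average draw (chosen to land near ~33 TWh)
HOURS_PER_YEAR = 8760
twh = UNITS * AVG_WATTS * HOURS_PER_YEAR / 1e12   # Wh -> TWh
print(f"~{twh:.1f} TWh/year")                     # ~33.7 TWh
```

The water figures are deliberately left out of the sketch, since Wilco himself flags them as the most uncertain part of the estimate.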


So now I'm going back. If I go to the core, how I started with the 500 to 1500 watts ballpark, then even Sam Altman's number, although I think it was a little bit too low, is within that frame. But then we get to the following. I agree 100% with what the reports say, that they often do not disclose all data. And we have distrust there, because we have asked for many, many years for more correct data, and I think Mark does this every day.


We need more data. But from my perspective, that would potentially mean that it will be 100% more or something. I don't think it will be ten times more, but that is my perspective. Now I'm going to my perspective on operations, and I think Mark is going to be more on LCA strategy. But my operational take on being more responsible is: if we create a baseline, and we can do this now, how can we improve that baseline? So I'm less focused on the reporting and the life cycle assessment.


But in the reporting and life cycle assessment, you want to be correct. You want to do it right. Especially at scale, and especially because in a lot of marketing and sales there are a lot of figures mentioned that are far off if you do the total aggregation of all the numbers. And that's why I think that all the AI numbers are a very sensitive topic. And especially if you find this topic very important, you want it to improve. You want to have less negative impact.


So this is my overall perspective on this. The last thing I want to say is that the Mistral numbers, I think, were the only ones that, when I did my last calculations based on that 500 to 1500 watts, were four times the amount that I would have expected. So it was a bit higher than Google and what Sam Altman said.


Gaël Duez: 

Just to wrap up what you've just said: basically, if I understand you right, you say that, by an order of magnitude, what Google or Mistral shared with us is good enough to help us create a baseline on the operational side.


Wilco Burggraaf: 

Yeah. On the operational side, the little numbers, for me as someone who wants to improve a certain AI ecosystem, that was a nice framework. But if you look at what they mentioned about the numbers, if you would have to report on them, it was not correct. And there Mark is completely right, from my perspective.


Gaël Duez: 

But from an operational perspective, that's still useful.


Wilco Burggraaf: 

Yes, and that's why I was happy with it while a lot of people I think were less happy with it.


Gaël Duez: 

Okay, so let's talk to the less happy people. Mark, how did you welcome this? I mean, I'm trying to balance things here a bit. Of the four or five, I would say, massive hyperscalers worldwide - Google, Alibaba, Oracle and so on - Google is the only one who at least did a bit of a calculation and shared it with the others. So I guess this is why it was very much welcomed by many people in the community. But then a lot of voices, very respected voices such as yours, said that it could actually drive things in the wrong direction, rather than being just one step in the direction of a more sustainable use of AI. Could you explain maybe why you were so vocal about it?


Mark Butcher: 

There are so many reasons, stepping back on it a little bit. So was it good that they released a report? Yeah, absolutely, yes. But the report was littered with so many glaring errors, mistakes, made up numbers, nonsense. Effectively, it's a PR and marketeer's piece of content. That's all it was intended to be. It's intended to drive revenue growth for their services. Basically, it's like hiding in the open. Because none of the numbers that are in it mean anything, whilst the underlying methodology seems absolutely sound.


The way they did it, no problems whatsoever. It's how they've presented the data, how they've manipulated the data, and what they've willfully excluded from the data that causes the problems. Because you cannot do anything with the numbers they shared. They don't mean anything. There's no totality, there's no breakdown. There's nothing in there that you can use to drive a different decision inside your organization. What it was, was virtue signaling: coming out and saying, here's something, here's something we've tried to do.


And what really frustrates me, particularly about the giant tech companies, is that they sit there and whine about this being really, really hard. They say: you don't understand how difficult it is to do this kind of thing. Yet you seem to find it easy enough to find ways of charging people. You have no problem working out unit costs when it comes to amounts of money; that's sublimely easy for you. If I went to any of the hyperscalers and said, I've got a hundred-billion-dollar market you can have tomorrow morning, but you've got to invest time and money to get it, they would throw everything at that. Yet when it comes to this, they say: oh, we don't have the time and the resource, and it's really too difficult. These are the smartest people on the planet. If they wanted to solve the problem, they could solve it tomorrow. Now, I'm a little bit vocal online about this, so obviously I alienate the hyperscalers sometimes, and one thing I'm certain of is that I'm definitely not getting a job with any of them anytime soon. But what I find is that I get a lot of information from people who work at those hyperscalers. A lot of the stuff I get in the background comes from people inside those organizations who say: I can't share this publicly, but... And some of that "but" stuff is absolutely shocking. Some of the things I hear: they've massively cut sustainability, they've massively cut investment in reporting, they've told us no one can talk about this.


No one is allowed to talk about it, on pain of being fired. It has to go through the PR team. You go to some of the hyperscalers and ask them a sustainability question and you get pointed at the same canned nonsense content that has not been improved, changed or updated in years. Or you get a series of vague promises, or you get their account teams doing that nodding thing where they go: yes, we're hearing this, we'll take this back internally and we'll definitely get back to you. And then years pass and nothing has changed. But if there was revenue associated, by God, they would move quickly. So why aren't they? What is it they don't want to share? And it sounds really weird, but I wouldn't even mind if they came out and said: we consume terawatt-hours of energy and emit billions of tons of carbon. Because then we could make conscious decisions as consumers. At the moment, the biggest problem is not really which hyperscaler's AI is more or less bad than the others'. It's that people want to be able to compare. They want to be able to make decisions like: should I put this on premise?


How do I optimize it? How can I make the most efficient use of it? And at the moment, I work with development teams, engineering teams, finance teams, and the same questions keep coming up. They go: we just don't know. We have no data we can use to frame it. And when this nonsense Gemini report came out, I mean, even Google's own AI thought it was nonsense. For the analysis I did of it, I deliberately used


Google's Gemini to do the analysis, just for fun. I thought I'd troll them with it. And even their own AI said it was a worthless piece of content. It said it was low quality, low fidelity and of no particular use to anyone, but good for marketing. That was basically their own toolset's summary of their own report. The problem is that when you get these metrics, you can't do anything with them, because they selectively picked things, like using market-based emissions instead of location-based emissions.


Why would they do that, when their own carbon reporting tool uses location-based emissions? They deliberately selected a median prompt, which meant they excluded all the heaviest stuff that people are actually doing in the largest... Complete rubbish. And there was no totality either, so you can't even tell the scale of the thing, because they didn't tell you... When someone tells you it's X per prompt, well, I saw that figure being abused so quickly in presentations.


Wilco Burggraaf: 

We agree on the median one. The median was really bad. Yeah.


Mark Butcher: 

Everywhere, engineers literally go: guys, we don't need to worry about the environmental impact of our AI now, because look, it's only 0.01325 whatever.


Wilco Burggraaf: 

I have an addition to that, because I find it really important. I get this question a lot on location-based numbers, which you already mentioned: if you have 3,000 GenAI containers, you're already getting into the kilotons. People are so bad at big numbers, sometimes myself included. They hear a small number, and the small number itself may be fine, but if you total it up over a whole year, it really adds up very quickly.
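Wilco's point about small numbers is easy to check with back-of-the-envelope arithmetic. The figures below are purely illustrative assumptions (neither numbers from the episode nor from Google's report):

```python
# Illustrative only: how a tiny per-prompt figure totals up at scale.
# Both inputs are assumptions, not reported numbers.
grams_per_prompt = 0.03            # assumed median emissions per prompt (g CO2e)
prompts_per_day = 500_000_000      # assumed daily prompt volume at fleet scale

daily_tonnes = grams_per_prompt * prompts_per_day / 1_000_000
annual_tonnes = daily_tonnes * 365

print(f"{daily_tonnes:.0f} t CO2e/day, {annual_tonnes:,.0f} t CO2e/year")
# A figure that looks negligible per prompt lands in the kilotonne range per year.
```

This is exactly why a per-prompt median without a totality is impossible to act on: the "small" number only becomes meaningful once multiplied by volume, which is the figure that was withheld.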


Gaël Duez: 

To unpack a bit of what you said. At the very beginning, you said something super important that I've heard a lot: the methodology, and the people behind the methodology at Google, are pretty good, pretty sound. That is something I've heard a lot, and it created a bit of confusion. Like: they're experts, so the methodology is okay, but what frustrates you is the data they fed through this methodology. Am I getting it right or not?


Mark Butcher: 

So the people who are doing it are really intelligent people who know exactly what they're doing. They're experts in their fields, the technologists are brilliant, they build highly efficient services, they use the right calculation. They just put the wrong data into the calculation. And then they obfuscated the output to such an extent that, and I'm really glad I managed to say that, I'm not gonna say it again, because I never say it right.


Gaël Duez: 

Okay, got it.


Wilco Burggraaf: 

Yes, "opcificate", that is the right word there.


Mark Butcher: 

So basically they played with the number, manipulated it, until it told the narrative they wanted to get out there. They went: how can we make this look good? Well, let's divide X by Y by Z by this. And eventually they get to a number which is so small that, like I said, no one can roll it upwards. No one can get the total picture of this.


Gaël Duez: 

Okay, so what is missing is, first, the total number: overall, we emit this amount of location-based carbon emissions. And second, the way they use the median is completely out of context. So people will tend to say: okay, one prompt equals that amount of grams.


Wilco Burggraaf: 

Yeah. Yes.


Gaël Duez: 

Even if they actually uploaded, I don't know, 20 pages of a previous report and tons of data, et cetera. Are these the two main pitfalls that you see? Because you said you had a lot of things against this, what you called a PR exercise. Are there any other aspects of this study that you object to?


Mark Butcher: 

So, the exclusion of training on its own is so vast, it's unbelievable. Because anyone who consumes AI, and this is a bit like the whole Scope 3 side of things, has to take accountability for the impact of the training in the first place. And no one knows how big that is. No one, because no one's sharing that figure. And the problem I have is that even when you take, say, the thing Sam Altman wrote in his very detailed blog, which just made up a figure plucked from the… PR bin, and you look at the numbers they're using, when you actually add them up against what they claim to have, nothing adds up. Nothing makes sense. It doesn't add up to the investment they claim to have made. If I was an investor, I'd question some of these figures. I'd go, hang on, why do you need a hundred


Wilco Burggraaf: 

Yeah. But Mark, isn't that the issue? I think, even with my 33-per-hour figure, even if someone said it's not completely right, even if you doubled it to be 100% sure, it's still too small a number compared to the investments in the data centers they've built and everything they do.


Mark Butcher: 

And it doesn't align with what we're seeing about the increase in power consumption in all the major territories. If you look at the US, coal consumption has gone up 20% in the last year. So even though they talk this nonsense about investing massively in renewable energy, that's not even scratching the surface of it. In the background, they're deploying gas generators and massively increasing coal consumption. So this claim, and they always say it, "we use 100% renewable energy": no, you don't. You're not even vaguely close to it. You're using maybe 10% and buying a bunch of offsets for the cheapest price you can.


Wilco Burggraaf: 

Yeah, and that also happens in an interesting way. Data centers often work with multiple contractors. One of the arrangements is: hey, your company is a power supplier, we make you responsible for the renewable energy. So even when they say they do it themselves, they outsource it, of course, because they outsource a lot of stuff.


And then what often happens is that the renewable energy at that location is not actually the energy being used. That's one thing. And when we talk about location-based, I often also talk about indirect emissions, because grids need to be stabilized and you need certain power sources for that too. So even location-based is not yet where we hope to eventually end up with the correct numbers. Hopefully we'll get there, but yeah, it is a complex topic. I understand that for some people it's hard to follow at some point.


And maybe that's also why, for marketing and sales, whatever company you are, they try to make it simple. But oversimplification is also dangerous, I think, especially in combination with climate change and climate impact.


Gaël Duez: 

The electricity grid is definitely a complex topic. And just as a reminder for people not familiar with market-based and location-based accounting, and even grid-aware computing and that sort of thing: market-based is based on an accounting trick, to be honest, where you buy offsets. They can be different sorts of offsets, and the quality may differ, but they are not actually related to


Wilco Burggraaf: 

Yeah.


Gaël Duez: 

the electricity you consume at one specific spot for your data centers. Location-based, by contrast, uses the average carbon intensity of the very location where your data center sits to calculate the emissions. So it is absolutely possible for someone doing market-based accounting to have a data center powered in a very coal-heavy region, with almost 100% non-renewable energy, and still claim to be 100% renewable with very low emissions, because they buy electricity produced somewhere else, electricity which is not directly used by them but by someone else, while claiming that their investment produced this clean energy.


And that is the sort of accounting trick it is. To be fair, several people have explained to me that it's actually a good thing, because it forces them to invest massively in renewable energy. Whether this investment would have occurred without the hyperscalers putting this money on the table is, however, highly debatable.
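Gaël's distinction can be sketched in a few lines. Everything here is a made-up illustration: the grid intensity and certificate volume are assumptions, not any provider's real figures:

```python
# Location-based vs market-based accounting, minimal sketch with assumed numbers.
consumption_mwh = 10_000        # assumed annual electricity draw of one data center
local_intensity = 0.8           # assumed local grid: t CO2e per MWh (coal-heavy)
certificates_mwh = 10_000       # renewable certificates bought elsewhere, on paper

# Location-based: emissions of the grid actually serving the facility.
location_based_t = consumption_mwh * local_intensity

# Market-based: certificates cancel consumption on paper, so a coal-powered
# site can report (near) zero while the local grid mix is unchanged.
uncovered_mwh = max(consumption_mwh - certificates_mwh, 0)
market_based_t = uncovered_mwh * local_intensity

print(location_based_t, market_based_t)  # 8000.0 0.0
```

Same facility, same electrons, two very different reported numbers, which is why the choice of method in a report matters so much.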


Wilco Burggraaf: 

I would add to that. When I talk to customers, or potential customers I should say, the first question is: how do you deal with offsets within your group or organization? If they say, well, we buy certain biodiversity credits and that kind of stuff, it already creates an extra barrier for me. It makes it more difficult to convince them, to explain why they should do something themselves, because they often say: we have offsets, so we have no incentive to even start. And if a company says sustainability is less important for them anyway, then the next conversation goes: okay, but our power is 100% green because we have a green power contract. So that's cool.


And then the debate often becomes: okay, do you take CO2 into account too, or do you want to focus mainly on kilowatt-hours? That is the conversation I keep having, almost every week. So for me, it doesn't make it easier to have a good conversation about green AI or sustainable AI work.


Mark Butcher: 

Whilst there's a lot of obsession with kilowatt-hours and Scope 2, which covers what you were talking about there, the power-related emissions, it's really important to note the huge amount from Scope 3: the construction, the manufacturing of these GPUs, TPUs, the technology itself. And don't forget the wider impacts of things like water consumption, which you touched on earlier, Wilco. No one is really aggressively looking at that, and some of the data centers in which this stuff is deployed are very water hungry, particularly the ones in the US.


Wilco Burggraaf: 

Yeah. Yeah. Yes.


Mark Butcher: 

European DCs actually tend to be better in their water usage. And I know data center operators get quite frustrated when the water conversation gets thrown around, because some of them are highly efficient, but a lot of them really aren't. And we are now seeing issues, and this is where one of the challenges I see is the societal impact again: the big hyperscalers and data center providers are putting themselves up against society now, because they're starting to force these things through.


Because if you look at what's happened in the UK over the last year alone, since our new government came in, it has designated data centers as critical national infrastructure. The reason that's really important is that it means the planning laws no longer apply. Local people can no longer object to a data center being put in their backyard, in their back garden; it can be pushed through from central government really quickly. And it means data centers now get priority over access to things like power as well.


So with power and water, we're now seeing limitations in areas where one of these giant multi-megawatt facilities goes in: it means you cannot build a factory next door because there's no power. You can't build a new housing estate because there's no power. We're going to see a future where there are water shortages in some areas simply because it's all being used by the data centers. And if there's an issue locally, say a gigantic power problem on the local grid, the data center gets priority. The locals don't.


Wilco Burggraaf: 

Yeah.


Gaël Duez:

Mark, just to play devil's advocate here: we hear a lot, especially in Europe, questions about digital sovereignty and the fact that we are completely dependent on the US providers, to an extent which is almost unthinkable. It's above 90% for sure. But is it something...


Wilco Burggraaf: With GenAI it's almost 100%. I'm not gonna lie.


Gaël Duez: 

that makes sense for a government, to say: we will build our own data centers and our own infrastructure, to make sure we are a bit less dependent on a foreign country which used to be a very close ally and has been a bit less reliable these last years?


Mark Butcher: 

If we pick apart how people use enterprise IT, going even wider than the whole AI thing: the cloud providers have done such a good job of convincing us to use all this shiny new technology. Whether it's sold as containers or serverless or functions or platform, it's nonsense. Look at how most people consume their cloud services: they're still deploying virtual machines. For 80, 90% it's virtual machines with some storage and some networking connecting them.


Wilco Burggraaf: 

Yeah.


Mark Butcher: 

All you're doing by going to the public cloud is finding the most expensive way of consuming IT. There is no more expensive way of consuming it. Actually, every single company using the public cloud could reduce their IT spend by about 40% immediately, just by behaving like a service provider themselves. It's really weird: the public cloud providers came along and said, you know, we'll give you the ability to scale up or scale down.


Wilco Burggraaf: 

Yeah. 100 % agree.


Mark Butcher: 

And everyone went: oh, awesome. And they threw all their stuff onto it without even thinking they could have done the same thing themselves. Because the whole idea they sell you is limitless scale. But look at the average enterprise and ask: how variable is your usage? Oh yeah, well, we grow on average by 15% year on year. Okay, so do you need to massively spike and peak, grow and shrink? Well, one of our applications does, but the other 900 don't. Everything else is predictable usage.


Wilco Burggraaf: 

Yeah.


Mark Butcher:

So you could actually put all these workloads into better locations. And we always frame it as hyperscaler versus on-premise. It's not. We used to have a vast economy of managed service providers who could provide infrastructure as a service, and we just chose to stop buying from them. It's really weird: we will not support our own companies. In the UK, we always ask, why don't we have any unicorns anymore? Why don't we have any big tech companies coming through? Well, because we won't spend any money with them.


How do we expect them to grow if we won't even spend money with them ourselves? I've seen this recently with the UK government, which has just invested in a bunch of so-called AI growth zones, which is ludicrously short-sighted and stupid, because in the background they've also signed giant long-term commitments with Amazon, Microsoft and others. So you're throwing billions of taxpayers' money into a black hole to create services that you won't even buy yourself, and you're 40% of the economy.


Wilco Burggraaf:

Yeah.


Mark Butcher:

How do you expect to grow tech companies if we're not willing to consume from them ourselves? And the whole sovereignty thing, again, I mean, there was that brilliant court case recently in France where Microsoft were forced to admit that none of their services were actually sovereign. It's finally been unpicked, but everyone already knew it. It's not like it was a secret.


Wilco Burggraaf:

Yeah.


Gaël Duez:

But my question was more related to the infrastructure, and of course then comes the question of who operates the infrastructure and what kind of services will run on it. I was wondering whether the UK is under-equipped in data centers, or whether it was more about "we want to be a champion of AI"…


Wilco Burggraaf:

Yeah.


Mark Butcher:

That was it.

Until a year or so ago, the UK had echoingly empty data center space, but we were still building it because the venture capitalists wanted to gouge a bit more money. But suddenly, with the massive scale of AI and people throwing billions, effectively trillions now, into it, all the data centers are very, very full and it is increasingly hard to get capacity.


Gaël Duez:

Okay, got it.


Wilco Burggraaf:

Maybe that's the UK; in the Netherlands I've also heard other stories, that we have a lot of data centers that need to be re-equipped. I'm not an expert in this; I recently talked about it with Max Schulze, who has also been on the podcast in the past. The thing is that there are now much higher, how do you say it, power requirements per server rack.


And they kind of need to rebuild, or how do you say it, convert them into more modern data centers, and that costs a lot of money, and for the normal data centers that investment is often not there. So what now often happens is that a big part of the data center space in the Netherlands that we could use sits empty, and the parts that do get filled are limited by the maximum power capacity of that data center.


Gaël Duez:

Refactor.


Wilco Burggraaf:

And this is apparently happening on a bigger scale than I expected, because I also thought: apparently we're building all these new data centers because the ones we have can't be expanded at all. But sadly, that is apparently not the case.


Gaël Duez:

Obsolete.


Mark Butcher:

Certainly, the grid's ability to supply power in the locations where they want to put them is the problem. A lot of the time it also comes down to how connectivity to the grid works in the first place. Particularly in the UK, it's a long queue and you can't jump it. And that's what really frustrates the data center builders: they want to put these megawatt facilities everywhere, but they cannot jump the queue to get a connection.


Wilco Burggraaf:

Yeah.


Mark Butcher:

And so they're waiting way too long. These queues can be 10, 20, 30 years long if you're at the wrong point in them. So it's not necessarily a lack of data center availability; there isn't enough power to drive what's going to go into the data center. And Wilco's right: the power per rack has gone exponentially upwards. If I go back even just five years, when I used to work selling data center services, the average power per rack was about four kilowatts.


Wilco Burggraaf:

Yeah.


Mark Butcher:

Then it went up to about seven kilowatts. And now, just take one requirement I saw last week: a customer came and said, we're going to be deploying 40 racks, each at 50 kilowatts.


Wilco Burggraaf:

Yeah.


Mark Butcher:

And that in itself, going back a few years, would have been a giant data center in its own right. Now it's just a row of racks. But it takes up an entire facility's power consumption.
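For scale, Mark's figures are easy to total; the comparison below just multiplies the numbers he quotes:

```python
# Totalling the rack-density figures Mark mentions in the conversation.
racks, kw_per_rack = 40, 50
total_kw = racks * kw_per_rack                 # one row of modern racks

# The same load expressed at the ~4 kW/rack density of five years ago:
old_density_kw = 4
equivalent_old_racks = total_kw / old_density_kw

print(f"{total_kw} kW total, i.e. {equivalent_old_racks:.0f} racks at 4 kW")
```

One customer row at 2 MW is the footprint of a 500-rack hall from the 4 kW era, which is the sense in which a single deployment can now consume an entire facility's power budget.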


Wilco Burggraaf:

Yes. I think it plays a double role. So, three things. By the way, going back to sovereignty: I think that for a long time a lot of companies, because of security and wanting to be in control, didn't go to the public cloud. Then at some point the IT budgets at companies, and I mean big companies, rose above five, seven, eight percent, and they started thinking: maybe we can outsource some of this, because it could be cheaper.


And I think that was, at some point, a mistake. And also, yeah...


Gaël Duez:

And also, Wilco, if I can interrupt you here, this was really an OPEX-versus-CAPEX approach. It was mostly a financial perspective: we want to reduce our CAPEX, so let's move it to OPEX. Sometimes it was a smart deal, sometimes it was not, but it always looked good on the accounting sheet at the end of the year.


Wilco Burggraaf:

Yeah.


Gaël Duez:

We reduced CAPEX.


Mark Butcher:

And that was actually one of the biggest changes: the international accounting rules. Most big companies used to lease huge amounts of IT equipment, which enabled them to take it off the balance sheet. Then, going back about eight or nine years, maybe a bit longer, the international accounting rules changed, I forget which standard it was, IFRS 13 or 17, something like that, so that leases were no longer off balance sheet. So suddenly all this debt came piling back onto the balance sheet.


And the accountants said: we don't want that. So things had to be bought as a service, and it had to be a true service, with scalability. That was when the public cloud suddenly looked very attractive, because they could take all of this back off the balance sheet.


Wilco Burggraaf:

Yeah. I believe we cannot fix this, especially for GenAI, without some kind of government involvement, because companies apparently are okay now with how they've set up certain contracts and how they operate. Max Schulze gave a workshop where we had to think from the Dutch citizen's perspective.


If you look at just the GenAI situation: say you have 100 euros. I hope I'm getting the numbers right, but as an example, 25 euros go to the USA for a Claude or an OpenAI, 25 euros go to Nvidia and Microsoft in the data centers, then 25 euros or maybe more go to China, Taiwan and India for building the stuff you're running. And then maybe the last 25 euros are eventually spent in Europe, but potentially routed through Luxembourg, without much tax being paid along the way. His point was to make us think: okay, as citizens, what do we want government to change to maybe fix this issue? And I came to the conclusion that we almost cannot break this paradox without certain government rules, European rules, that kind of stuff.


I think we need that even more than an extension of the, how do you say it, CSRD and that kind of stuff.


Mark Butcher:

And the problem I see with that is we all know the power of the tech lobbyists. Take the EU, who are publicly accused all the time by the tech sector of being draconian, of holding back entrepreneurship and investment because of their ridiculous demands for open reporting. How dare they want companies to actually report the truth? And then what happens? The legislation comes out, and at first it's quite hard in what it demands to be reported.


But then, gradually, over the course of it going into actual production and getting put out there, it gets diluted again and again and again, as all the big players in the background, and it isn't just tech, it's the other big enterprises too, put pressure on: you know, it'll be too hard, it'll hold us back, we can't possibly have this. And then you see the president coming over and pushing the same narrative. It's all designed to let them do whatever they want and get away with whatever they want, whenever they want, with no transparency and no accountability.


Gaël Duez:

So maybe on that specific point, and of course we could go on and on from a citizen's perspective, let's go back to a professional perspective, because this change in legislation and in the behavior of our elected officials doesn't seem to be happening anytime soon. So what can we do, at our own level, from an operational but also maybe a strategic perspective? Okay, we got it: Brussels should do things way better. We should have better laws, better regulation, pushing open source solutions, pushing for solutions based in the UK or in the EU. But right now, what could we do, given what we've learned or didn't learn from these reports, to reduce the environmental footprint of GenAI?


Wilco Burggraaf:

I can start with that, I think. I believe that currently, for many different reasons, there's a lot of overhead in the use of GenAI, including in all the proof of concepts going on without, often, a return on investment. But if you are knowledgeable in IT, there are a lot of things you can do to make it more lightweight, to use it only when you really have to, and that also reduces the cost. This connects to what Mark said earlier: a lot of companies do things out of FOMO, but they're now starting to realize they don't have an unlimited IT budget. What you currently see at a lot of companies in the Netherlands and Europe is that the IT budgets...


There's a lot of pressure on them. The CFO says: this is the budget, and you have to be more efficient. And we all know that if you do less, you also reduce your footprint, though it's not always that simple. I always use this generic example, which could equally apply to how much you actually use your GenAI: say you have a thousand virtual machines with CPU utilization at 30%. Then you do all these things to use fewer virtual machines, and you eventually end up with 400. This is just an example, a calculation I've also shown in a lot of presentations: the cost went down, because utilization went up to 75%, but if you really do the workload estimates, the footprint is exactly the same. So the business case can be cost reduction, but you also need knowledgeable people who understand how to calculate all this, which I think is not as difficult as some companies make it out to be. You can make it pretty simple. I always say: even if, instead of CO2 or kilowatt-hours, you just make an estimate across all the virtual machines or containers running, even the SaaS solutions, you can already do a far better estimate than we often do now.
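Wilco's consolidation example can be written out explicitly. The VM counts and utilization percentages come from his example; the cost interpretation is an added assumption for illustration:

```python
# Wilco's example: 1,000 VMs at 30% CPU consolidated to 400 VMs at 75%.
vms_before, util_before_pct = 1_000, 30
vms_after,  util_after_pct  = 400, 75

# Useful work delivered (VM count x utilization) is unchanged:
work_before = vms_before * util_before_pct   # 30,000 "utilization units"
work_after  = vms_after * util_after_pct     # also 30,000
assert work_before == work_after

# Cost scales with the number of VMs you pay for, so the bill drops:
saving_pct = 100 * (vms_before - vms_after) // vms_before

# ...but since the same work now runs on busier machines, a workload-based
# footprint estimate stays roughly flat, which is exactly Wilco's point.
print(f"work unchanged ({work_before}), cost down {saving_pct}%")
```

The design point: counting VMs and their utilization is a crude proxy, but because the work term is conserved under consolidation, it already separates "we spent less" from "we emitted less", which a cost dashboard alone cannot do.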


So yeah, I think there are a lot of possibilities. That's why I sometimes say we need more knowledge on this, on strategy too, I know, but especially on the operational side: we need to practice more, experiment more, and share that knowledge more. I often think that we as a community are not doing this enough. I think we should do more in this domain.


Gaël Duez:

Just to rephrase what you've just said, because I know you as a very strong advocate of energy consumption being the default metric that people should follow, even before trans...


Wilco Burggraaf:

I also love embodied carbon.


Gaël Duez:

No, no, I agree. I'm not trying to water down the richness of your thoughts; that's not my intent. It's just that, as a first rule of thumb, you would say: go for the energy. And right now you're saying: even before the energy, start just by counting virtual machines. Am I right or wrong? And time.


Mark Butcher:

Okay.


Wilco Burggraaf:

Yeah, and time. Like I said, CPU accounting is very complex, even if you have a lot of containers or virtual machines or whatever. But just with GenAI, once you have an ecosystem with MCP and RAG and everything set up, look at what is now considered acceptable because there's this GenAI label on it. Sometimes a simple process that used to be done with manual programming, in minutes, can now take up to hours, and people go: yeah, but it's cool, right? Because it's GenAI. Bloating the systems with all this extra overhead. There is too much acceptance of that now, just because someone is busy on a very important topic. I think we should be more honest, and as a Dutch person I'm always very direct about this kind of thing. So yeah, cool project, but that doesn't mean we should forget everything we learned in the last 60 years about how to do actually good IT.


Gaël Duez:

Got it. And Mark, what would you say to a responsible technologist not having a direct line to the Prime Minister?


Mark Butcher:

So many things. A lot of it is that we need to stop expecting the individuals at the bottom to solve the problem of making this thing sustainable, efficient, whatever. What we also need are numbers that make sense to the business. The greatest success I've had in this area has been in tying environmental metrics to business metrics, aligning them with something the business actually cares about. And there is no one number to rule them all either, at all.


Wilco Burggraaf:

Yeah.


Mark Butcher:

It's always contextual, depending on the persona you're talking to. So what you need to do is get the data integrated into all the departments that span the whole of IT, and let them get the numbers in the format, structure and approach they want to use to make their decisions. You have to understand how they make decisions in their space, whatever it is they're doing. But it needs to come from a central repository where you know the methodology is sound, the calculation is sound, and the data is sound, so you can stand behind it.


Technologists in general will do nothing with bad data. They're very good at smiling at you and saying that's really good and really wise, and if they think it's rubbish, they will ignore it. But this also comes back to my point that we need to stop letting our leaders, the leaders within our own organizations, get away with not taking any accountability at all. Because the one thing I find is that we always talk about cost saving and efficiencies, yet enterprise IT in most organizations is woefully inefficient.


Wilco Burggraaf:

Yeah.


Mark Butcher:

Vast amounts of waste. You could walk into any big enterprise tomorrow morning and I could find 30, 40, 50% of waste straight away, in a heartbeat. But they claim to be cost focused. They're not. The reality is people don't really care about saving their organization money at all; they pretend they do. So at the moment we give people metrics they don't care about. What I find is that environmental metrics are something people actually do care a little bit about, or at least enough people care about. So give them the data in a format they trust, in a way that's granular, in a way they can manipulate to align with their needs, whether it's a footprint per application, per service, or per business unit, and start looking at not just the totality of your consumption but its efficiency. Because again, most CTOs I talk to, if you ask them how efficiently they are consuming their digital services, don't actually have a clue. It's a great unspoken truth: go into most cloud deployments and they are woefully under-utilised, with massive amounts of waste, and the same thing applies on premise.


The same thing applies to every decision about what they buy: we're not making the best use of what we buy. But if we actually held our leaders accountable, if, for example, at leadership level they had a KPI for efficiency of consumption, we could make a massive dent in consumption overnight.


The biggest thing I talk about when it comes to being more sustainable in IT is that we have to shift away from just reporting the thing. People then move to optimizing the thing, which I always describe as trying to stop the tide coming in: you're sweeping away the water every day and you think you've won, but then the next wave hits you. So we have to shift from that to avoidance. And avoidance is: don't build your house right on the seafront, build it at the top of the hill. Build it in the right place, at the right time, in the right way. So change your strategy, change your governance, change your best practices.


And the biggest success overall that I've found, if you want to drive progress in a large enterprise, so if you're at the bottom and you want your business to care about sustainability, is to tie it to risk. If you can get to the risk people, and they understand the implications, whether it's the future of the business, whether it's climate risk, whatever it is, they have so much ability to drive projects, control things and make things happen. Because you have to change your alleged best practices and standards, you have to address how you develop things in the first place. And if you weave all these metrics and data points into the risk process right at the very start, you'll find that a lot of these AI projects won't get funding in the first place, because they don't actually stand up to any form of interrogation. The most sustainable service in IT is a service that you don't build or run.


Gaël Duez:

Yeah, it makes a lot of sense.


Wilco Burggraaf:

Yeah, yeah, maybe also a little bit on that. I know that I talk a lot about low-level coding, software and architecture. Something that I did not mention, I think, in the podcast is that, besides being a solution architect, I have also been the CTO of a small company for three and a half years, where we delivered portfolio management systems to pretty big boardrooms.


So that also really taught me a lot about how they think in…

Written by Jill TELLIER
