Current stats
Workloads in data centers now account for approximately 180 million tons of CO2 emissions, roughly 0.5% of global CO2 emissions from fuel combustion. These emissions mostly originate from electricity consumption, though this figure excludes emissions from backup power generation, refrigerant gases, and the embedded carbon of the equipment itself. Yet although AI has driven an increase in electricity consumption, this must be put into context: other sectors are growing too. Electricity consumption in the transport sector, for example, has increased by 9%.
Tips to make AI models more energy efficient
There are several opportunities to contribute to AI efficiency. Algorithm optimization draws on several techniques: compressed models, pruning, quantization, and knowledge distillation. Compressed models are used to develop computation-efficient AI: because of their smaller size, they use fewer resources in the cloud without compromising on accuracy. The energy reduction is primarily found in the inference phase, but can also occur in the training phase. Pruning removes neurons or weights that make a negligible contribution to model accuracy; the decreased network size reduces memory storage and computational power. Quantization allows lower-precision formats to be used, speeding up model performance, e.g. 32-bit floating-point (FP32) parameters are lowered to 16-, 8-, or even 1-bit precision. A final technique is knowledge distillation, sometimes called the 'teacher-student' approach, which involves transferring learned patterns: the teacher model extracts its trained knowledge and passes it to the student model, resulting in a more lightweight and efficient network.
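The pruning and quantization steps above can be sketched in a few lines of plain Python. This is a conceptual sketch, not a production recipe: the toy weight values, the 0.05 threshold, and the symmetric 8-bit scheme are illustrative assumptions, though real frameworks apply the same ideas per tensor.

```python
# Illustrative sketch of magnitude pruning and 8-bit quantization
# on a toy weight list (all values are made up for the demo).

def prune(weights, threshold=0.05):
    """Zero out weights whose magnitude contributes negligibly."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantize_int8(weights):
    """Map float weights onto 256 integer levels (symmetric, per-tensor)."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]   # values fit in an int8
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.92, -0.03, 0.51, 0.001, -0.77]
sparse = prune(weights)            # [0.92, 0.0, 0.51, 0.0, -0.77]
q, scale = quantize_int8(sparse)
approx = dequantize(q, scale)      # close to sparse, at a quarter of the bits
```

In practice the two techniques compose well, as the article notes: pruning shrinks the network, then quantization shrinks the storage and compute cost of what remains.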
An additional opportunity is using an efficient ML model architecture: narrow AI. The narrower the task, the smaller the model and the fewer its parameters, which means less energy is required. This is not quite the same as 'frugal AI', where certain functions are cut out, possibly at the cost of reduced accuracy. A final option is switching to lower-level programming languages with smaller memory requirements, such as C or Assembly, instead of Python or Java.
Hardware optimization techniques
Choosing more computationally efficient hardware can also contribute to energy savings; for example, some GPUs have substantially higher efficiency, in terms of floating-point operations per second per watt of power usage, than CPUs. Specialized hardware accelerators such as tensor processing units (TPUs) are tailored specifically for machine learning tasks, enabling models to be customized for the hardware. With careful planning, the training time of algorithms can also be reduced by distributing computation among several processing cores. This procedure, known as parallelization, is often easier to control while the model is being trained. In addition, edge computing can play a part: performing computation where the data is collected or used. This reduces the time needed to transmit data from the device to a data center or the cloud, and goes some way to help preserve data privacy.
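The data-parallel idea behind parallelization can be sketched minimally: split the batch into shards, compute each shard's gradient independently (on its own core or GPU in a real setup), then average before the update. The toy model y = w·x with synthetic y = 2x targets below is an assumption for the demo.

```python
# Data-parallel sketch: each shard's gradient is computed independently,
# then averaged, as in synchronous SGD. All values are toy/illustrative.

def shard(data, n_workers):
    """Split a batch into roughly equal shards, one per worker."""
    k = len(data) // n_workers
    return [data[i * k:(i + 1) * k] for i in range(n_workers)]

def grad_shard(w, xs):
    """Gradient of mean squared error for the toy model y = w * x,
    with targets fixed at y = 2x (an assumption for the demo)."""
    return sum(2 * (w * x - 2 * x) * x for x in xs) / len(xs)

data = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
w = 0.0
for _ in range(50):
    grads = [grad_shard(w, s) for s in shard(data, n_workers=4)]
    w -= 0.05 * sum(grads) / len(grads)   # averaged update across workers
# w converges toward 2.0, matching the synthetic y = 2x data
```

The energy argument in the article is about wall-clock time and idle capacity rather than total arithmetic: the same work finishes sooner on cores that would otherwise sit idle.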
Data centers themselves can also be optimized, and the first port of call is the energy source used to power them. Renewable energy sources help reduce carbon dioxide emissions, and this includes the backup power sources (generators) too. Modular infrastructures and sustainable materials can also help limit the carbon footprint of a data center, and algorithms are being developed that optimize resource allocation according to the tasks submitted to a data center. A circular approach is beginning to be adopted, for example using the waste heat extracted during cooling to warm nearby facilities.
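One family of such resource-allocation algorithms can be sketched as a greedy least-loaded scheduler: each submitted task goes to the currently least-loaded server, so load stays balanced rather than some machines running hot while others idle. The task costs and server count below are made-up values for illustration.

```python
import heapq

# Toy sketch of load-aware resource allocation in a data center:
# tasks are greedily placed on the least-loaded server.
# Task costs and the server count are illustrative assumptions.

def allocate(task_costs, n_servers):
    """Return per-server load after greedy least-loaded placement."""
    heap = [(0.0, i) for i in range(n_servers)]   # (load, server id)
    heapq.heapify(heap)
    loads = [0.0] * n_servers
    for cost in sorted(task_costs, reverse=True):  # place big tasks first
        load, i = heapq.heappop(heap)
        loads[i] = load + cost
        heapq.heappush(heap, (loads[i], i))
    return loads

loads = allocate([5, 3, 8, 2, 7, 4], n_servers=3)
# total work of 29 spread across 3 servers, max load close to 29/3
```

Production schedulers weigh many more signals (priority, locality, carbon intensity of the moment), but the balancing objective is the same.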
AI for sustainability & Sustainable AI
Despite heavy resource and energy consumption, AI enables breakthroughs across the climate and energy sectors and accelerates climate research at scale, playing a role in detecting and limiting emissions. At the Earth Observatory of Singapore, Iuna explains how AI is applied to forecast atmospheric changes and identify trends in climate variability. Machine learning is used to analyze patterns in vegetation health, deforestation rates, urban expansion, and water availability. Iuna also points out that it is not all GenAI: statistical models are still being used, and in current climate science hybrid AI is the go-to tool, combining physics-informed models with big data and tapping the strength of fundamental science. Interestingly, GenAI is being used to detect misinformation in climate science communication.
Tech community, users & human agency
The current ‘Red AI’ trend is another aspect to explore. Higher accuracy is usually preferred over the energy efficiency of a model, but if the accuracy requirement were relaxed (e.g. from 95% to 75%), the efficiency gains could be quite significant. The same goes for all users of AI / GenAI: we need to be more discerning about whether AI is actually needed for a task. Perhaps a simple search engine can do the job when very detailed information is not needed. We also need to be aware of AI forcing, and find ways of opting out of imposed AI. We must remember that we humans still have agency.
📧 You can also send us an email at [email protected] to share your feedback and suggest future guests or topics.
Gaël Duez: Welcome Iuna, thanks a lot for joining Green IO today.
Iuna Tsyrulneva: Thank you so much Gaël, such a wonderful introduction. Happy to be here.
Gaël Duez: I'm very happy to have you as well.
And I really thank Tebo for putting us in touch. Bouncing back on what I said in the introduction. I've got a very basic question. How did you start getting involved with AI? Was it from a practical angle during your PhD, more as a concerned citizen when you joined the Earth Observatory? Or was it something else?
Iuna Tsyrulneva: Thank you so much for this question, Gaël. I guess my career path was not so linear; I jumped from one topic to another. Basically, I was trained as a chemist during my graduate studies, and then I obtained a PhD in Materials Science. There I did not use AI for my research at all: I was working on the development of a biosensor for the detection of fatigue.
Completely different subject, right?
But then after that, I thought that I wanted to do something more relatable to society, to the community. I really wanted to study the impact of technologies on society, for the public good. So I moved to studying responsible AI, and I explored governmental regulation of responsible AI in Singapore. We provided consultancy services on what the landscape of responsible AI in Singapore is, and sustainable AI was one of the topics that we covered there as well.
Of course, there we mostly looked at how we can green the algorithms used for AI models, but we didn't cover much of how AI can be used for sustainability or the sustainable development goals, and this is something that we're going to discuss today as well. Later, when I joined the Earth Observatory of Singapore at Nanyang Technological University, I focused on exploring how AI can be used to achieve environmental goals and how we can use it to assess the impacts of climate change on society.
Gaël Duez: Yeah, so two angles for AI and sustainability, and I think we will discuss the semantics of it. Now, you said something super interesting, and I know that the Singaporean government, and people based in Singapore, tend to be very hands-on and very practical when it comes to AI. You mentioned how to make AI models more energy efficient, and I think I've got a few dozen questions about it, but my first one would be: could you provide AI practitioners, in a broad definition, with practical tips to make AI models more energy efficient?
Iuna Tsyrulneva: Absolutely. And it's going to take quite a long time, so please relax and lounge back in your chairs, because it'll take quite some time. When we talk about sustainable AI, let's cover four main things that contribute to AI efficiency.
The first of them is algorithm optimization. A study from 2018 revealed that the computational needs to train large machine learning models have doubled every 3.4 months since 2012. This deviates quite a bit from Moore's Law, which would put the doubling at close to every two years. So you can see how big the energy demand for AI is right now.
So one of the most productive strategies in green algorithm development is the design of optimization techniques that reduce the computational resources required and thus minimize energy consumption. We're gonna cover the main techniques that are used for algorithm optimization. The first of them is actually developing computation-efficient AI that uses compressed models, which are smaller but still accurate.
It can be done via pruning, quantization, and knowledge distillation. These techniques can be applied in resource-limited systems, and by this they improve the models' accessibility from different devices.
First of all, what is pruning? Pruning includes the removal of neurons or weights that have a negligible contribution to the model accuracy. It reduces memory storage and computational power thanks to the decreased network size.
Second is quantization. Quantization speeds up AI model performance by converting 32-bit floating-point parameters into a numerical precision of 16, 8, or even 1 bit. The combination of these two techniques, pruning and quantization, in models with a similar network architecture leads to a more efficient AI model.
And finally, the third one is the knowledge distillation technique, which is called teacher-student. It involves a teacher-student transfer of learned patterns. Both models have a similar architecture, which facilitates their compatibility and flexibility, and the teacher model takes an extract of its trained knowledge and passes it to the student model.
This makes the student network more lightweight and more efficient in operation.
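The teacher-student transfer described here is typically trained with a distillation loss, where the student learns to match the teacher's softened output distribution. A minimal pure-Python sketch; the logits and the temperature T are toy values chosen purely for illustration:

```python
import math

# Minimal sketch of the knowledge-distillation ('teacher-student') loss:
# the student is trained to match the teacher's softened probabilities.
# Logits and the temperature are illustrative toy values.

def softmax(logits, T=1.0):
    """Softened probabilities; higher T exposes more of the teacher's knowledge."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """Cross-entropy between softened teacher and student distributions."""
    p = softmax(teacher_logits, T)   # teacher targets
    q = softmax(student_logits, T)   # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]   # large model's output logits (toy)
student = [3.5, 1.2, 0.1]   # small model's output logits (toy)
loss = distillation_loss(teacher, student)
# the loss is minimized when the student's distribution matches the teacher's
```

In practice this term is usually combined with an ordinary supervised loss on the true labels, but the matching of softened distributions is the core of the technique.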
Gaël Duez: Among all of those you listed, are there any that you would qualify as quick wins, and are there some that you've experienced being harder? Like, is quantization, for instance, easier to implement than pruning? Or does it really just depend?
Iuna Tsyrulneva: So I've been talking to a couple of corporate engineers, and I've been asking them: is it really difficult to implement sustainable AI in a corporate setting? And they told me that sometimes it can be very difficult to change the entire code and software that the company already uses, but where they can actually apply a sustainable AI approach is when a big company acquires startups, and inherits their code, right?
So, the engineers are usually required to revise the entire code, and this is where they can make the changes that eventually lead to more energy-efficient AI models. The changes can be everything that we mentioned, like pruning, quantization, and knowledge distillation. However, I feel that knowledge distillation requires a little bit more work, a little bit more revising of the code, whereas pruning and quantization can be implemented quite easily.
Because eventually it also leads to more computational efficiency and fewer resources used in the cloud.
Gaël Duez: Okay, that's interesting: when you've already got a large code base, and it's the same, I would say, with every code base, whether it's an AI-driven product or not, you've still got some extra work to do to implement these techniques.
There is another clarification question I would like to ask you: the energy gains, are they mostly during the training phase or mostly during the so-called inference phase?
Iuna Tsyrulneva: There was a study; you know, as a scientist, I speak from the position of studies and research that have been done.
Researchers from Google and from the University of California (Berkeley) have shown that the carbon footprint of large language models can be reduced by 100 to 1000 times with the appropriate algorithms, combined with customized hardware and energy-efficient cloud data centers. And it can be done without affecting the quality of the AI models themselves.
Gaël Duez: Is it the energy consumed during the training phase, or is it the energy consumed during the inference phase that is reduced by these techniques?
Iuna Tsyrulneva: I believe it's a combination of both. The majority of the energy is being used, of course, in the inference stage, when a lot of users use the models. However, certain things, for example training the entire model in the cloud, or using various kinds of hardware, GPUs compared to TPUs or CPUs, right?
So it all has an effect on the energy consumption during the training stage.
Gaël Duez: Okay, got it. Maybe my last question, and this one is a bit more specific because it doesn't come from me directly: Wilco Burggraaf, who is a respected voice in the green IT community and a long-time fan of the show, mentioned an interesting fact in a recent discussion. Generative AI is slow. And that is super true: we're talking about less than 10,000 tokens per second, and we keep adding ways to, you know, add things, like agentic workflows, MCP plugins, et cetera, et cetera.
But do you see some synergies between what you listed as one of the main contributing factors to make models more sustainable and the design pattern in AI software engineering, to prevent bloat?
Iuna Tsyrulneva: So there is actually another idea for how to make AI more energy efficient, and this is just using an efficient ML model architecture, so-called 'narrow AI'. We do not really need to use foundational LLMs for every kind of task.
The narrower the task is, the smaller the model is, and the fewer parameters it requires to be trained on, which ultimately leads to lower energy consumption.
Gaël Duez: Fun fact, because there was another discussion about the wording with Maxim Fazio, the co-organizer of Green IO London.
For you, is this term “Narrow AI” similar to the term “Frugal AI”? Because we were discussing that the word is starting to pop up here and there. Is it a word that you use? Is it the same definition? “Narrow AI”, “Frugal AI”, “whatever AI”?
Iuna Tsyrulneva: I believe, literally, this is the first time I'm hearing about “Frugal AI”.
But somehow, I think I would relate “Frugal AI” to something like energy-efficient AI. Because it means that sometimes it cuts certain functions, sometimes it cuts on accuracy, but what it prioritizes is computational efficiency. It doesn't go, like, very luxurious.
It cannot serve many purposes, but it still does its job, and in a very efficient manner.
Gaël Duez: Thanks, Iuna, for answering all these questions. If I understood you well, the first big topic is all about algorithm optimization: quantization, pruning, et cetera, “Narrow AI”. Is there more, or is it only about algorithm optimization?
Iuna Tsyrulneva: Let me add more on algorithm optimization, because this is actually not enough. One of the approaches currently used is using lower-level programming languages, because they have smaller memory requirements and thus are more energy efficient: for example, using C and Assembly versus Python and Java.
I understand this is quite a big requirement, and the entire software development has to be planned with these requirements in mind from the start. This is something that can have a huge impact on the energy demand of an AI model.
Gaël Duez: And is it something that you've seen? Is it the sort of best practice that people write about, saying, okay, you know, we should do that, but it's never done?
Or did you see, with all the interactions you've had in the Singaporean ecosystem, companies starting to use C or Assembly, or rewriting pieces of code to tremendously increase efficiency? Or is it just, you know, a wish list?
Iuna Tsyrulneva: There have been some comparative studies that highlight the energy efficiency of lower-level programming languages, but from my experience, I've never heard of anything like that being implemented in Singapore or in any other country.
This is still in the pipeline.
Gaël Duez: Yeah. Okay, thanks a lot.
Because of the cost. I mean, you know, I used to be a CTO, and whenever someone told me, you need to redevelop everything in a new language, I was like, are you kidding? Usually you only do it when you need to make a major improvement to your platform, or simply restart an entire product or set of features.
A change in the needs, or in the way you want to implement them, might justify switching the code. But switching the code for the sake of efficiency, even regular performance, usually doesn't cover the cost of retraining your workforce, et cetera.
Iuna Tsyrulneva: Yeah. This sounds crazy.
Yes, I agree. And that's why I believe that everything should be done right from the very beginning. The sustainability principles should be implemented, discussed, and spoken about at the very beginning of the software development. They should be put up front, as the fundamental basis of the software.
Gaël Duez: That's sound advice.
Thanks a lot for this: starting with sustainability in mind from the start. You mentioned algorithmic optimization. Is there more to add on algorithm optimization, or do you also want to list other things that enable more energy-efficient, or actually more sustainable, AI models?
Iuna Tsyrulneva: There are two more points that I wanted to add.
So first is actually promoting open-source resources, because tapping into reusable open-source components can remove the need to develop AI models from scratch every time, making the continuous development process more energy efficient.
And this is where Singapore walks the talk, because the Singapore government encourages using public, open-access models for the development of other applications. We even have a database where everybody can get access and use it for their own purposes. And finally, the last point that I also wanted to highlight is so-called “Red AI”.
Red AI is where higher accuracy is preferred over the energy efficiency of the model. An analysis of 60 papers presented at the most prestigious computer science conferences showed that up to 90% of the works prioritize accuracy over efficiency. And if you think about this: do we really need, in the majority of cases, 95% accuracy at triple the time, when a result at 75% accuracy is already much higher than the accuracy of a result provided by humans?
So we also need to understand what kind of tasks we're doing and whether we need to use like high precision models. So this is all that I want to cover about algorithm optimization.
Gaël Duez: An entire new field, or not an entire field, but a way to re-implement things that were already discussed previously in software engineering, within this very specific AI loop.
But I guess it's not all about algorithm optimization. Are there any other tips that you'd like to share with AI practitioners to make AI more sustainable?
Iuna Tsyrulneva: So, we have spoken about algorithm optimization, and the second point would be hardware optimization. Choosing more computationally efficient hardware can also contribute to energy savings. For example, some GPUs have substantially higher efficiency, in terms of floating-point operations per second per watt of power usage, compared to CPUs.
Other specialized hardware accelerators are TPUs, Tensor Processing Units, tailored specifically for machine learning tasks; they give the ability to customize machine learning models for the specific hardware they run on. Another important approach to hardware optimization is parallelization.
It's a way to reduce the training time of algorithms by distributing computation amongst several processing cores. Of course, it also needs to be done in a very thoughtful way.
Gaël Duez: Just about parallelization: as it requires more equipment, how do you see it as a potential benefit from a sustainability angle?
Iuna Tsyrulneva: I look at this problem from the data management perspective. Sometimes, in data centers, cores that are supposed to be used for certain tasks sit idle for a while because there is no computation happening over there. And instead of keeping them idle, it's better to run another process in parallel, so that it can be run faster and in a more efficient manner.
Gaël Duez: So, it's really using wasted resources rather than trying to achieve a faster training time?
Iuna Tsyrulneva: In a way. Yes, yes.
Gaël Duez: This parallelization technique, for you, is it mostly during the training of the model or is it also something to consider when the model is being used?
Iuna Tsyrulneva: Good point. I believe it's easier to control when the model is being trained. However, I think in certain instances it can be used during inference, but it requires a little bit of learning, and I haven't heard of instances where it's used right now.
Gaël Duez: Hmm. Me neither, hence my question. Sorry, I interrupted you. So, you also wanted to mention other hardware optimization techniques.
Iuna Tsyrulneva: Yes. And finally, we are moving to edge computing. This is now a key strategy in this context, as the idea is to perform computation where the data is collected or used, which cuts the time to transmit the data to a data center or the cloud. This is currently what is being implemented on our smartphones, right?
The data is being collected at the location where we use it, and it's being processed right on our smartphone. It also adds to the security of the data and helps preserve privacy.
Gaël Duez: So hardware is quite pivotal here. I've heard that there is a tendency today to go for GPUs all the time, where actually, for some models, a good old CPU, which is sometimes in an idle state and maybe underused, could work as well. Let me rephrase, because I've got a double negative and that's not really understandable.
So I've heard that sometimes a good old CPU can do the job as well. Do we always need to chase the latest GPU, which happens to be, most of the time, the most energy efficient, yet very energy consuming, rather than using a CPU sitting in an idle state? It seems to be a trade-off between CPU and GPU energy efficiency and, hey, we already got some equipment, let's use it.
Iuna Tsyrulneva: Yes. Currently, I believe, a lot of machine learning scientists are working on GPUs. However, if you're not working on image processing, then maybe using GPUs is not required. I also know that a lot of engineers can use Tensor Processing Units when they train neural networks and develop their AI models.
But TPUs are proprietary hardware owned by Google, so only if you work with Google notebooks can you actually choose to work on TPUs. This also requires certain agreements, and certain willpower and action from the management of the company you're working at, so they can decide what kind of processing units you can access and work on.
Gaël Duez: Thanks a lot for the clarifications. We're done with hardware efficiency, but you also mentioned a third point, which is data center efficiency: after algorithm optimization and hardware optimization, there is also data center optimization.
Am I right?
Iuna Tsyrulneva: Yes, correct. So let's talk a little bit about data center optimization. We all know that with every new AI innovation, energy demand increases, and this always results in the Jevons paradox, where improvements in energy efficiency lead to greater overall energy consumption, not less.
This paradox actually highlights the need for energy-efficient data centers that are powered mostly by alternative energy sources. The carbon footprint of a data center is directly proportional to its efficiency and to the carbon intensity of its location. With regards to this, you may want to think about the carbon footprint of a data center located, for example, in Norway, where the grid carbon intensity would be less than 20 g of CO2 per kilowatt hour,
compared to Australia, or several US states, where it would be over 800 grams of CO2 per kilowatt hour. To address this, we can optimize, first of all, our data center sizing. We can increase demand flexibility and load shifting, depending on the needs of the population the data centers serve.
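The location effect quantified here can be checked with a back-of-envelope calculation: operational carbon is simply energy used times grid carbon intensity. The 10 MWh training budget in the sketch below is an illustrative assumption; the gram-per-kilowatt-hour intensities are the figures cited for Norway versus Australia and some US states.

```python
# Back-of-envelope: operational carbon = energy used x grid carbon intensity.
# The 10 MWh training budget is an illustrative assumption; the intensities
# are the cited figures for Norway vs Australia / some US states.

GRID_INTENSITY_G_PER_KWH = {"norway": 20, "australia": 800}

def training_emissions_kg(energy_kwh, region):
    """CO2 emissions in kilograms for a given energy budget and grid."""
    return energy_kwh * GRID_INTENSITY_G_PER_KWH[region] / 1000

energy_kwh = 10_000                                          # assumed 10 MWh run
norway = training_emissions_kg(energy_kwh, "norway")         # 200 kg CO2
australia = training_emissions_kg(energy_kwh, "australia")   # 8000 kg CO2
# the same job emits 40x more CO2 on the carbon-intensive grid
```

The same arithmetic is what carbon-aware schedulers use when deciding where, and when, to place a workload.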
It would definitely be beneficial to move to renewable backup generators and, as we do so, to renewable energy. Some researchers also speak about the introduction of modular infrastructures, which enable scalability, flexibility, and the mobility of certain blocks, making it easy to reassemble the data center at any location.
And of course, the use of sustainable materials and techniques is also highly encouraged for data centers. Researchers have also developed algorithmic frameworks that dynamically manage server loads, pool systems, and optimize resource allocation according to the tasks that are submitted to the data center.
So this would be the main technical approaches to data center optimization. And actually there is one of my favorite examples, which highlights mostly creative approach of how you can use data centers for good.
There was this thing that happened in New York: there was a data center that was underground, and there was a lot of heat emitted by the functioning data center, right?
What one of the entrepreneurs did was construct a sauna center on top of the data center, so they used the heat produced by the data center to warm up their swimming pools and their spa facilities. I love the creative approach, and I guess there is a lot being done in this area as well.
Gaël Duez: Heat reuse is getting traction in cold countries, but that's the first time I heard about a sauna being powered by a data center.
It's quite fun!
Now, Iuna, you mentioned right from the start the Jevons paradox, and this is music to my ears. What I didn't fully understand is the connection between the Jevons paradox and all these techniques regarding data center optimization. As far as I understood, the Jevons paradox is more about how we cap the demand, because otherwise it will bounce back over and over and over.
Could you maybe highlight this?
Iuna Tsyrulneva: Yes. I believe in this context, when we speak about the Jevons paradox, we mentioned that with more and more AI models being introduced, as sustainable as we make them, the overall energy consumption will increase over time, because more models come to the market, right?
And we use them more and more. So what could help is actually optimizing the data centers. And we make them green by introducing the right techniques.
Gaël Duez: So it's like Jevons paradox will happen and to tackle this, it's not about making it not happen, but making sure that this extra consumption of energy is managed as nicely as possible by data centers and, and via data centers optimization?
Okay. So algorithm optimization, hardware optimization, and data center optimization. This is your three pillars. Is there anything else that we should consider to make AI more sustainable?
Iuna Tsyrulneva: I think we have already covered the AI developers, then we covered the data centers. And finally, maybe we can talk a little bit more about the users, the final users, and how we use the AI models in a more sustainable way, right?
Because a lot of things also depend on us. So maybe, what can be done here from the perspective of a regular person: we can just limit the number of times an algorithm is run; this is undoubtedly the easiest way to reduce energy consumption. Maybe we don't need to go to AI every time we seek some information.
Another possible strategy would be to use less exhaustive searches. Maybe we don't need that detailed information all the time. For example, we do not need to use the deep search offered by LLMs if all I need is just a list of the landmarks in Singapore, right? This can be easily found via the net.
Then one of my favorite points: a lot of people say thank you to any kind of response offered by an LLM. I understand they want to be polite, you know, and, just in case the machines rise, the AI will always remember that this specific person was respectful towards the machine.
So my suggestion here would be just to say something like, okay, ‘for all your future responses, I feel grateful, so I'll not thank you anymore, but just please accept this right now’.
Also, something that personally boggles me is the AI-powered search offered by Google. For example, every time I'm searching for something like quantum computing, what I want to see is just, like, a Wikipedia page, right?
I want general information, but I do not need the AI overview, which basically just cites or references a Wikipedia page. There should be some option to opt out of AI overviews in search engines.
Gaël Duez: And that's a very important point, because we don't have that much choice today. I mean, there's been a study in France about how embedded AI is today in every tech product or digital product.
And that opt-out option is not that much on the table. It's really like pushing AI through every product, whichever they are, and it doesn't really matter if we need them. How do you see a change in the trend, or is it even possible? Because we tend to put a lot of guilt on the end users, and they do understand that there are some issues with using AI for everything.
I mean, when people are even slightly aware of the energy challenges and the climate crisis, et cetera, et cetera. But we don't have that much of a choice.
Iuna Tsyrulneva: I still believe that we shouldn't give up, because we do have a choice. There are other search engines that we can use, for example Ecosia. As far as I understand, it doesn't provide any AI overview, right?
I'm not sure about Bing; I think it does that as well. But we can always go deeper and make conscious choices about the products that we use. For example, even if we need to use an LLM for a simple search, ChatGPT, let's assume, we have a choice of the model used to reply to our prompt. If you don't need that detailed a response, if you're looking for something simple, or maybe if you're just looking for a conversation partner to spend time with, you can just unselect the button that offers you deep search. This is a simple way we can reduce the energy consumption of AI models.
Gaël Duez: But it's not always possible, but at least when we have the choice, we should opt out when it doesn't make sense, obviously.
Now, because time is running out, there is a fun fact that I would like to share with you, and it connects with something you mentioned when we were preparing the episode. Pascal Joly, another fan of the Green IO show, attended the latest San Francisco Climate Week, he lives there, and he told me that 95% of the talks were about how AI will solve the climate crisis, and less than 5% were about its environmental impact.
And I guess this is a good starting point to discuss the difference that is very keen to your heart between sustainable AI and AI for sustainability. So could you elaborate a bit on it?
Iuna Tsyrulneva: So maybe something that we can just conceptualize here is that green or sustainable AI mostly stands for algorithms that maximize energy efficiency, right?
And AI for sustainability refers to the use of AI to respond to different environmental challenges.
So now we are going to talk a little bit more about AI for sustainability. Researchers analyzed the impact of AI on accomplishing the United Nations SDGs, and they found that AI, through technological enhancement, has the potential to enable 79% of the SDG targets.
The most positive effect, around 91%, was observed for environment-related goals. At the same time, 35% of these SDG targets, more than a third, may be subject to negative outcomes from AI applications. So despite its heavy energy consumption, AI enables breakthroughs in various areas across the climate and energy sectors.
And it accelerates climate research at scale, right? Recently, Google DeepMind discovered around 2 million crystal structures, including 380,000 stable materials that could power batteries, computers, and solar panels. Now, you might think, so what? But this discovery is actually equivalent to about 800 years' worth of knowledge.
It demonstrates the scale and the level of accuracy of the predictions, so you can save a lot of person-hours on this. Then the International Energy Agency, in its report, outlined industries where AI is already being used in ways that can help limit emissions, including detecting methane leaks in oil and gas infrastructure, which contribute to the greenhouse effect.
AI makes plants and manufacturing facilities more efficient, and it reduces energy consumption in buildings. We work a lot with AI for sustainability at the Earth Observatory of Singapore. As part of the climate transformation program, we apply AI to forecast atmospheric changes, identify trends and climate variability, and optimize mitigation strategies.
Our scientists use light-based damage mapping of disasters, which enabled the detection of anomalous surface changes based on the difference between forecast and observed co-event coherence. They also examined the processes underlying biodiversity change under climate change in Singapore and Southeast Asia.
We also apply machine learning to analyze patterns in vegetation health, deforestation rates, urban expansion, water resource availability, and other key indicators of environmental sustainability and of unsustainable practices. Moreover, using AI we address climate risks across various domains, including agricultural insurance, flood risk modeling, catastrophe pricing, and mortality risk. So you can see that AI can also be used in applications such as climate and finance, or climate and health. These are some examples of how AI can be deployed to address sustainability issues.
Gaël Duez: And there is a lot to unpack in what you said, and I've got several questions.
My first one might be, and let's put aside the Jevons paradox, we already talked about it: there tends to be quite a huge discrepancy between what was forecast 10 or 20 years ago about IT (and you can now swap 'IT' for 'AI'), how it would improve efficiency, and more than efficiency, sorry, also increase sustainability and help us reach our sustainability goals, and what has actually been delivered.
So my question is, have you already spotted examples where it works, rather than reports saying it might work? It might make a commercial fleet more efficient, it might make transportation more efficient, but eventually, with the Jevons paradox, we don't really reach that overall level of efficiency.
So have you already spotted examples of AI where this is not true, where the efficiency gains are substantial and are not offset by the Jevons paradox?
Iuna Tsyrulneva: This is a very interesting question, because as a scientist, you know, I would prefer to see a research paper that compares the achievements in sustainability with the consequences of the Jevons paradox. For example, imagine that you wake up to a notification on your smartphone telling you that in the next 48 hours a cyclone will approach the Southeast Asian shore, for example Macau. So you have 48 hours to evacuate, to shelter somewhere, or to protect your assets. Currently, we do not have the capability to predict the development of extreme events that far ahead.
But eventually AI has the chance to build good forecasts for these events. And that can help save lives and bring a lot of economic benefits to the society, to the communities that live along the shore.
Gaël Duez: And that's a very interesting example, because I'd like a clarification from you: which kind of AI are we talking about?
Because in the general public's mindset, I would say, AI equals ChatGPT equals generative AI. And the fun fact is, so far, but please do correct me if I'm wrong, most of the practical examples of AI being used for sustainability are not so much generative AI but good old AI, I would say: machine learning.
Obviously it benefits from all these new techniques, all these improvements in existing models or new models, but it's not that much about generative AI.
And of course it also benefits from all the hardware improvements we've seen over the past years. But I'm struggling to find examples where generative AI is that much of a use, except maybe the example you provided with the crystals, where you said several hundred thousand new crystals were found, some of which might be used in battery development.
But am I right, or completely wrong, to state that most of the AI used for good from a sustainability angle is mostly non-generative AI?
Iuna Tsyrulneva: When you ask what we call that specific AI, if it's not generative AI, my approach would be to consider it machine learning with decision-making properties, right?
So this is what we say: we use statistical models to predict or analyze the development of extreme weather events, and then the decision-making power of AI actually decides whether it should inform the community on the shore. How big is the risk? What is the certainty of this event approaching the seashore?
So that would be my response to your first question. Can you briefly repeat the second one?
Gaël Duez: Well, the second one is: how much of generative AI is actually being used? Or is it just good old machine learning AI, the things we were doing before this whole generative approach? It's a bit like large language models versus good old machine learning.
Iuna Tsyrulneva: Currently, generative AI is just two or three years old. A lot of agentic AI tools have been developed to be implemented for climate science, but so far they're not that widespread. What we use instead is what we sometimes call 'hybrid AI'.
This is the combination of physics-informed models with big data. This is how we tap into the strength of fundamental science, and then we use our access to big data collected from various sensors across the world.
And we make predictions of certain events based on these models. So far, generative AI is mostly used for detecting misinformation in climate science and in climate science communication.
Gaël Duez: Which is a fun fact, because it helps both sides: it's also widely used to spread misinformation, with fake videos and generated climate-denier posts and that kind of stuff. So it's a bit like a neutral weapons provider, I would say.
Iuna Tsyrulneva: Yeah, this is true, so far. Research has found that the well-known LLMs, GPT or Anthropic's Claude, provide correct information about climate science. So far.
Gaël Duez: So far, yes. Thanks a lot Iuna for all this clarification.
Now, you've already mentioned quite a lot of reports and sources. If people want to dive a bit deeper into these topics, whether sustainable AI or AI for sustainability, would you have some sources, books, videos, or whatnot to share with them?
Iuna Tsyrulneva: Absolutely. Two of my favorite materials have been produced by the Tony Blair Institute for Global Change.
One of them is called Greening AI: A Policy Agenda for the Artificial Intelligence and Energy Revolutions, and in it you can find a lot of the global and Singapore regulations put in place to govern sustainable AI.
Another one, also produced by the same institute, is called Responsible Progress, Sustainable AI: The Benefits of Greening Our Digital Future.
Gaël Duez: As usual, I will share all the links to the resources you've mentioned, including these last two.
And my final closing question, and I'm sure you're pretty well aware of it: do you have any positive piece of news to share about sustainability, and maybe AI for sustainability or sustainable AI, to close the podcast and finish on a positive note?
Iuna Tsyrulneva: You know, I've tried to stay optimistic since the very beginning, right?
So when we spoke about the energy consumption of AI, we mentioned that it's not that bad. Still not that bad, right?
I understand that it's a snowball and it's moving very fast, but so far we can still do something, and that gives us some optimism.
Basically, as a researcher, I hold an idealistic and tech-optimistic view of the future of AI's role in sustainability. And as long as I'm working on this, I'm invested in making good use of AI to achieve the sustainable development goals. And the good thing is that over the past years I've been talking to various corporate engineers who share similar optimistic views with me.
I see that the issue of energy-hungry AI is usually on the table during these discussions. It is recognized and acknowledged by practitioners. We also see that some governments are fast-tracking the adoption of energy-efficient data centers and creating roadmaps for introducing more sustainable AI processes.
So I assume that the awareness, understanding, and knowledge are already there, and this gives me hope. What we need to do is nudge the C-suite to adopt sustainable practices or sustainable algorithms, either through financially profitable approaches, or maybe by changing entire sustainability frameworks, maybe adjusting KPIs, for example introducing an energy index for AI-heavy industries, or maybe through regulations imposed by governments, perhaps a harsher approach by governments to push industries to adopt sustainable practices.
Gaël Duez: Yeah, thanks Iuna, and thanks for remaining optimistic. And actually, if you'll indulge me bouncing back on what you said (I know these are supposed to be closing words), we didn't mention much about financial incentives, and you just did. From a C-suite perspective, the savings from energy-efficient models are not always enough to offset the time or opportunity cost.
Basically, it's like: let's launch an AI product, whatever it costs, whatever energy it drains or however it impacts our sustainability KPIs, because we need to be first to market. There is such big hype around AI at the moment that we need to have something labeled 'AI blah, blah, blah'.
And sometimes it's not useful. Sometimes it's not really well done. How would you tackle this problem? Is it only via regulations and enforcing more sustainability goals, or do you believe there is also a discussion that can happen with executives about this trade-off, long term versus short term, versus those saying 'let's do AI, let's do AI'?
Iuna Tsyrulneva: Thank you for this question. I don't really want to speak over the sustainable business leaders and experts in that area, because I don't work much with businesses and industries and I don't know how they implement everything. But here is my take, because we work at the intersection of research, industry, and governmental agencies, right?
We provide the knowledge that is supposed to be adopted by industries and implemented for better use. One of the approaches is to involve researchers in the discussions much earlier, whether as in-house researchers and engineers or as university researchers, because we can afford to be more idealistic and to think about sustainability problems at a higher scale. We can provide this knowledge to the decision makers. We can teach them or advise them on how to develop sustainable algorithms.
That can be one approach. The second approach, as I mentioned, is what happens during the acquisition of a business, the acquisition of a startup by a bigger company, right? The code is always reviewed. There could be some regulations, some requirements for that code to be more sustainable, a little bit more sustainable.
That can also encourage new startups to develop more energy-efficient and sustainable algorithms, if they want to have a good exit strategy. Something along those lines.
Gaël Duez: Thanks a lot for the answer, and thanks a lot for joining Green IO today, Iuna. It was very packed, almost like a full report, so very interesting. And I'm truly looking forward to seeing you at Green IO Singapore 2026; it will be in April.
Iuna Tsyrulneva: Thank you. I'll be happy to participate there.
Gaël Duez: Thank you for joining the show.
Iuna Tsyrulneva: Thank you for having me, Gaël.