What’s at stake in public and private sector climate modelling efforts? Listen to Jason Mitchell discuss with Professor Tapio Schneider, CalTech, how computational science is improving our understanding of tail risks and extreme events; why small-scale processes are critical for building more accurate climate models; and what the finance sector can do to better integrate climate information into investment decision making.
Recording date: 10 October 2025
Professor Tapio Schneider
Tapio Schneider is Theodore Y. Wu Professor of Environmental Science and Engineering at California Institute of Technology. Tapio specialises in climate research and mentoring the next generation of scientists to become leaders in this critical field. His research has contributed to understanding how rainfall extremes change with global warming, how changes in cloud cover can affect climate stability, and how atmospheric dynamics on planetary bodies such as Earth, Jupiter, and Titan operate. He leads the Climate Modelling Alliance (CliMA), a multi-institutional consortium of scientists, applied mathematicians, and software engineers working to develop the first climate model that learns directly from diverse data. This model aims to provide accurate climate predictions to assist in mitigating and adapting to future climate changes.
Episode Transcript
Note: This transcription was generated using a combination of speech recognition software and human transcribers and may contain errors. As a part of this process, this transcript has also been edited for clarity.
Jason Mitchell:
I'm Jason Mitchell, head of Responsible Investment Research at Man Group. You're listening to A Sustainable Future, a podcast about what we're doing today to build a more sustainable world tomorrow. Hi, everyone. Welcome back to the podcast and I hope everyone is staying well. So I've got to start out with an anecdote. Well, a characterization really, as told to me by a very well-known climate scientist. It goes like this, they said, "Jason, if you wanted to put together a dream team of climate scientists around the world, you wouldn't just include Tapio Schneider, he'd be on your starting lineup." Now, needless to say, I was immediately intrigued. It led to this episode, which pulls back the curtain on academic and public efforts to more accurately model climate change. And make no mistake, the stakes are high here. The nuances in these models could ultimately determine how we allocate or misallocate capital towards climate adaptation.
That's a really important description because we talk about how the prediction problem in climate has different parts to it. Weather forecasting is an in-distribution problem, while climate modelling is an out-of-distribution problem. In fact, it reminds me of the old saying, the map is not the territory, or as Tapio says, it's about the distinction between modelling states and modelling statistics. It's great to have Professor Tapio Schneider on the podcast. Tapio is one of the leading climate scientists doing work on the fundamental side, like his recent paper on the AMOC weakening, as well as efforts to innovate climate modelling. We talk about how computational science is improving our understanding of tail risks and extreme events; why small-scale processes are critical for building more accurate climate models; and what the finance sector can do to better integrate climate information into investment decision-making.
Tapio is the Theodore Y. Wu Professor of Environmental Science and Engineering at the California Institute of Technology. He specialises in climate research, which has contributed to understanding how rainfall extremes change with global warming, how changes in cloud cover can affect climate stability, and how atmospheric dynamics on planetary bodies such as Earth, Jupiter, and Titan operate. He leads the Climate Modelling Alliance, or CliMA, a multi-institutional consortium of scientists, applied mathematicians, and software engineers working to develop the first climate model that learns directly from diverse data. Welcome to the podcast, Professor Tapio Schneider. It's great to have you here and thank you for taking the time today.
Tapio Schneider:
Thank you for having me and I'm looking forward to our conversation.
Jason Mitchell:
Absolutely. So Tapio, let's start out with some scene setting first. As the lead scientist behind the Climate Modelling Alliance, what really is at stake here? What is coming out of the climate modelling endeavour? And I guess how does it compare to other more traditional modelling efforts? Let's start there.
Tapio Schneider:
Yeah, I mean, climate modelling has existed since at least the 1960s and has correctly predicted the mean rate of warming and so forth that we have seen. But the core challenge in climate science has now shifted from whether the climate is changing to predicting how fast it's changing and where the impacts will be most severe. Current climate models are good at capturing large-scale trends, but they have significant uncertainties and shortcomings in predicting the pace and regional pattern of change.
And so this uncertainty acts like a tax on society. It forces decision makers to either over-invest in the wrong adaptations or under-invest and leave communities vulnerable. CliMA's mission is to substantially reduce this uncertainty, cut it in half or so, by building a new generation of climate models. The goal is to provide the precise and actionable predictions needed to mitigate the avoidable impacts and adapt to those that are not. What's slightly different about what we do relative to traditional climate models is, at the core, to use data more extensively. And by data, first and foremost, I mean the Earth observations we have. It's something like 50 terabytes of data collected every day by NASA alone, for example, and they have been somewhat underused in climate modelling. Our goal is to make extensive use of those data to achieve a new level of accuracy in climate modelling and prediction.
Jason Mitchell:
Yeah. It seems like you've put together a really strong team that writes climate models from scratch and leverages machine learning and AI, which I read as an attempt to drive a real step change in the process of climate modelling.
Tapio Schneider:
That's the goal, yeah. I think we are fortunate to be, in some ways, a university-based team, so we have the best and the brightest of the next generation of scientists, students coming into the field who want to make a difference here and to contribute. It's a relatively young and dynamic team, and I think nimble compared to operational climate modelling centres. It gives us the ability to move more quickly perhaps than established centres.
Jason Mitchell:
If you took a step back, how do you think about the broader climate model landscape and particularly how the private sector, I'm thinking of efforts like Google's GenCast and NVIDIA's Earth-2 platforms, I mean, how can they coexist with academic and governmental efforts? In other words, do you see increasing permeability between the private sector and academic fronts and what are the opportunities there?
Tapio Schneider:
Absolutely, there is increasing permeability and I think it's a tremendous opportunity. But first, the public and private efforts are tackling different parts of the prediction problem right now. They're complementary. For example, Google's GenCast or NVIDIA's efforts are focused on data-driven weather forecasting: take the initial state of the atmosphere right now and make predictions days to a few weeks out. That's primarily an in-distribution problem. You learn patterns from the plethora of weather data we have and use them to predict the short-term future. The climate problem is very different. It's not about taking the state of the atmosphere right now and saying what it will be in two weeks. We obviously don't try to predict the weather 20 or 30 or 40 years out; instead, we try to predict statistics, aggregates of weather, decades out.
And inherently, this is an out-of-distribution problem, meaning we need to predict, for example, how clouds will behave in an atmosphere with more CO2 in it. We can't learn that by looking at the data we have right now, because the CO2 variations the current data exhibit are relatively small and don't reveal the essential dynamics we need to predict. So right now, I think these efforts are fairly complementary, with the private sector tending to be more focused on weather. NVIDIA, they have their weather forecasting efforts and they're also building a platform, what they call Earth-2, that's maybe best understood as a toolkit that provides AI models and computational infrastructure to build applications, for example, for high-resolution downscaling of coarse-resolution climate predictions. But again, it's not focused on what CliMA, and I would say the public sector broadly, is doing, which is focusing on foundational, open-source, long-term climate projections. That's really the key difference.
And of course, there's a lot of permeability between the two and I think a lot of learning from each other. I would say it's a very collaborative environment. We all know each other, we all interact, we learn from each other, and we try to take what we can from the weather forecasting enterprises and apply it to climate, to the extent that's possible.
Jason Mitchell:
Yeah, I want to go back and really investigate that word uncertainty that you started with. Climate models generally agree that global temperatures will continue to rise in response to humanity's greenhouse gas emissions, yet there are obviously significant uncertainties that remain about the exact pace and magnitude of that change. As climate models have grown increasingly sophisticated, how much closer are we to answering the practical questions that matter most, particularly when it comes to climate adaptation planning, for instance? Can you give a sense of what all this sophistication means for more precisely forecasting the intensity of heat waves over the next decade, or extreme precipitation?
Tapio Schneider:
Right. So the first thing to appreciate is that what you want to know is what you mentioned, the probability of a local extreme heat wave or of local extreme precipitation. But in order to make that local risk assessment, you first need to get the large-scale, global aspects right, because ultimately, for example, the probability of extreme precipitation essentially scales with global mean temperature. So the first thing you have to get right for any such prediction are things like the global mean temperature. Of course, the global mean temperature is not in itself what anyone directly experiences. It's not the relevant prediction target, but any relevant prediction target depends on it. And so the uncertainties actually manifest at large scales, although they arise at small scales.
What I mean by that is this: for example, think about clouds. The dynamics of clouds are governed by small scales; take low clouds over tropical oceans off the coast of California. The dynamical scales are of order metres, maybe tens of metres, and the standard climate model resolution right now is maybe 100 kilometres. We can go towards 10 kilometres quite routinely, but that's still orders of magnitude away from these cloud scales. These small-scale dynamics control cloud cover, the marine layer, say, off the coast of California, and that exerts very large-scale influences. These low clouds over tropical oceans reflect a lot of sunlight and thereby cool the climate. So even if you want to predict, say, the probability of extreme precipitation in New York City, to take one example, you don't get around worrying about low-lying stratus clouds off the coast of California, because they're fundamentally important for Earth's energy balance, which controls the global temperature, and the global temperature ultimately affects rainfall intensity anywhere, say in New York City.
So the uncertainties enter at small scales but then percolate up to large scales. And the fundamental challenge in climate modelling and prediction is to represent these small scales well so that we get the large-scale features, global mean temperature, circulation and the like, right. Once we get that, we can start worrying about the impacts on regional scales. You were asking how the situation has changed over a timescale of decades. The biggest uncertainty remains clouds. That has been known for many decades now, 30 years at least, and I would say even longer. Clouds were not predicted in early climate models, but Suki Manabe, for example, who ultimately won the Nobel Prize for climate modelling, was very well aware of clouds being a central source of uncertainty already in the early '90s, and probably before; I know from the early '90s because that's when I met him.
The situation is actually kind of interesting. If you think about Suki Manabe again, he built the first climate models in the 1960s and '70s. Right now, the computing power we have available is something like a factor of 10 to the 12th more than what he had available. You would think that if you scale something by 10 to the 12th, these problems should already be solved. The interesting thing is that they're not, and that's in some ways the crux of fluid mechanics: it scales very unfavourably with the computational effort you invest. So we still cannot resolve clouds, and that remains the central source of uncertainty. In the CliMA project, what we're trying to do is build a physical skeleton, conservation laws and the like, so that we achieve this out-of-distribution generalisation, but then use data-driven methods, AI, ML and the like, to model the small scales, turbulence and clouds for example, whose dynamics we have good reason to believe are fairly universal.
Jason Mitchell:
Yeah, it's so interesting. So you've made a couple of points, the first being that current climate models struggle with these unresolved small-scale processes like cloud dynamics. On the other end, there's this idea that these kinds of small-scale phenomena generally go ignored or overlooked in global climate models. I'm wondering how their misrepresentation could end up affecting our understanding of future climate scenarios. What are we losing by not factoring in the small-scale processes?
Tapio Schneider:
Climate models do attempt to take them into account; it's just done in fairly empirical, ad hoc ways that are not always very well informed by data. What are you missing by doing that? Let me give you one example. The equilibrium climate sensitivity is a useful yardstick for how sensitive a climate model is to increases in carbon dioxide concentration: it measures the long-term equilibrium warming after doubling CO2 concentrations. And the canonical uncertainty range given for the equilibrium climate sensitivity is somewhere between two and a half and four and a half or five degrees or so of warming you expect for a doubling. It's a very wide range, and that range has stayed that wide for decades now. It has not substantially shrunk.
You could say models have become more complex. So one, I think not so interesting, stance might be to say, "Well, it could have gotten worse, given that models got more complex. The uncertainties could have increased and they didn't." But still, it's an uncomfortably large uncertainty. It makes a big difference whether we get two degrees of warming under a doubling of CO2 or five; it's a huge difference in terms of climate impacts. And more than half of that spread in the equilibrium climate sensitivity comes from the representation of clouds in climate models and how those representations differ. So that's where models diverge; that's where uncertainties enter on large scales from uncertainties at small scales.
Jason Mitchell:
Interesting. I want to take a small tangent here because I'm super interested in the motivation behind CliMA and your work. Traditionally speaking, academics tend to get rewarded for publishing papers, but the development of more comprehensive models doesn't necessarily provide that. So I'm wondering, how do you end up recruiting these people, these applied mathematicians and scientists, to basically hunker down for years, I'm guessing, on this project, when shaking up the climate modelling space doesn't necessarily provide the same academic prestige or tenure-track trajectory that traditional academics follow?
Tapio Schneider:
Yeah, I mean, I think the core for all of us is a focus on the mission. I think we are all united in what we want to achieve, which is climate predictions that are more actionable, more useful for making decisions, meaning they have reduced uncertainties and quantified uncertainties. So I think that's the first and most important piece: there's a clear vision of what we want to achieve, and that unites us all. You're putting your finger on a central point, though. What we do is, in my view, the perfect marriage of fundamental science and applied science.
There's, of course, a fairly substantial engineering component to building climate models and making them run fast on GPUs and the like, and we do all that. But to really achieve a step change in the quality of models and predictions, there are scientific advances that are necessary. So there is both. There's fundamental science, and what makes this project really fun and interesting to me, and I think to most of us working on it, is that if you make fundamental scientific progress, it immediately translates into, if you wish, a product: a model that you can use for predictions.
Because there is a large scientific component, a foundational science component, we do publish a lot of papers. In the CliMA project, I'm not quite sure what the count is, but we have well over 100 publications, so there's no lack of papers. But where it's not as well aligned with traditional academic incentives, as you mentioned, is perhaps that our success depends on success as a team. This is not a project any one individual, or two people, could succeed in alone; it truly is team science. And the traditional academic structure tends to be... It goes back to romantic ideas, really, of the lone genius in some ways. So it tends to be centred around individuals. And that's not how we work. We work closely together as a team, and so when it comes to writing papers, there is an asymmetry in the academic reward structure in our field: the first author gets a lot more credit than others. So that becomes contentious: who should be first author when many people contributed?
In some ways, the way we deal with it is by writing enough papers that everyone can be first author some of the time, but there is some misalignment there that is not so easy to deal with. At the same time, I should say, this type of work, where we iterate quickly and work closely together as a team, is a lot of fun, and I think everyone really enjoys it more than sitting by yourself in some quiet chamber and figuring stuff out. And I would hope that over time, universities evolve to reward this type of work more.
Jason Mitchell:
Yeah. Well, no one can fault you for not being prolific. But when it comes to climate science, popular discourse seems to veer towards extremes. In other words, papers arguing that estimates are worsening seem to be disproportionately rewarded, and I wonder if that ends up crowding out more constructive discourse. For instance, you've got a recent paper in Nature Geoscience titled Observational constraints imply limited future Atlantic meridional overturning circulation weakening, which argues that the AMOC isn't worsening as much as feared relative to some more extreme projections from other climate models. I'm really curious: what kind of feedback have you received? I personally feel like it's a pretty provocative paper simply by the nature of its more moderate view.
Tapio Schneider:
Yeah, that paper was led by Dave Bonan, who's now at the University of Washington, and I think the key to the progress in that paper was to advance the understanding of, in this case, the Atlantic meridional overturning circulation, particularly how it relates to the stratification, the temperature structure of the ocean; then look at climate models, how the temperature structure is represented in the models and how that impacts the AMOC in the models; and then look at real data, at observations: what's the real temperature structure?
And what you find is that the models that tend to project a strong weakening of the AMOC also tend to have a stronger AMOC in the present climate than observed, and they have a vertical structure of the AMOC that's less consistent with observations than the models that project a weaker, more dampened weakening. From which we concluded: yes, the AMOC is likely to weaken, but not as much as some extreme scenarios suggest. The more extreme scenarios are less consistent with observations and less consistent with the physics that links the observations and the stratification to the AMOC itself.
I think what this paper shows well is that you've got to understand these predictions on some physical basis in the end, otherwise, it's hard to trust them. Right? And the broader implication is you need models that you can understand, that you can scrutinise to figure out why models produce a certain response to climate change and others produce a different response.
Jason Mitchell:
Really interesting. To follow on to this, how do you think about the thesis behind your AMOC paper relative to a recent paper in environmental research titled Shutdown of Northern Atlantic overturning after 2100 following deep mixing collapse in CMIP6 projections? The paper essentially argues that a collapse of the AMOC is now likely in 70% of high-emission scenarios and that a tipping point in ocean convection could occur as early as the 2040s. So it's curious to me that a lot of people I know in sustainability circles have glommed onto this alarmist view much more than your paper, and I'm curious how we should weigh the different perspectives around the risk of an AMOC collapse.
Tapio Schneider:
Maybe a more general point: I think it's important to appreciate that climate change is progressing. It's getting warmer, and that has very clear impacts, for example, on the intensity of extreme precipitation. Irrespective of more extreme tipping points and the like, that is something we have to worry about. The discourse around more nonlinear effects, like the AMOC shutting down, perhaps irreversibly, sometimes distracts a bit, I think, from the more immediate problem right in front of us: Earth is warming and we have to adapt to it and deal with it.
If you just look at models, they're useful scientific tools, but of course, they are not replicas of reality. So an ocean model by no means is replicating the details of the ocean adequately. It tends to be, for example, much too viscous, much too gooey relative to real water for numerical reasons that are unavoidable. So you can get a fairly wide variety of responses to, for example, increased CO2 in models. I want to come back to the earlier point. I think it's important that you have a physical basis for understanding for scrutinising these models and then that you can go back to observations and see, well, the model is producing this response for that reason, are the data we have actually consistent with it? And the paper that Dave Bonan led was saying, well, the data are not consistent with some of the more extreme scenarios. But still, AMOC will weaken. I think that's not in question.
Jason Mitchell:
I want to stick on tipping points because you've studied cloud feedbacks that could lead to a climate transition or tipping point, namely the breakup of stratocumulus decks. These topics are just absolutely fascinating and clearly consequential for understanding how stable our climate is. But what do you think of the role of tipping points in public discourse? What's your reaction to that Robert Kopp et al. piece, the article I think came out last year, with the perspective that tipping points confuse and can distract from urgent climate action?
Tapio Schneider:
Yeah, I think Bob Kopp and colleagues were saying exactly what I say, that this can be distracting. It oversimplifies phenomena and it can distract from the more immediate issues about climate change that we are aware of. It can also lead to public confusion between actual physical thresholds, if there were some threshold beyond which, say, the Antarctic ice sheet would become unstable and collapse, and thresholds like 1.5 or 2 degrees of warming above pre-industrial levels, which are fairly arbitrary political targets, not in any sense tipping points in the climate system that we are aware of. It's important not to get those two confused, and the discourse about tipping points can get them confused. I agree with that criticism in the paper by Bob Kopp and his colleagues.
That being said, in the long run, beyond the next few decades that are our immediate planning horizon, it's of course important to understand what the climate system might do that's strongly nonlinear. And I think it's important to study such tipping points, be they physical tipping points, instabilities in ice sheets or the stratocumulus decks that you mentioned, which we studied a few years ago, or tipping points in ecosystems, where life for certain types of ecosystems where they are right now becomes unsustainable. It's important to study those and elucidate as much as we can what the potential nonlinear consequences of global warming might be. They tend to be fairly far out. So if you think about the planning and action horizon over the next few decades, most of these things are not crucial, including the question about the instability of stratocumulus decks that we studied.
It was an interesting physical question: can cloud decks become unstable? And there's clear evidence and clear physical theory that says yes, they can become unstable, they can break up, and you can get a lot of extra warming. In my mind, the main practical implication was not so much what might happen in the future but what could have happened in the past. We've had extremely warm climates in Earth's past. For example, during the Eocene, roughly 50 million years ago, Earth was 10 or 12 degrees warmer than it is now, and it's not entirely clear why. CO2 concentrations were higher, but they were not as high as you might think they should be if it was that warm. So tipping point instabilities, in that case in cloud decks, might be one explanation of how we might have had much warmer climates in the past, without implying that that's necessarily our future.
Jason Mitchell:
How do you think about the current paradigm and culture of climate science through the lens of CliMA? I'm thinking, for example, about the relationship between IPCC model development cycles and research, the role of funding agencies, or the relationship between big science, which is generally IPCC-driven and internationally coordinated, and little science, which is much more decentralised. I guess what I'm asking is: is the focus, and are the resources, dedicated to areas like new IPCC models ultimately productive, or are they producing diminishing returns?
Tapio Schneider:
I think the IPCC as an institution and the efforts organised around it, like what you mentioned, CMIP, the Coupled Model Intercomparison Project, are important for the field and very good. I think it's important for the field of climate science to take stock and assess what we know and what we don't know as best we can, and produce consensus reports, which is what the IPCC does. And I think it does it quite well. It's of course produced by hundreds of people, and whenever you produce something with hundreds of people, it's a bit of a committee report, but that's what it is. The comparison among models that comes out of that, organised by the World Climate Research Programme, WCRP, through CMIP, has also been, I think, very fruitful, because it opened the models up to public scrutiny. Anyone can look at the model output and try to figure out what's wrong with the models, and that obviously leads to progress.
Where I think there is a potential downside is that the IPCC cycle is relatively short, five to seven years or so, which has not left enough time for modelling centres to innovate more dramatically. It fosters a certain incrementalism: once you're done with one round of IPCC simulations and submit your results, you have a little bit of time to work on your model, but in the end it's not enough to make larger-scale changes. One concrete example: computing architecture has obviously changed dramatically over the last 10 to 20 years. At this point, most flops you will find on graphics processing units, GPUs, not CPUs. Climate models are complex, they're large, and it's a lot of work to transition from a CPU-based model to a GPU-based model. By and large, that has not happened yet in the field, because I think there is this operational imperative that leaves too little time for more substantial innovation in the modelling centres that in some ways have to contribute to every IPCC report.
So I'd say there is a downside, and that's where CliMA has a bit of an advantage. We have more freedom; we could take the time to radically rethink what a climate model should look like, given the computer architectures we have now and the data we have now, and that's what we did. But all that being said, I think it will be very valuable for CliMA, too, to contribute to the IPCC report and submit our model to CMIP, the Coupled Model Intercomparison Project, just to subject it to the public scrutiny that all the other models are subject to.
What's interesting in this development cycle, and the kind of operational pressures there, is that if you look at the number of climate models in the world, it's probably around 60 or so right now, and in the last IPCC round it was around 30. It sounds like a large number, but it's actually very few models, because by and large, when someone makes a new model, what usually happens is they take some existing model and modify it somewhat, or maybe don't even modify it very much. A lot of models are open source, so you can take the NCAR model, CESM, or the GFDL model, in whatever country you're in, and make it your own. So that has obviously stymied innovation: while it looks like there are 60 models, they all go back to very few, two, three, four progenitors, and have descended from them.
Jason Mitchell:
How can machine learning be used judiciously in a climate modelling context, so that models perform well outside the range of observations? Maybe talk about some of the techniques and principles that guide your work here, which marry the strengths of physics with a real data-driven approach. Have you run into any problems that seem particularly tricky or intractable?
Tapio Schneider:
Yeah, I mean, machine learning is of course presenting a huge opportunity for the entire climate prediction workflow on various levels. So let's talk about modelling, what we do, first, and I'd like to give some other examples after that. Again, climate prediction is essentially an out-of-distribution problem, and essentially a problem of predicting statistics, not states. We want to say something about how likely it is that temperatures exceed a certain threshold in New York City 20 or 30 years from now, or that precipitation exceeds a threshold. So it's not a question of predicting a state but a statistic, from which it follows that the learning from data should also focus on statistics, not states. That makes it immediately very different from weather forecasting, where you take the state of the atmosphere now, ask what it is in two weeks, and train a model with sequences, trajectories of atmospheric states. That's probably not the best way to train a climate model.
So number one is we use statistics of the climate system, various statistics like probabilities of precipitation extremes or average cloud cover and so forth, and learn from those. You want to learn from statistics, and that makes the learning problem quite different from the weather forecasting learning problem. Standard methods become difficult to use because the statistics are noisy, for example because of the chaotic internal variability: if you average cloud cover, you get a noisy representation of that average cloud cover, which poses a problem for standard gradient-based methods like backpropagation that tend to get stuck in noise-induced minima. So you need mathematical tools that can deal with noise better, and we develop those; that's the first issue. The second issue goes back to our conversation about the AMOC. In the end, you want to understand what's going on in the model to have some trust in the predictions. So it's important that what we do remains interpretable in the climate system.
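Editor's note: the noise problem Schneider describes can be illustrated with a toy calibration. The sketch below is not CliMA's actual code; it uses a minimal ensemble Kalman inversion, the derivative-free family of methods Schneider's group has published on, to recover a parameter of a toy stochastic process from a single noisy time-averaged statistic. The model, prior, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(theta, n_steps=2000):
    """A noisy statistic: the finite-time variance of an AR(1) process
    x_{t+1} = theta * x_t + noise. The finite averaging window makes the
    output a noisy estimate of the true stationary variance, mimicking
    chaotic internal variability in a climate statistic."""
    theta = np.clip(theta, -0.95, 0.95)  # keep the toy process stable
    eps = rng.normal(size=n_steps)
    x, xs = 0.0, np.empty(n_steps)
    for t in range(n_steps):
        x = theta * x + eps[t]
        xs[t] = x
    return xs.var()

theta_true = 0.7
y = forward(theta_true)  # the "observed" statistic (a single noisy realisation)
gamma = 0.5              # assumed observational-noise variance (illustrative)

# Ensemble Kalman inversion: derivative-free, so the sampling noise in
# forward() does not derail the update the way it derails gradient descent.
J = 50
theta = rng.normal(0.5, 0.3, size=J)  # prior ensemble of parameter guesses
for _ in range(10):
    g = np.array([forward(t) for t in theta])
    gain = np.cov(theta, g)[0, 1] / (g.var() + gamma)  # scalar Kalman gain
    # perturbed-observation update: pull each member toward data consistency
    theta = theta + gain * (y + rng.normal(0.0, np.sqrt(gamma), J) - g)

print(round(theta.mean(), 2))  # the ensemble mean typically lands near 0.7
```

The point of the example: each evaluation of `forward` returns a slightly different number for the same parameter, yet the ensemble update averages over that noise instead of differentiating through it.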
That's not as important for a weather forecast. If a weather forecast is good day after day, well, that's how you come to trust it; it doesn't matter that much what the exact physics are. But for climate, we don't have verifiability on a useful timescale. So there needs to be a physical scaffolding that we can scrutinise and interpret. The way we use machine learning in the context of climate modelling is that we have a model that represents core processes that we know are important and that we know we cannot learn from data. For example, radiative transfer: it has a direct effect on clouds. You increase CO2 concentrations, it affects the thermal infrared radiation in the atmosphere, and that directly affects cloud cover even without temperature changes. That's not something you can learn from data, because you can't disentangle CO2 and temperature variations; in any case, those variations are relatively small relative to the chaotic variability of the climate system.
So we build this physical scaffolding. Within that scaffolding, there are places where physics in some ways reaches a dead end, where first-principles reasoning can't go further, and that's where we use data and models. To give you one concrete example with respect to clouds: it turns out that the turbulent mass exchange between a cloud and its environment, how much a cloud mixes with the air around it, is critically important for controlling how much cloud cover we get, and it's an important control on how sensitive clouds are to increased CO2 concentrations. Key aspects of that mixing we can learn from data, both high-resolution simulations and observational data. That's where we use ML: to represent small-scale mixing processes where you can make a plausible case that, once we reformulate them in a physical way, what you learn from data right now will also hold in the future.
So that's one example. There are different examples of how we can use machine learning; let me give you one other. Snow, for example. Snow is of course an entirely physical system, and it turns out to be very complex, as you can imagine: snow melts and refreezes and compacts, wind blows over it, and so forth. If you want to model snow thickness or the reflectivity, the albedo, of snow, you could try to do this with physical models, and that's commonly done. In the end, these physical models have many empirical pieces, just because it's a tremendously complex system. So this is one example where we said, well, the physics here is very complex and not very well constrained, but we have data. There is no reason to think that the physics of snow in a future, warmer climate will be radically different from now, or that the data we have right now are inadequate to span the future. We have data on the dependence of snow on environmental conditions at lower and higher latitudes, and at lower and higher altitudes in mountain ranges.
Of course, what will happen in the future is that the types of snow you find right now at lower altitudes or latitudes, you might find in a warmer climate at higher altitudes and higher latitudes. But there's no reason you can't learn the relation between precipitation, temperature, humidity and, say, snow thickness or albedo from snow right now. So there, we have entirely data-driven models for snow thickness and albedo that are almost physics-free, except that they still respect conservation laws. You still want the total mass of water to remain conserved: whatever snow falls down in the long run equals whatever melts, and so forth. That we take into account, but otherwise it's entirely data-driven. So it's a bit of a case-by-case decision how far you go with physics or process-based models, chemistry models, versus data-driven models, and that's ultimately a scientific choice you have to make.
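Editor's note: the structure described here, a learned component wrapped inside a hard conservation constraint, can be sketched in a few lines. Everything below is illustrative: the melt-rate function stands in for whatever data-driven model is actually trained, and all numbers are made up. Only the pattern, enforcing a closed water budget around a learned component, reflects the idea.

```python
import numpy as np

def learned_melt_rate(temp_c, humidity_pct):
    """Stand-in for a data-driven component (in practice, a model trained on
    observed snow records). This linear form and its coefficients are purely
    illustrative, not a real parameterisation. Returns mm of melt per day."""
    return max(0.0, 0.8 * temp_c + 0.1 * humidity_pct)

def step_swe(swe_mm, snowfall_mm, temp_c, humidity_pct):
    """Advance snow water equivalent (SWE) by one day. The learned component
    proposes a melt rate, but conservation of water mass is enforced
    structurally: melt can never exceed the snow actually available."""
    melt = min(learned_melt_rate(temp_c, humidity_pct), swe_mm + snowfall_mm)
    return swe_mm + snowfall_mm - melt, melt

swe, total_in, total_melt = 10.0, 0.0, 0.0   # start with 10 mm SWE on the ground
for snowfall, temp, rh in [(5.0, -2.0, 60.0), (0.0, 3.0, 40.0), (2.0, 1.0, 80.0)]:
    total_in += snowfall
    swe, melt = step_swe(swe, snowfall, temp, rh)
    total_melt += melt

# Closed water budget: initial + snowfall == remaining + melted, regardless
# of what the learned melt-rate component outputs.
assert abs((10.0 + total_in) - (swe + total_melt)) < 1e-9
assert swe >= 0.0
```

The conservation check holds by construction, whatever the learned function predicts; that is the sense in which the model is "almost physics-free, except that it still respects conservation laws."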
Jason Mitchell:
Yeah. Wow. Really, really interesting. I guess I wanted to change lanes a little bit and maybe talk about this through the lens of finance. And I guess from that perspective, how do you see a research approach improving our understanding of tail risks and extreme events? For example, to what degree can your models better capture changes in compound extremes, which are often material for areas like infrastructure and insurance underwriting?
Tapio Schneider:
Right. I mean, that's the key question. For adaptation decisions, you want to assess risks, and especially correlated risks: high wind and dry conditions leading to wildfire, or even worse, wildfire in summer followed by strong precipitation and mudslides in Southern California the subsequent winter. That's the key we need to get at: how do we become able to assess risks at the asset level, at 1-to-10-kilometre resolution, including compound risks? So again, first of all, get the large scale right, and then you need to do several things: get to smaller scales, and then to risks of damage to certain assets.
So to get to smaller scales, and I wanted to mention it before and can mention it now, this is another really good example of using AI tools: you take the relatively coarse-resolution output from climate models, which for example may not resolve tropical cyclones, hurricanes, and then use generative AI to learn the mapping from these coarse-resolution climate simulations to finer-scale hazards, hurricanes for example. And that can be done. NVIDIA has done such things, for example, and in collaboration with people at Google, we have done such things, using generative AI models to, as we call it, downscale climate model output. So that's the first thing you need to do.
And the nice thing about using generative AI, for example, is that you can do this in a multivariate sense, meaning you can jointly downscale winds, temperature, humidity, all the variables you want, preserving the correlations in space and in time. Then you can get at compound events, such as the likelihood of dry conditions and Santa Ana winds occurring simultaneously in Southern California, leading to extreme fire risk and the like.
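Editor's note: why preserving correlations matters for compound risk can be shown with a toy calculation. In this minimal sketch, standardized, positively correlated "wind" and "dryness" anomalies stand in for jointly downscaled fields; the correlation and threshold are illustrative, not calibrated to Santa Ana conditions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy joint samples standing in for generative-AI downscaled output:
# standardized wind and dryness anomalies with positive correlation.
rho = 0.6  # illustrative correlation, not a calibrated value
cov = [[1.0, rho], [rho, 1.0]]
wind, dryness = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T

thresh = 1.5  # "extreme" threshold in standard-deviation units

# Compound-event probability from the joint samples...
joint = np.mean((wind > thresh) & (dryness > thresh))
# ...versus what you would infer by treating the marginals as independent.
independent = np.mean(wind > thresh) * np.mean(dryness > thresh)

print(f"joint: {joint:.4f}, independence assumption: {independent:.4f}")
```

With a correlation of 0.6, the joint exceedance probability comes out several times larger than the independence assumption suggests: downscaling each variable separately, discarding correlations, would badly underestimate compound risk.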
So that can be done, and I think there's a whole pipeline to be built: using data, as we do in CliMA, to make better climate models, then making the output of the climate models more usable and useful with downscaling. And ultimately, you want to do what the insurance industry does: run catastrophe models, cat models, on top of it to assess what the potential damage to certain assets is. The nice thing right now is that I think we have all the tools to build out this pipeline, all the way to the point where end users, asset managers, whoever it might be, have tools in their hands for assessing risks that are easy, even a joy, to use. I think it can be done now.
Jason Mitchell:
How close are we to producing robust climate intelligence at, let's say, 1-to-10-kilometre scales, the kind needed for real estate, agriculture, and urban risk modelling?
Tapio Schneider:
The downscaling part, to 1 to 10 kilometres, can be done now. It used to be done with dynamical downscaling, that is, numerical models; it can now be done successfully with AI models. Of course, what you downscale is only as good as the large-scale input that you provide. So again, we need climate models with better predictions and reduced uncertainties. And then I also think you need an ecosystem of tools that translates that information into what decision makers actually need, and that varies from sector to sector, from business to business.
So how close are we? I would say the downscaling can be done now; better climate predictions, two years or so, and we have some parts of that already. But then what you really want is to platformize the whole tool chain: build a platform where all that information is easily available and you can build your own AI models on top of it. If you're an insurer interested in fire risk in certain areas, you can build tools that are fit for purpose, given this upstream input of downscaled climate projections that are easily interrogated.
So building that platform, technology-wise, can be done now. I think it will take some time, but I hope it will happen soon. Ideally, on this two-to-five-year timescale, I would like to get to a point where climate risk information, information about risks of extreme weather on a decadal timescale, is as easily available as the weather forecast is today. I think that can be done on a five-year timescale, and with a dedicated effort, in less time.
Jason Mitchell:
I guess more broadly, I mean, what ways do you think climate information could be delivered more effectively to inform financial and economic decision making? This seems like a key bottleneck not only in pricing climate risk, but making climate information more, I guess, actionable in a real world context.
Tapio Schneider:
Oh, yeah, I agree. We have IPCC reports, and the latest was close to 4,000 pages; throwing 4,000 pages over a fence is not an effective communication or decision-making strategy. We need those reports, but that's not what you need for decision making. I think weather forecasting is a good example here: if you want to go hiking this weekend, you check the weather forecast on your phone. It's very easy, and you have many products there. Ultimately, climate risk information should be as easily accessible. Now, what information you need will depend very much on your interests. An infrastructure planner in a small town who wants to know what stormwater management systems to build has quite different needs from an insurance company that wants to price contracts for the next year. The latter primarily needs information about the risk right now, for next year; for the former, it is more 20, 30, even 50 years from now, the lifetime of infrastructure: what is the probability of a hundred-year flood, what's the intensity of a hundred-year flood, and the like?
So what I would envision is a platform that ultimately can host an ecosystem of apps for different purposes, fit for purpose for different types of users, whether that's real estate, insurance, or geostrategy and defence. The key thing would be building a platform that can host, in some ways, a climate risk app store. The various end-user-facing apps would have to be built sector by sector, by or with the people who need that information, so that the information reaches them in their workflow. As another example, we have been working with building-technology companies that are planning buildings. They want the buildings to be comfortable on a decadal timescale, so that occupants are comfortable in those buildings, and so they need information to model occupant comfort and energy use in buildings given expected weather 20, 30 years from now or so.
So what they need is essentially a year of typical weather a few decades from now, and that's one type of information that can be provided that then enters their design workflow, where of course this is only one small part of the decision-making. People don't design buildings on this basis alone; there are of course many other factors. But you want to make sure that information enters in a way that it can influence, even if perhaps only in a small way, the decision-making.
Jason Mitchell:
Last question. So a lot of your research has a theoretical emphasis and uses idealised models of Earth, such as aqua planets. I'm thinking about the influential paper you wrote with Simona Bordoni in 2008, which presented a new theoretical view of monsoons, effectively rejecting the previous characterisation of them as giant land-sea breezes, which had held for over 300 years. So how do you see fundamental research like this informing the more applied, impacts-oriented concerns of the public?
Tapio Schneider:
Yeah, I think what I do, and what we do in my team right now, still has a very strong fundamental bent to it, but it immediately translates into a model and code, and ultimately into the product we're building, and I think that's what makes it exciting to me. The type of work you're referring to with Simona, I still do some of that. It's asking the fundamental why questions on large scales in the climate system. These are the scales that are relatively well represented in climate models, which is to say we could use these idealised models. They were not representing Earth, but an idealised version of Earth, just as you'd build a frictionless inclined plane in a laboratory to study gravitational acceleration, with frictionlessness as an idealisation. An aqua planet is an idealisation in the same way, but we could do that because the large scales are relatively reliably simulated and we have trust in what these idealised models produce.
Again, I still ask those big why questions, but most of my days are focused on why questions on smaller scales, where the smaller scales translate into the applied aspects very easily. Ultimately, I'm a scientist, and I would say the CliMA project clearly has a strong applied motivation; however, a lot of what's behind it are still similar kinds of why questions. Why do clouds change as the climate warms, how do they change, and then how do we quantify that accurately? We are building a model right now that is our best encoding of the scientific understanding we can come up with. Ultimately, I want to use this model to ask questions like what we asked with Simona about the monsoons, but then perhaps about ice ages. We all know that we had glacial-interglacial cycles, a cycle of ice ages and warm periods; the last ice age, the last glacial maximum, ended about 20,000 years ago.
We also know that in some fashion these ice ages relate to variations in Earth's orbit, something called Milankovitch cycles, after the Serbian applied mathematician who calculated these small variations in Earth's orbit. But these are pretty small variations, wiggles in Earth's spin axis and orbit. How they translate into dramatic changes like ice ages and warm periods like the one we are currently in is still mostly unknown. My hope is that we're building a model that I want to be used for assessing risks to assets, but that I also want to use for asking how ice ages come about, in much the same way that with Simona we asked how monsoons come about.
Jason Mitchell:
Excellent. That's a great way to end. So it's been fascinating to discuss how computational science is improving our understanding of tail risks and extreme events, why small scale processes are critical for building more accurate climate models and what the finance sector can do better in terms of integrating climate information into the investment decision-making process. So I'd really like to thank you for your time and insights. I'm Jason Mitchell, head of Responsible Investment Research at Man Group here today with Professor Tapio Schneider at Caltech. Many thanks for joining us on A Sustainable Future, and I hope you'll join us on our next podcast episode. Tapio, thanks so much for your time today. This has been super interesting.
Tapio Schneider:
Thanks so much for having me.
Jason Mitchell:
I'm Jason Mitchell. Thanks for joining us. Special thanks to our guests, and of course, everyone that helped produce this show. To check out more episodes of this podcast, please visit us at man.com/ri-podcast.