Numlock Sunday: Casey Crownhart on the dream to out-compute storms
By Walt Hickey
Welcome to the Numlock Sunday edition.
This week I spoke to Casey Crownhart, who wrote “How two new supercomputers will improve weather forecasts” for MIT Technology Review. Here's what I wrote about it:
The National Weather Service’s computational capacity jumped substantially recently, from 4.2 petaflops in 2018 to 12.1 petaflops in 2020. The two supercomputers — one in Virginia, one in Arizona — each have a 12.1 petaflop capacity, an upgrade that cost $300 million to $500 million. That translates to significant gains in forecasting potential, as it facilitates more simulations and more sophisticated weather models that can better predict complicated storms like hurricanes and give potentially affected communities days more notice. The last time the system was upgraded — from 2.8 petaflops to 4.2 petaflops in 2018 — the resolution improved from 34 kilometers to 25 kilometers, and the number of models jumped from 21 to 31. From 2005 to 2020, errors in 48-hour intensity forecasts fell 20 percent to 30 percent.
I really enjoyed this story; the computational side of weather forecasting is a seriously interesting direct use case for supercomputing, and I found the fairly direct relationship between computing power and material improvements in forecasts deeply cool.
We spoke about the role of the NWS, why the big gains lately have come not just in predicting storm landfall but also in the much harder problem of storm intensity, and what role climate change plays in all this.
This interview has been condensed and edited.
You wrote a really fascinating story all about how the government's buying two new computers and it's going to make us considerably better at predicting the weather. What's the relationship between computational power and weather forecasts?
That's actually why I got interested in reporting this story, because I had seen that they were buying these computers, and that this 12.1 petaflops number kept coming up, but I was trying to translate that into what it actually means for whether or not it's going to rain and whether or not I need to bring my umbrella. Basically, when you're looking at a weather forecast, you're looking at the combination of a lot of different computational models that forecasters look at and take into account. All of these are different models that the National Weather Service runs, and then they make predictions based on that.
What more computing power allows the National Weather Service to do is basically introduce more model complexity, so they're better able to take in all sorts of data about the world and do more with that in order to spit out predictions about what's going to happen with the weather. There are different ways that that complexity comes into play. But basically, you're just better able to represent the really, really complex climate and the really, really complex atmosphere.
This is a thing a lot of folks don't necessarily know: where their weather report comes from. Can you talk a little bit, just generally, about the role that the NWS plays in this?
They are the ones that are central command, I guess. They're the ones that are running these models and putting out reports and they have all of this data available to local forecasters, whether that's somebody in the government, or folks on the nightly news or whatever. They're getting these outputs from these National Weather Service models and then they're able to translate that to the people who are checking the weather. The National Weather Service is the one that's really in charge of running these computers and getting these numbers out to the forecasters.
You mentioned the sticker price of $300 million to like a half a billion dollars. That's a pretty big chunk of change, but at the same time, what are some of the benefits of the improved computing?
That is a big number: that's over the next 10 years, and it includes the operating cost of the computers. What they've found is that every time they've increased the capacity of the computers, they're better able to predict the weather. One of the things that I highlighted in the piece is that the last time there was a big upgrade like this was in 2018. The researchers were telling me about this specific model that they use — it's called the GEFS, the Global Ensemble Forecast System — and they found that with the last upgrade, the model got noticeably better at predicting the weather.
They were able to introduce better resolution, shrinking the grid spacing of the predictions, and they were able to run more copies of the model, which helps them combine those runs into an ensemble. They were also able to introduce more physics into the model. All of that sounds very theoretical, but I also talked to folks at the National Hurricane Center. I think that's one of the most visible places where forecasting really impacts people's lives, at least in the U.S., since the whole East Coast sees a lot of hurricanes. What they've found is that over time, they've gotten a lot better at predicting both the track of a hurricane, where it's going to go, and the hurricane's intensity.
This is where I got really interested, because what I was learning is that hurricane track predictions, like where a hurricane is going to hit, have been steadily getting better for the past few decades. Where a hurricane goes is governed by very high-level forces in the atmosphere. But hurricane intensity (is this going to be a category one hurricane with minimal damage, or a super strong category five storm with super high wind speeds?) has apparently been a lot harder to improve, because it's a lot more localized, driven by what's happening in the middle of the storm, and there's a lot more complicated physics going on.
It's only been in the past 10 to 15 years or so that forecasters have gotten better at predicting whether a storm is going to be huge or small. I think that's one of the biggest things researchers and forecasters are hoping for from these upgrades: that they're going to keep getting better at understanding how strong a storm might be, specifically within hurricane forecasting.
Weather forecasting in general has been getting better, in no small part because of this, but the hurricane forecasting side has been interesting just because there really is so much on the line. We talk about these multimillion-dollar machines, but the impact of hurricane season alone, just on insurance, can run well into the billions.
It's anecdotal, but when Hurricane Ida hit New York City, I was actually already reporting this piece. There was all of this news around some people in New York saying, "Oh, why was this such a surprise?" But if you look back at the forecast, researchers actually did predict that Ida would go through what's called rapid intensification, where it spun up really fast into a big storm. They didn't quite get it right on just how much that would happen, but they did predict pretty well that this would be a much stronger storm and that it would happen really fast. There's still a lot of downstream stuff, though. That's something I tried to get across in this piece: it's not just about predicting storms. Being able to actually communicate with the public about what a forecast means, and to put infrastructure into place so that people can do something about it once we know a storm is coming — that's a whole 'nother ballgame.
You also mentioned in the piece that climate change is intensifying these storms and potentially making them riskier as we go along, and that having more computational power can help.
Yeah, it's complicated and it's hard to really understand exactly how climate change plays into any particular storm. But in general, researchers do think that we're going to keep seeing more intense storms and that it'll be important to keep building up that computational power so we can try to understand when and where they're going to hit.
To that point, do you want to talk a little bit about what your beat is? MIT Technology Review is really fun, can you talk a little bit about what you cover in general?
I cover all things climate, so that ranges for me from stuff like this, talking about the weather and how climate is affecting that and how we're adapting to it, and down into clean energy. I've written a good amount about solar and different kinds of ways that we're trying to clean up our power system. That's what I like to write about and what I've been covering at Tech Review.
I think that Tech Review really tries not to be just a technology cheerleader, but it is nice to find those stories where there's a solutions element. I think that's one of the great things about being a climate reporter with a tech focus: you can try to at least evaluate some of these solutions people are trying, whether that's weather forecasting or something like a new clean energy system. It's a little less doom and gloom than I think a lot of climate reporting tends to be, and rightly so.
Yeah, this is the kind of problem where you can throw petaflops at it and all of a sudden, things start getting a little bit better. Where can folks find you?
They can find me on Twitter. Our latest issue just came out, so this was part of our computing issue. If folks want to read this piece, they will have to subscribe to Tech Review, but there's a lot of really great journalism and I hope that they'll consider supporting my colleagues.
Yeah, I agree. I subscribe, it's great. It's a really good publication, you guys do very good work.
If you have anything you’d like to see in this Sunday special, shoot me an email. Comment below! Thanks for reading, and thanks so much for supporting Numlock.