Harnessing the Power of Exascale for Wind Turbine Simulations
ECP ExaWind Project Taps Berkeley Lab's AMReX to Help Model Next-Generation Wind Farms
April 7, 2020
by Jennifer Huber
Contact: cscomms@lbl.gov
Driving along Highway 580 over the Altamont Pass in Northern California, you can’t help but marvel at the 4,000+ wind turbines slowly spinning on the summer-golden hillsides. Home to one of the earliest wind farms in the United States, Altamont Pass today remains one of the largest concentrations of wind turbines in the world. It is also a symbol of the future of clean energy.
Before utility grids can achieve wide-scale deployment of wind energy, however, they need more efficient wind plants. This requires advancing our fundamental understanding of the flow physics governing wind-plant performance.
ExaWind, a U.S. Department of Energy (DOE) Exascale Computing Project, is tackling this challenge by developing new simulation capabilities to more accurately predict the complex flow physics of wind farms. The project entails a collaboration between the National Renewable Energy Laboratory (NREL), Sandia National Laboratories, Oak Ridge National Laboratory, the University of Texas at Austin, Parallel Geometric Algorithms, and — as of a few months ago — Lawrence Berkeley National Laboratory (Berkeley Lab).
“Our ExaWind challenge problem is to simulate the air flow of nine wind turbines arranged as a three-by-three array inside a space five kilometers by five kilometers on the ground and a kilometer high,” said Shreyas Ananthan, a research software engineer at NREL and lead technical expert on the project. “And we need to run about a hundred seconds of real-time simulation.”
By developing this virtual test bed, the researchers hope to revolutionize the design, operational control, and siting of wind plants and to facilitate reliable grid integration. Doing so requires a combination of advanced supercomputers and specialized simulation codes.
Unstructured + Structured Calculations
The principle behind a wind turbine is simple: energy in the wind turns the turbine blades, which drive a shaft through a gearbox to spin a generator that produces electricity. But simulating this is complicated. The flexible turbine blades rotate, bend, and twist as the wind shifts direction and speed. The yaw and pitch of the blades are controlled in real time to extract as much energy as possible from a wind event. The air flow itself also involves complex dynamics, such as influences from the ground terrain, the formation of a turbulent wake downstream of the blades, and turbine-turbine interactions.
To improve on current simulations, scientists need more computing power and higher resolution models that better capture the crucial dynamics. The ExaWind team is developing a predictive, physics-based, and high-resolution computational model — progressively building from petascale simulations of a single turbine toward exascale simulations of a nine-turbine array in complex terrain.
“We want to know things like the air velocity and air temperature across a big three-dimensional space,” said Ann Almgren, who leads the Center for Computational Sciences and Engineering in Berkeley Lab’s Computational Research Division. “But we care most about what’s happening right at the turbines where things are changing quickly. We want to focus our resources near these turbines, without neglecting what’s going on in the larger space.”
To achieve the desired accuracy, the researchers are solving fluid dynamics equations near the turbines using a computational code called Nalu-Wind, a fully unstructured code that gives users the flexibility to more accurately describe the complex geometries near the turbines, Ananthan explained.
But this flexibility comes at a price. Unstructured mesh calculations have to store information not just about the location of all the mesh points but also about which points are connected to which. Structured meshes, meanwhile, are “logically rectangular,” which makes a lot of operations much simpler and faster.
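To make that bookkeeping difference concrete, here is a hypothetical sketch (not ExaWind's actual data structures): on a structured mesh, a cell's neighbors follow from simple index arithmetic, whereas an unstructured mesh must store explicit connectivity for every cell.

```cpp
#include <vector>

// Structured mesh: cells are addressed by (i,j,k) indices, so a cell's
// neighbors follow from index arithmetic alone -- no connectivity table
// needs to be stored.
inline int cell_index(int i, int j, int k, int nx, int ny)
{
    return i + nx * (j + ny * k);
}
// e.g. the +x neighbor of cell (i,j,k) is simply cell_index(i+1, j, k, nx, ny).

// Unstructured mesh: every cell carries an explicit list of the mesh
// points (and, typically, the neighboring cells) it connects to.
struct UnstructuredCell {
    std::vector<int> node_ids;       // mesh points forming this cell
    std::vector<int> neighbor_ids;   // adjacent cells, stored explicitly
};
```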
“Originally, ExaWind planned to use Nalu-Wind everywhere, but coupling Nalu-Wind with a structured grid code may offer a much faster time-to-solution,” Almgren said.
Enter AMReX
Luckily, Ananthan knew about Berkeley Lab’s AMReX, a C++ software framework that supports block-structured adaptive-mesh algorithms for solving systems of partial differential equations. AMReX supports simulations on a structured mesh hierarchy; at each level the mesh is made up of regular boxes, but the different levels have different spatial resolution.
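For a rough flavor of what that looks like in practice, here is a minimal sketch using the public AMReX C++ API (assuming a standard 3D build; the domain and box sizes are placeholders for illustration, not ExaWind's actual grids): a single level of the hierarchy is described by a box-covered index space, with the field data distributed across MPI ranks.

```cpp
#include <AMReX.H>
#include <AMReX_Box.H>
#include <AMReX_BoxArray.H>
#include <AMReX_DistributionMapping.H>
#include <AMReX_MultiFab.H>
#include <AMReX_Print.H>

int main(int argc, char* argv[])
{
    amrex::Initialize(argc, argv);
    {
        // One level of the hierarchy: a 128^3 index space for the domain.
        amrex::Box domain(amrex::IntVect(0, 0, 0), amrex::IntVect(127, 127, 127));

        // Chop the domain into the regular, "logically rectangular" boxes
        // AMReX operates on -- here at most 32 cells on a side.
        amrex::BoxArray ba(domain);
        ba.maxSize(32);

        // Assign boxes to MPI ranks.
        amrex::DistributionMapping dm(ba);

        // A single-component field (say, one velocity component) with one
        // ghost cell, allocated only on the boxes this rank owns.
        amrex::MultiFab vel(ba, dm, 1, 1);
        vel.setVal(0.0);

        amrex::Print() << "Level 0 is covered by " << ba.size() << " boxes\n";
    }
    amrex::Finalize();
    return 0;
}
```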
Ananthan explained that they actually want the best of both worlds: an unstructured mesh near the turbines and a structured mesh everywhere else in the domain. Because the two meshes have to communicate with each other, the ExaWind team validated an overset mesh approach that pairs an unstructured mesh near the turbines with a background structured mesh. That's when they reached out to Almgren to collaborate.
“AMReX allows you to zoom in to get fine resolution in the regions you care about but have coarse resolution everywhere else,” Almgren said. The plan is for ExaWind to use an AMReX-based code (AMR-Wind) to resolve the entire domain except right around the turbines, where the researchers will use Nalu-Wind. AMR-Wind will generate finer and finer cells as they get closer to the turbines, basically matching the Nalu-Wind resolution where the codes meet. Nalu-Wind and AMR-Wind will talk to each other using a coupling code called TIOGA.
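To make the "finer and finer cells" idea concrete, here is a back-of-the-envelope sketch (the numbers are hypothetical and for illustration only, not the project's actual grid spacings): with a fixed refinement ratio, each additional AMR level cuts the cell size by that factor as the mesh approaches the turbines.

```cpp
#include <cstdio>

int main()
{
    // Hypothetical setup for illustration only: a 5 km domain covered by
    // 512 coarse cells in each direction, refined by a factor of 2 per level.
    const double domain_m  = 5000.0;
    const int    n_coarse  = 512;
    const int    ref_ratio = 2;
    const int    n_levels  = 4;

    double dx = domain_m / n_coarse;   // coarse-level cell size in meters
    for (int lev = 0; lev < n_levels; ++lev) {
        std::printf("level %d: cell size %.3f m\n", lev, dx);
        dx /= ref_ratio;               // each finer level halves the spacing
    }
    return 0;
}
```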
Even with this strategy, the team needs high-performance computing. Ananthan's initial performance studies were conducted on up to 1,024 Cori Haswell nodes at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC) and 49,152 Mira nodes at the Argonne Leadership Computing Facility.
“For the last three years, we’ve been using NERSC’s Cori heavily, as well as NREL’s Peregrine and Eagle,” said Ananthan. Moving forward, they’ll also be using the Summit system at the Oak Ridge Leadership Computing Facility and, ultimately, the Aurora and Frontier exascale supercomputers, all of which feature different types of GPUs: NVIDIA on Summit (and on NERSC’s next-generation Perlmutter system), Intel on Aurora, and AMD on Frontier.
Although Berkeley Lab just started partnering with the ExaWind team this past fall, the collaboration has already made a lot of progress. “Right now we’re still doing proof-of-concept testing for coupling the AMR-Wind and Nalu-Wind codes, but we expect to have the coupled software running on the full domain by the end of FY20,” said Almgren.
NERSC is a DOE Office of Science user facility.
Jennifer Huber is a freelance science writer and science-writing instructor. Her work has appeared in KQED Science, Berkeley Engineer and Scope, among other publications.
About Berkeley Lab
Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.