
Co-Designing Exascale Architectures and Algorithms for Real-World Combustion Simulations

October 4, 2011

Jon Bashor, Jbashor@lbl.gov, 510-486-5849

The low-swirl injector (LSI) developed by Robert Cheng at Berkeley Lab is uniquely suited for supporting lean hydrogen and hydrocarbon fuel mixtures in low-emissions combustion devices. This image is taken from a simulation of the LSI as it burns a lean hydrogen-air mixture. The colors indicate the presence of nitric oxide emissions near the highly wrinkled flame, while the gray structures at the flame base show the turbulent vorticity generated near the breakdown of the swirling flow from the injector. The simulation was created by John Bell and Marc Day of Berkeley Lab's Center for Computational Sciences and Engineering and run on Franklin, the Cray XT4 supercomputer at the Department of Energy's National Energy Research Scientific Computing Center.

Combustion is such a fundamental part of our lives that we often take it for granted. Our cars start and take us where we want to go. We board an airliner and several hours later land at our destination. The consumer goods we buy are delivered by trucks and trains. And our homes and industry are powered by electricity from fossil fuel-burning power plants.

But as energy prices rise and environmental concerns increase, researchers are seeking to increase our understanding of combustion as a way to improve energy efficiency, adapt to new fuels and reduce the pollutants generated in the process. The challenge is ideally suited to high performance computing and with each generation of more powerful supercomputers, combustion researchers are able to create increasingly detailed combustion simulations.

Among the leaders in the field are Jackie Chen of Sandia National Laboratories and John Bell of Lawrence Berkeley National Laboratory, each attacking the problem from a different perspective. Chen, an engineer by training, has worked in Sandia’s Combustion Research Facility since 1989 with funding from the Department of Energy’s Office of Basic Energy Sciences. Bell, on the other hand, is an applied mathematician who has been developing algorithms to study combustion for the past 15 years with support from DOE’s Office of Advanced Scientific Computing Research.

Now, the two are pooling their expertise in a five-year project recently announced by DOE. The project, called the Combustion Exascale Co-Design Center will combine the talents of combustion scientists, mathematicians, computer scientists, and hardware architects. This multidisciplinary team will work to simultaneously redesign each aspect of the combustion simulation process—from algorithms to programming models to hardware architecture—in order to create high fidelity combustion simulations that can run at the next level of supercomputing power, the exascale.

“Today’s petascale supercomputers are powerful enough to resolve a large enough range of scales to provide a strong connection to combustion experiments,” said Chen, who will be the principal investigator for the project. “Using our simulation capabilities, we can fully characterize various combustion processes and gain a lot more information than we can from experiments. Together, experiment and simulation are a very powerful combination.”

Although they provide a wealth of insight, today’s combustion models still need improvements to increase their accuracy in a wide range of critical areas: the effects of turbulence, increased pressure, different fuel and air mixtures, the ragged edges of the flame, temperatures and more. 

“In five years, we hope to uncover knowledge that is critical to advancing the development of internal combustion engines, aircraft engines and industrial burners—this work could make a huge difference in a number of areas,” Chen said. “Our holy grail is to create predictive models for clean-burning combustion devices.”

In line with the nation’s goal of reducing oil consumption by 25 percent by 2020, the project also seeks to create simulations of current fuels, as well as to explore the potential of hydrogen and biofuels in various combustion applications.

Getting there from here, however, involves a huge leap in computing power and efficiency, a leap that involves hurdling the “energy wall.”

Bridging the Gap between Petascale and Exascale

Supercomputers have now reached the petascale, meaning they perform at the petaflop-per-second level (one quadrillion calculations per second). But the design of such massive systems has hit an energy wall. Scaling them up much further would mean that the annual cost of the energy to run them exceeds the cost of the supercomputer itself.

To get around this problem, researchers are working on new supercomputer architectures. Because these architectures will have hundreds of thousands of processors, current science applications must also be redesigned to run on them. Bell and Chen are taking advantage of this turning point in both hardware and software to “co-design” algorithms, software and hardware that complement one another to reach exascale speeds.

Using today’s petascale supercomputers like Jaguar at the Oak Ridge Leadership Computing Facility and Hopper at the National Energy Research Scientific Computing Center, researchers can simulate the combustion of relatively simple fuels at the laboratory scale. These lab-scale simulations are typically at atmospheric pressure.

“Real combustion systems operate at much higher pressures and that makes them much more difficult to simulate,” said Bell, deputy leader of the project. The pressure inside an internal combustion engine can be 20 times higher. Diesel engines and large-scale industrial burners, such as those used in natural gas-fired power plants, operate at even higher pressures.

Accurately simulating the effects of such pressure on the already complex chemistry and turbulent flow conditions found in combustion requires computing power beyond that of even the most powerful supercomputers available today.

“With exascale systems, we expect to be able to create high-fidelity simulations of more realistic fuels, including biodiesel,” said Bell.

Why Computational Combustion?

Simultaneous visualization of a lifted autoigniting hydrogen/air jet flame showing two variables of a turbulent combustion simulation, the hydroxyl radical (flame marker) and hydroperoxy (autoignition marker). Simulation by Chun Sang Yoo and Jacqueline Chen, Sandia National Laboratories, and visualization by Hongfeng Yu, University of California at Davis.

Combustion research is ideally suited to simulation because of its complexity. Scientists can carry out experiments to test their theories, but measurement and observation of actual combustion are difficult.

For example, if a researcher uses a probe to measure conditions within a flame, that probe usually distorts the combustion process. As a result, lasers produce the best current measurements, but they take snapshots—not continuous images. Moreover, the thick quartz lenses required to observe the interior of combustion devices make for expensive experiments.

Also, according to Chen, the diagnostics from an experiment can be misleading, while computer simulations are much “cleaner.”

The Other Half of the Exascale Equation

Raw computing power is important, but creating new mathematical methods to efficiently use that computing power is also critical. Bell’s group at Berkeley Lab is a world leader in increasing the efficiency and the fidelity of simulations using Adaptive Mesh Refinement, or AMR. The standard approach to computer simulations divides the problem into a grid with evenly sized squares and treats each square equally. This approach can leave processors assigned to less-active portions of the simulation idling while the active squares catch up. Moreover, it limits the number of squares (and thus the detail) that can be calculated using available processing power. AMR, however, adapts the size of the grid squares to concentrate on the most active parts of the problem – smaller squares mean a higher resolution. In the case of combustion, a scientist may want to focus the computer’s power on a flame front as it moves across a combustion chamber. In short, AMR provides a numerical microscope for researchers.
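
To make the idea concrete, the sketch below shows a minimal, hypothetical one-dimensional version of the refinement criterion: grid cells are split only where a steep, flame-like profile changes rapidly, so resolution is concentrated at the front. The function names, profile and thresholds are illustrative assumptions, not taken from the project’s actual AMR software.

```python
# Conceptual 1-D illustration of adaptive mesh refinement (AMR):
# refine the grid only where the solution changes rapidly (e.g., a flame front).
# This is a hypothetical sketch, not the production AMR framework used by Bell's group.
import numpy as np

def flame_profile(x, front=0.5, thickness=0.01):
    """A steep tanh profile standing in for temperature across a flame front."""
    return 0.5 * (1.0 + np.tanh((x - front) / thickness))

def refine(cells, threshold=0.1, max_levels=4):
    """Split any cell whose end-to-end jump in the profile exceeds `threshold`."""
    for _ in range(max_levels):
        new_cells = []
        for lo, hi in cells:
            if abs(flame_profile(hi) - flame_profile(lo)) > threshold:
                mid = 0.5 * (lo + hi)
                new_cells += [(lo, mid), (mid, hi)]   # refine: two smaller cells
            else:
                new_cells.append((lo, hi))            # keep the coarse cell
        cells = new_cells
    return cells

# Start from a uniform coarse grid of 16 cells covering [0, 1].
coarse = [(i / 16, (i + 1) / 16) for i in range(16)]
refined = refine(coarse)
print(len(coarse), "coarse cells ->", len(refined), "cells after refinement")
print("smallest cell width:", min(hi - lo for lo, hi in refined))
```

Most of the extra cells end up clustered around the flame front at x = 0.5, which is the “numerical microscope” effect described above.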

Such mathematical advancements are a critical part of the co-design process, as improvements in computing hardware alone won’t be sufficient. As an example, Bell cites the “four-thirds law.” Accurate combustion simulation requires a three-dimensional grid, but must also be computed at multiple time steps as the combustion progresses. If you start with a grid that is 1,000 by 1,000 by 1,000, you have 1 billion grid points. Multiply that by 1,000 time steps and it’s up to 1 trillion. Increasing the resolution by a factor of 10 would require a grid 10,000 by 10,000 by 10,000 – or 1 trillion grid points. But this will also require 10,000 time steps – the fourth third. Even on a machine 1,000 times more powerful – enough to absorb the growth in grid points – that extra factor of 10 in time steps remains, so a simulation that once ran in two weeks on a supercomputer now takes 20 weeks, which is untenable.
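
Put as a rough back-of-the-envelope calculation (with n grid points per spatial dimension, n_t time steps and a refinement factor r, symbols introduced here only for illustration), the scaling works out as follows:

```latex
% Work = (spatial grid points) x (time steps).
% Refining the mesh by a factor r multiplies the spatial points by r^3 and,
% because the time step must shrink with the grid spacing, the time steps by r,
% so the total work grows by r^4 -- equivalently, as N^{4/3} in the total
% number of grid points N = n^3.
\[
  \text{work} \;\propto\; \underbrace{(r\,n)^{3}}_{\text{grid points}}
  \times \underbrace{r\,n_{t}}_{\text{time steps}}
  \;=\; r^{4}\, n^{3} n_{t},
  \qquad
  r = 10:\quad
  10{,}000^{3} \times 10{,}000
  \;=\; 10^{4} \times \bigl(1{,}000^{3} \times 1{,}000\bigr).
\]
```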

“Because of this, you need to do something algorithmically to give yourself another order of magnitude,” Bell said.

But a key part of the project is ensuring that all of the pieces work together. As new computer architectures are proposed, researchers will use special simulators to see which applications are best suited to the design. Both hardware and software development will iterate repeatedly throughout the process.

“In this interplay between mathematicians, combustion scientists, computer scientists and vendors we will be shifting everything – from the applications to the supercomputer itself—and determining what the machine will look like will be a two-way street,” Bell said. “We will have to make tradeoffs in what the hardware will look like in order to have applications that can run on it. But at the end of five years, we will have a reasonably good idea of what the hardware will look like and a reasonably good idea of what algorithms will run on it.”

In addition to Chen and Bell, the collaboration will include researchers from Lawrence Livermore, Oak Ridge and Los Alamos national laboratories, as well as the National Renewable Energy Laboratory, the University of Texas at Austin, the University of Utah, Stanford University, Georgia Tech and Rutgers University.


About Berkeley Lab

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.