CRD Scientists Contribute To A Definitive Book On Parallel Processing
November 15, 2006
Demand for increasingly powerful computers has been growing as more researchers worldwide rely on these systems to solve complex scientific problems quickly. In a new book co-edited by LBNL’s Associate Laboratory Director for Computing Sciences, Horst Simon, scientific computing experts from universities and national laboratories provide a comprehensive look at the state of the art in the effective use of highly parallel computers for scientific computing.
The book, titled “Parallel Processing for Scientific Computing,” explores the models and technologies researchers use to run increasingly complex simulations on parallel computers, which often consist of thousands of processors.
These computers divide a task into pieces and carry out the work simultaneously, reducing the time it takes to crunch massive amounts of numbers. Such use of computing power, called parallel processing, has enabled scientists to explore the mystery of black holes, predict weather trends and develop life-saving medicines.
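As a rough illustration of the idea (a minimal sketch, not code from the book), the following Python fragment splits a large summation into chunks that separate worker processes evaluate at the same time:

    # Illustrative sketch of parallel processing: a large summation is
    # divided into chunks, and worker processes crunch them simultaneously.
    from multiprocessing import Pool

    def partial_sum(bounds):
        """Sum the squares of integers in [start, stop) -- one worker's share."""
        start, stop = bounds
        return sum(i * i for i in range(start, stop))

    if __name__ == "__main__":
        n = 10_000_000
        workers = 4
        # Divide the range [0, n) into equal chunks, one per worker.
        step = n // workers
        chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
                  for i in range(workers)]
        with Pool(workers) as pool:
            # Each chunk is evaluated concurrently on a separate processor.
            total = sum(pool.map(partial_sum, chunks))
        print(total)

On a machine with four processors, the four partial sums run concurrently, so the job can finish in roughly a quarter of the serial time, less some coordination overhead.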
CRD researchers made a significant contribution to the 397-page book, published by the Society for Industrial and Applied Mathematics. Lenny Oliker and David Bailey contributed to Chapter 5, titled “Performance Evaluation and Modeling of Ultra-Scale Systems,” while Ali Pinar co-wrote Chapter 7, “Combinatorial Parallel and Scientific Computing.” Esmond Ng was the sole author of Chapter 9, “Parallel Sparse Solvers, Preconditioners, and Their Applications.”
“This book reflects the state of the art in the field, and the fact that several contributions come from LBNL is an indication of where we have leaders in the field of parallel algorithms,” said Simon, who also co-wrote the concluding chapter with the book’s two other editors, Michael Heroux and Padma Raghavan.
Heroux is a scientist at Sandia National Laboratories, while Raghavan is a professor of computer science and engineering at Pennsylvania State University.
In the book, the contributing authors examine performance modeling tools, numerical algorithms, and tools and frameworks for parallel platforms. They also present several application case studies in multi-component simulations, computational biology, and PDE-constrained simulation, among others. In the process, the authors analyze the different platforms, codes, algorithms and other tools scientists employ to squeeze the best performance out of these systems.

The publication, divided into four sections, is the first in-depth look at parallel computing in 10 years. The book serves as a reference for scientists and program developers and as a primer for university students.
See http://www.ec-securehost.com/SIAM/SE20.html for a more detailed description of the book.
About Berkeley Lab
Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.