Berkeley Lab to Showcase HPC, Grids Expertise At SC2003
November 7, 2003
The Computing Sciences organization at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory will showcase its leadership in advancing science-driven supercomputing and next-generation Grid tools in a series of demonstrations and presentations at the SC2003 conference in Phoenix.
Berkeley Lab, located in booth R231, will feature three days of short technical talks on subjects as diverse as supercomputer performance, data management, applications to improve system and code performance, scientific visualization tools, and computational science in climate, combustion and astrophysics. Talks will also include reports on various DOE SciDAC (Scientific Discovery through Advanced Computing) projects led by LBNL. See the full schedule below.
LBNL booth demonstrations will cover next-generation Grid tools, scientific visualization applications, large-scale data transfer and management, checkpoint/restart for Linux clusters, the Warewulf Toolkit for cluster management, and Frankenputer, a custom, distributed-memory parallel visualization and rendering application.
“One of the hallmarks of the SC conference is its strong technical program, and this year we have compiled a very strong lineup to complement the conference program,” said Horst Simon, director of Berkeley Lab’s Computational Research and National Energy Research Scientific Computing (NERSC) Center divisions.
SC2003 begins Saturday, Nov. 15, and concludes Friday, Nov. 21.
Here is the schedule of talks to be given in the Berkeley Lab booth.
Tuesday, November 18
10:30-11 a.m., Nicholas Cardo, LBNL, “High Performance Computing in a Production Environment”
11:15-11:45 a.m., Gail Alverson, Cray Inc., “Using One Supercomputer to Test Another: The Role of Alvarez in the Red Storm Software Development Process”
12-12:30 p.m., Tom Davis, LBNL, “Batch Queue Shootout”
1-1:30 p.m., Scott Kruger, Science Applications International Corp., “Fusion MHD Simulation Performance Boost from SuperLU”
1:45-2:15 p.m., Arie Shoshani, LBNL, “Storage Resource Managers: Essential Components for the Grid”
2:30-3 p.m., Michael Banda, LBNL, “Management of Genomic Data at NERSC”
3:15-3:45 p.m., Silvia Crivelli, LBNL, “ProteinShop”
4-4:30 p.m., Cristina Siegerist, LBNL, “Visportal”
Wednesday, November 19
10:30-11 a.m., Daniel Kasen, LBNL, “Cosmic Simulator”
11:15-11:45 a.m., William Johnston, LBNL, “ESnet and the Future Networking Needs of DOE Science”
12-12:30 p.m., Tony Drummond & Osni Marques, LBNL, “The ACTS Collection”
1-1:30 p.m., David Bailey, LBNL, “Experimental Math”
1:45-2:15 p.m., Esmond Ng, LBNL, “The TOPS SciDAC Project”
2:30-3 p.m., Marc Day, LBNL, “Shrinking the Gap Between Combustion Theory and Experiment: Advanced Computer Simulation of Turbulent Laboratory Flames”
3:15-3:45 p.m., Greg Kurtzer, LBNL, “The Warewulf Toolkit”
4-4:30 p.m., Thomas Langley, LBNL, “The PDSF Linux Cluster”
Thursday, November 20
10:30-11 a.m., Michael Wehner, LBNL, “Hurricane Simulation in a High Resolution Version of the Community Atmospheric Model”
11:15-11:45 a.m., Rei Lee, LBNL, “Global Unified Parallel File System”
12-12:30 p.m., Erich Strohmaier, LBNL, “Performance Evaluation and Benchmarking Activities at LBNL”
1-1:30 p.m., Eli Dart, LBNL, “Network Performance Tuning”
1:45-2:15 p.m., Stephen Lau, LBNL, “The Bro Intrusion Detection System”
Here is a short description of demonstrations to be given in the LBNL booth:
Tools to Enable Next Generation Grid Applications: A demonstration of a set of next-generation Grid tools that allow scientists to map their workflow onto the Grid, easily prototype and develop Grid applications, and monitor and troubleshoot Grid usage.
Large-Scale File Replication using DataMover Technology: This demo tracks the operation of a middleware component called a DataMover that moves hundreds of files between two sites on the Grid.
Grid-Interoperability of Two Different Mass Storage Systems: A demonstration of the use of specialized Storage Resource Managers (SRMs) to communicate with heterogeneous mass storage systems, making them more accessible over the Grid.
ProteinShop and Protein Folding Optimization: A demonstration of ProteinShop's most interesting features: interactive manipulation of protein structures, energy visualization, and monitoring and steering of LBNL's parallel global optimization method for protein structure prediction.
Visportal: Visualization demonstration using Globus-enabled resources at NERSC, including High Performance Storage System (HPSS), IBM SP supercomputer “Seaborg,” visualization server “Escher,” and the Parallel Distributed Systems Facility (PDSF).
Astrophysics Visualizations: An exploration of the 3D geometry of a supernova, based on simulations performed at NERSC.
Fusion Simulation/Visualization: Demonstration of pipelined client/server visualization technology as applied to M3D/GTC fusion data (PPPL) generated at NERSC.
Frankenputer: A custom, distributed-memory parallel visualization and rendering application is used to generate multiple graphics streams that drive all six displays in a tiled display simultaneously.
The Warewulf Cluster Management Solution: Warewulf is a cluster implementation tool that facilitates the distribution of small Linux systems to an arbitrary number of nodes.
Serial, Parallel and Distributed Checkpoint/Restart for Linux: Researchers in Berkeley Lab's Future Technologies Group are developing a new system-level implementation of checkpoint/restart for Linux clusters as part of the SciDAC Scalable Systems Software Center.