Computer Architecture Group

Characterization of DOE Mini-apps

This effort aims to identify the computational characteristics of the DOE mini-apps developed at the various exascale codesign centers.

For a complete list of traces, please refer to our NERSC site.

MPI Traces for Exascale Codesign Center Miniapps

We started this effort by collecting MPI traces for the mini-apps that use the MPI communication interface.

The traces are collected using the open-source dumpi trace tool. The traces distributed from this site are in binary format to save space; they are also compressed using the gzip and tar utilities. Dumpi generates one trace file per MPI rank, so each archive contains many trace files.

After downloading a trace archive, for instance cesar_Mocfe_256.tar.gz, one can extract the directory of traces:

tar xzvf cesar_Mocfe_256.tar.gz

The directory will contain 256 trace files in binary format. To obtain a human-readable version of a binary trace file, one can use the dumpi2ascii tool distributed with dumpi, as shown below.
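
For example, assuming the extracted trace files carry dumpi's default .bin extension (the file names here are illustrative; adjust them to the actual trace names in the archive), each rank's binary trace can be converted to a text file in a simple shell loop:

# Convert every binary rank trace to a readable text file;
# dumpi2ascii writes the decoded trace to standard output.
for f in *.bin; do
    dumpi2ascii "$f" > "${f%.bin}.txt"
done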

We ran all data collection on the Cray XE6 (Hopper) at NERSC, using 64 or 256 cores per run. Some applications use only MPI, while others use hybrid programming that mixes MPI with OpenMP. The traces carry message timing, routing, and size information. The timing information is specific to the Hopper machine, while the message sizes and destinations depend on the problem sizes and the levels of parallelization.
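
As a rough sketch of the collection workflow (not the exact build recipe used for these runs; the install path and application name below are placeholders), an application is linked against the dumpi library and then launched as usual with aprun, after which dumpi writes one binary trace file per MPI rank:

# Link the mini-app against libdumpi ($DUMPI_INSTALL is a placeholder install prefix)
cc miniapp.c -o miniapp -L$DUMPI_INSTALL/lib -ldumpi

# Launch on 256 cores; each rank produces its own binary trace file
aprun -n 256 ./miniapp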

EXMATEX Mini Apps

We collected traces for two applications from the DOE codesign center EXMATEX (extreme-scale simulation of materials properties in extreme environments). The first application is the Neutron Transport Evaluation and Test Suite mini-app (HILO). We ran two variants of this application, multinode and 2d_multinode, with the following input arguments:

  • particles: 40000000
  • numcellssx: 100
  • tolerance: 10.0
  • numiters: 100

Available traces:

CMC multinode 64 processes

CMC multinode 256 processes

CMC 2d_multinode 64 processes

CMC 2d_multinode 256 processes

The second application we collected traces for is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics (LULESH) mini-app.

We used domains of 64^3.
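
As an illustration only (the binary name and the -s problem-size option are assumptions about the LULESH 2.0 build used here), a 64-rank run with 64^3 elements per domain on a Cray system could be launched as:

# 64 MPI ranks, 64^3 elements per domain
aprun -n 64 ./lulesh2.0 -s 64

# Hybrid variant: 64 ranks with 4 OpenMP threads each
export OMP_NUM_THREADS=4
aprun -n 64 -d 4 ./lulesh2.0 -s 64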

Available traces:

LULESH 2.0 64 processes

LULESH 2.0 64 processes x 4 threads

 

EXACT Mini Apps

We collected traces for two applications from the DOE codesign center EXACT (Center for Exascale Simulation of Combustion in Turbulence), both based on the BoxLib library. The first application integrates the compressible Navier-Stokes (CNS) equations with constant viscosity and thermal conductivity. Its run parameters were as follows (a sketch of a corresponding inputs file appears after the list):

  • number of cells: 512
  • maximum grid size: 64
  • number of steps: 5
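
BoxLib applications usually read such parameters from a plain-text inputs file. The snippet below is only an illustrative sketch: the key names follow common BoxLib/ParmParse conventions and are assumptions, not necessarily the exact names used by this mini-app.

# Hypothetical BoxLib-style inputs file for the CNS runs
amr.n_cell        = 512 512 512   # number of cells in each direction (assumed key)
amr.max_grid_size = 64            # maximum grid size (assumed key)
max_step          = 5             # number of steps (assumed key)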

Available traces:

CNS 16 processes x 4 threads

CNS 64 processes x 4 threads

The second application is a multigrid solver, also based on BoxLib. Its run parameters were:

  • number of cells: 1024
  • maximum grid size: 64
  • number of steps: 5

Available traces:

MultiGrid_C 16 processes x 4 threads

MultiGrid_C 64 processes x 4 threads

CESAR Mini Apps

We collected traces for two applications from the DOE codesign center CESAR (Center for Exascale Simulation of Advanced Reactors). The first application is the MOC emulator.

MOC emulator parameters:

  • Group Visible: 10
  • Groups per process: 5
  • Angle Visible: 8
  • Angles per process: 4
  • Mesh scale: 4
  • ParallelInX: 2 or 4 (based on rank count)
  • ParallelInY: 2 or 4 (based on rank count)
  • ParallelInZ: 4
  • Trajectory spacing: 1.0
  • Krylov iterations: 50
  • Krylov back vectors: 30

Available traces:

MOC emulator 64 processes

MOC emulator 256 processes

The second application is Nekbone, which solves a Poisson equation using conjugate gradient iteration with no preconditioner. Nekbone represents the main computational kernel of the Nek5000 application.

We used the default problem settings.

Available traces:

Nek_bone 64 processes

Nek_bone 256 processes

