
Berkeley Lab Projects Advance Running, Scheduling of Scientific Workflows on HPC Systems

December 4, 2017

Contact: Jon Bashor, 510-486-5849


Lavanya Ramakrishnan (left) and Gonzalo Rodrigo Álvarez co-authored software to help users run scientific workflows more efficiently on HPC systems.

Researchers are increasingly turning to high performance computing (HPC) systems to carry out scientific workflows, which are executed as a series of steps and programs to study complex problems. However, achieving this can require a number of time-consuming manual tasks by the user and doesn’t always make the most efficient use of the system.

Recently, researchers at the Department of Energy’s (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab) released publicly available software that allows HPC scheduling systems to automatically address these issues.

The software is the result of a two-year collaboration between staff in Berkeley Lab’s Computational Research Division and the Distributed Systems Group at Umeå University in Sweden.

“We looked at the infrastructure that supports HPC for workflows involving simulations and experimental or observational data and came to the conclusion that it used inefficient methods,” said Lavanya Ramakrishnan of the Usable Software Systems Group in the lab’s Data Science and Technology Department. “Currently, HPC schedulers have no knowledge of the workflow structure. Users submit an entire workflow as a single job or submit each stage as an individual job. But if the scheduler could see the entire view of the workflow, including the future jobs in the pipeline, we could potentially have a lot of impact on increasing efficiency.”

As an example, Ramakrishnan said that a workflow submitted as a single job could use 256 processors at one point, but only a single processor during another stage. Yet the schedulers used today hold all 256 processors for the entire run. And since large systems typically run several jobs at the same time, many processors can sit idle, wasting computing cycles and electricity. The only way around this is for the user to submit the workflow as a series of separate jobs, which is a poor use of the user's time and incurs long wait times in the job queue.
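The waste Ramakrishnan describes can be quantified with simple arithmetic. The sketch below (stage durations are illustrative, not from the article) compares the core-hours consumed when a two-stage workflow is submitted as one rigid job versus as per-stage allocations:

```python
# Illustrative two-stage workflow: (cores needed, hours) for each stage.
stages = [(256, 2.0),   # wide parallel simulation stage
          (1, 4.0)]     # narrow serial analysis stage

total_hours = sum(hours for _, hours in stages)
peak_cores = max(cores for cores, _ in stages)

# Single-job submission: the scheduler holds the peak allocation throughout.
single_job_core_hours = peak_cores * total_hours

# Per-stage submission: each stage gets only the cores it needs
# (ignoring the queue wait between jobs, which is the trade-off).
per_stage_core_hours = sum(cores * hours for cores, hours in stages)

idle_core_hours = single_job_core_hours - per_stage_core_hours
print(single_job_core_hours, per_stage_core_hours, idle_core_hours)
# → 1536.0 516.0 1020.0
```

In this toy case, holding the peak allocation for the whole run leaves about two-thirds of the reserved core-hours idle, which is exactly the inefficiency a workflow-aware scheduler can recover.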

In June, the team presented WoAS, or Workflow-Aware Scheduling system, at the 26th International Symposium on High-Performance Parallel and Distributed Computing. WoAS enables existing scheduling algorithms to exploit fine-grained information about a workflow's resource requirements and structure without requiring changes to the algorithms themselves. The team has developed an implementation of WoAS for Slurm, a widely used HPC batch scheduler.
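For context, the per-stage alternative that users fall back on today is typically wired up by hand with Slurm job dependencies. A minimal sketch of that status quo, using standard sbatch flags (the script names and resource numbers are hypothetical):

```shell
# stage1.sh — wide simulation stage (Slurm batch-script directives):
#SBATCH --ntasks=256
#SBATCH --time=02:00:00

# stage2.sh — narrow analysis stage, submitted as a separate job:
#SBATCH --ntasks=1
#SBATCH --time=04:00:00

# Submission: stage 2 sits in the queue until stage 1 completes successfully.
#   jid1=$(sbatch --parsable stage1.sh)
#   sbatch --dependency=afterok:$jid1 stage2.sh
```

Each stage then requests only the cores it needs, but the user pays with manual bookkeeping and a second pass through the job queue; WoAS aims to give the scheduler this whole-pipeline view automatically instead.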

Now the WoAS code has been released as an open-source project and made available to scheduling researchers and developers.

Getting to WoAS

Much of the work leading up to WoAS was done by Gonzalo Rodrigo Álvarez while he was earning his Ph.D. in computer science at Umeå University. He was working on the Frieda project analyzing HPC workloads, and when he talked with scientists at Berkeley Lab he heard their tales of woe regarding workflows: the need for different resources, the long waits between stages of the work, and the resulting long turnaround times for completed workflows.

“I thought it would be pretty easy to do research into schedulers, but I’d need a number of tools to do it,” said Rodrigo, who received his Ph.D. in April 2017. “We thought we could find those tools in the HPC community, but there weren’t many available, and those that we found were very old, and didn’t reflect the state of the art at all.”

That lack led to the second component of the project: the open-source Scheduler Simulation Framework, or ScSF.

“We needed something to test our WoAS algorithms, but there weren’t any solid simulators that captured the behavior of real HPC systems,” Rodrigo said. “ScSF allows us to cover all the steps of scheduling research through simulation. It provides capabilities for workload modeling, workload generation, system simulation, comparative workload analysis and experiment orchestration.”

To get simulated results that the team had confidence in, they needed to run large numbers of scenarios that took all of the variables into account. They ran scenarios over and over, often in parallel, just as a production HPC system would be used. When they were done, they had enough runs to equal 30 years of the simulated lifespan of an HPC system. Then they had to analyze, measure and compare the results in order to extract the data showing that their algorithms worked as envisioned.

ScSF was presented in June at the 21st Workshop on Job Scheduling Strategies for Parallel Processing. More recently, ScSF has been released as an open-source project so that other researchers can take advantage of its features.

WoAS and ScSF are available for download at

This work was supported by the DOE Office of Science (Office of Advanced Scientific Computing Research) and used resources at the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility.

Berkeley Lab is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit

Financial support has also been provided in part by the Swedish Government's strategic effort eSSENCE and the Swedish Research Council (VR) under contract number C0590801 (Cloud Control). 

About Berkeley Lab

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.