Computer Languages & Systems Software

Johnny Corbino

Pacific Time
San Diego, California

Johnny Corbino is an HPC Applications Engineer in the Computer Languages and Systems Software Group. Before joining CLaSS, he worked at ASML in the Netherlands as a software engineer. His research interests combine high-performance computing, mathematical modeling, and artificial intelligence. He earned his Ph.D. in Computational Science from Claremont Graduate University in 2018.

Johnny currently works on the Pagoda Project with UPC++, funded by the DOE/NNSA Exascale Computing Project.

Publication Lists:

Journal Articles

J Corbino, J Castillo, "High-order mimetic finite-difference operators satisfying the extended Gauss divergence theorem", Journal of Computational and Applied Mathematics, 2020, doi: 10.1016/j.cam.2019.06.042

We present high-order mimetic finite-difference operators that satisfy the extended Gauss theorem. These operators have the same order of accuracy in the interior and at the boundary, no free parameters and optimal bandwidth. They are defined over staggered grids, using weighted inner products with a diagonal norm. We present several examples to demonstrate that mimetic finite-difference schemes using these operators produce excellent results.
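For context, the identity these operators are constructed to satisfy can be sketched as follows; the notation (mimetic divergence D, gradient G, boundary operator B, and diagonal weight matrices P and Q) follows the usual Castillo-Grone convention and is not quoted from the paper itself:

% Extended Gauss divergence theorem (continuous form)
\int_{\Omega} f\,\nabla\!\cdot\mathbf{v}\,dV \;+\; \int_{\Omega} \mathbf{v}\cdot\nabla f\,dV \;=\; \oint_{\partial\Omega} f\,\mathbf{v}\cdot\mathbf{n}\,dS

% Discrete mimetic analog on the staggered grid, with weighted inner products
\langle D\mathbf{v},\, f\rangle_{Q} \;+\; \langle \mathbf{v},\, G f\rangle_{P} \;=\; \langle B\mathbf{v},\, f\rangle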

A Boada, J Corbino, J Castillo, "High-order mimetic difference simulation of unsaturated flow using Richards equation", Mathematics in Applied Sciences and Engineering, 2020, doi: 10.5206/mase/10874

The vadose zone is the portion of the subsurface above the water table, and its pore space usually contains air and water. Due to the presence of infiltration, erosion, plant growth, microbiota, contaminant transport, aquifer recharge, and discharge to surface water, it is crucial to predict the transport rate of water and other substances within this zone. However, flow in the vadose zone has many complications, as the parameters that control it are extremely sensitive to the saturation of the media, leading to a nonlinear problem. This flow is referred to as unsaturated flow and is governed by Richards equation. Analytical solutions for this equation exist only for simplified cases, so most practical situations require a numerical solution. Nevertheless, the nonlinear nature of Richards equation introduces challenges that cause numerical solutions for this problem to be computationally expensive and, in some cases, unreliable. High-order mimetic finite difference operators are discrete analogs of the continuous differential operators and have been extensively used in the fields of fluid and solid mechanics. In this work, we present a numerical approach involving high-order mimetic operators along with a Newton root-finding algorithm for the treatment of the nonlinear component. A fully implicit time discretization scheme is used to deal with the problem's stiffness.
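As a sketch of the setup described above, the standard mixed form of Richards equation (pressure head \psi, water content \theta, hydraulic conductivity K) and a backward-Euler/Newton treatment look as follows; the discrete operators and Jacobian details are generic assumptions rather than the paper's exact scheme:

% Richards equation, mixed form
\frac{\partial \theta(\psi)}{\partial t} \;=\; \nabla\cdot\bigl(K(\psi)\,\nabla(\psi + z)\bigr)

% Fully implicit (backward Euler) residual at time level n+1, with discrete operators \nabla_h
F(\psi^{\,n+1}) \;=\; \theta(\psi^{\,n+1}) - \theta(\psi^{\,n}) - \Delta t\,\nabla_h\!\cdot\bigl(K(\psi^{\,n+1})\,\nabla_h(\psi^{\,n+1} + z)\bigr) \;=\; 0

% Newton iteration for the nonlinear system, with Jacobian J = \partial F / \partial \psi
\psi^{\,n+1,\,m+1} \;=\; \psi^{\,n+1,\,m} - \bigl[J(\psi^{\,n+1,\,m})\bigr]^{-1} F(\psi^{\,n+1,\,m})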

J Corbino, J Castillo, C Paolini, "SubFlow: An open-source, object-oriented application for modeling geologic storage of CO2", Journal of Hydrologic Engineering, 2016, doi: 10.1190/ice2016-6517529.1

The capture of carbon dioxide for its subsequent storage in brine-saturated reservoirs or depleted oil fields has become a significant part of US energy policy. In this work, we focus on the design and development of a novel CCUS application to model carbon dioxide injection in brine-saturated reservoirs. SubFlow is written in C++ and uses a relational database to store user session and simulation parameters such as mineral, solute, kinetic reaction, lithology, formation, and injection water data. SubFlow is capable of 3D real-time visualization, distributed-parallel execution on massively parallel processor (MPP) systems using OpenMP and MPI, and features an intuitive user interface developed using Qt. SubFlow uses a mimetic discretization method (MDM) for solving conservation of solute mass, energy, and fluid momentum, and the finite element method (FEM) for solving the pressure, rock stress, and fracture fields. SubFlow is implemented with the Mimetic Methods Toolkit (MTK), a C++ API which allows for an intuitive implementation of the Castillo-Grone-based Mimetic Discretization Methods. The FEM is second order accurate while the MDM is capable of fourth order accuracy. OpenGL is used to render pressure, temperature, stress, velocity, and solute concentration fields on a 3D mesh that represents a reservoir. Results from selected simulations are compared with those produced by TOUGHREACT and STOMP.
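The abstract mentions distributed-parallel execution with OpenMP and MPI; a minimal, generic sketch of that hybrid pattern is shown below. It is illustrative only and not SubFlow source code; the mesh size and the update kernel are placeholders.

// Minimal hybrid MPI+OpenMP sketch: each rank owns a slab of the mesh,
// OpenMP threads update the cells within it, and MPI reduces a diagnostic.
#include <mpi.h>
#include <omp.h>
#include <vector>
#include <cstdio>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank = 0, nranks = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nranks);

  const int n_local = 1000000;              // cells owned by this rank (placeholder)
  std::vector<double> field(n_local, 0.0);  // e.g. pressure or solute concentration

  // Thread-parallel update of the locally owned cells.
  #pragma omp parallel for
  for (int i = 0; i < n_local; ++i)
    field[i] += 1.0;                        // stand-in for the real stencil kernel

  // Reduce a diagnostic (e.g. total mass) across all ranks.
  double local_sum = 0.0, global_sum = 0.0;
  #pragma omp parallel for reduction(+:local_sum)
  for (int i = 0; i < n_local; ++i) local_sum += field[i];
  MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

  if (rank == 0) std::printf("global sum = %g\n", global_sum);
  MPI_Finalize();
  return 0;
}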

Conference Papers

Julian Bellavita, Mathias Jacquelin, Esmond G. Ng, Dan Bonachea, Johnny Corbino, Paul H. Hargrove, "symPACK: A GPU-Capable Fan-Out Sparse Cholesky Solver", 2023 IEEE/ACM Parallel Applications Workshop, Alternatives To MPI+X (PAW-ATM'23), ACM, November 13, 2023, doi: 10.1145/3624062.3624600

Sparse symmetric positive definite systems of equations are ubiquitous in scientific workloads and applications. Parallel sparse Cholesky factorization is the method of choice for solving such linear systems. Therefore, the development of parallel sparse Cholesky codes that can efficiently run on today’s large-scale heterogeneous distributed-memory platforms is of vital importance. Modern supercomputers offer nodes that contain a mix of CPUs and GPUs. To fully utilize the computing power of these nodes, scientific codes must be adapted to offload expensive computations to GPUs.

We present symPACK, a GPU-capable parallel sparse Cholesky solver that uses one-sided communication primitives and remote procedure calls provided by the UPC++ library. We also utilize the UPC++ "memory kinds" feature to enable efficient communication of GPU-resident data. We show that on a number of large problems, symPACK outperforms comparable state-of-the-art GPU-capable Cholesky factorization codes by up to 14x on the NERSC Perlmutter supercomputer.
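As a rough illustration of the UPC++ features the solver relies on, the sketch below collectively opens a GPU segment through the memory-kinds interface and moves host data into a neighbor's GPU-resident buffer with a one-sided copy. It is a minimal sketch, not symPACK code; the segment size, buffer names, and the ring-neighbor exchange are illustrative assumptions.

// Hedged sketch of UPC++ memory kinds + one-sided communication (illustrative only).
#include <upcxx/upcxx.hpp>
#include <vector>

int main() {
  upcxx::init();
  const std::size_t N = 1 << 20;  // elements per rank (placeholder size)

  // Collectively open a device segment on each process's GPU; gpu_default_device
  // resolves to the configured kind (e.g. CUDA).
  auto gpu_alloc = upcxx::make_gpu_allocator(N * sizeof(double));
  upcxx::global_ptr<double, upcxx::gpu_default_device::kind> gpu_buf =
      gpu_alloc.allocate<double>(N);

  // Publish the GPU pointers so ranks can target each other's device memory.
  upcxx::dist_object<upcxx::global_ptr<double, upcxx::gpu_default_device::kind>>
      dobj(gpu_buf);
  int neighbor = (upcxx::rank_me() + 1) % upcxx::rank_n();
  auto remote_gpu = dobj.fetch(neighbor).wait();

  // One-sided, asynchronous copy from host memory into the neighbor's GPU buffer.
  std::vector<double> host(N, double(upcxx::rank_me()));
  upcxx::copy(host.data(), remote_gpu, N).wait();

  upcxx::barrier();      // make sure all transfers have landed
  gpu_alloc.destroy();   // collectively release the device segment
  upcxx::finalize();
  return 0;
}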

Presentation/Talks

Johnny Corbino, UPC++’s Crucial Role in Quantum Chemistry, UPC++ Community BOF Virtual Symposium, February 16, 2023, doi: 10.25344/S4XG6F

SubFlow: Modeling geological sequestration of carbon dioxide with mimetic discretization methods, Joint Mathematics Meetings (ACM), 2018.

Mimetic discretization methods for solving electrical conduction problems in laminated composites with crack, 14th U.S. National Congress on Computational Mechanics, 2017.

An overview of web-based CO2 subsurface flow modeling, 2013.

Reports

John Bachan, Scott B. Baden, Dan Bonachea, Johnny Corbino, Max Grossman, Paul H. Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, Daniel Waters, "UPC++ v1.0 Programmer’s Guide, Revision 2023.9.0", Lawrence Berkeley National Laboratory Tech Report LBNL-2001560, December 2023, doi: 10.25344/S4P01J

UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming. It is designed for writing efficient, scalable parallel programs on distributed-memory parallel computers. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). The UPC++ control model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. The PGAS memory model additionally provides one-sided RMA communication to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ also features Remote Procedure Call (RPC) communication, making it easy to move computation to operate on data that resides on remote processes. UPC++ was designed to support exascale high-performance computing, and the library interfaces and implementation are focused on maximizing scalability. In UPC++, all communication operations are syntactically explicit, which encourages programmers to consider the costs associated with communication and data movement. Moreover, all communication operations are asynchronous by default, encouraging programmers to seek opportunities for overlapping communication latencies with other useful work. UPC++ provides expressive and composable abstractions designed for efficiently managing aggressive use of asynchrony in programs. Together, these design principles are intended to enable programmers to write applications using UPC++ that perform well even on hundreds of thousands of cores.
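A minimal sketch of the two communication facilities described above, one-sided RMA (rput) and RPC, is shown below; the ring-neighbor pattern and variable names are illustrative and not taken from the guide.

// Minimal UPC++ sketch of one-sided RMA and RPC (illustrative only).
#include <upcxx/upcxx.hpp>
#include <iostream>

int main() {
  upcxx::init();
  int me = upcxx::rank_me(), n = upcxx::rank_n();

  // Each rank allocates one slot in its shared segment and publishes the pointer.
  upcxx::global_ptr<int> slot = upcxx::new_<int>(0);
  upcxx::dist_object<upcxx::global_ptr<int>> dslot(slot);

  int right = (me + 1) % n;
  upcxx::global_ptr<int> remote_slot = dslot.fetch(right).wait();

  // One-sided RMA: write into the right neighbor's shared memory, asynchronously.
  upcxx::future<> put_done = upcxx::rput(me, remote_slot);

  // RPC: run a function on the neighbor; it executes when that rank makes progress.
  upcxx::future<int> doubled =
      upcxx::rpc(right, [](int x) { return 2 * x; }, me);

  put_done.wait();
  std::cout << "rank " << me << ": rpc result = " << doubled.wait() << "\n";

  upcxx::barrier();        // ensure all puts have landed before anyone frees memory
  upcxx::delete_(slot);
  upcxx::finalize();
  return 0;
}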

John Bachan, Scott B. Baden, Dan Bonachea, Johnny Corbino, Max Grossman, Paul H. Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, Daniel Waters, "UPC++ v1.0 Programmer’s Guide, Revision 2023.3.0", Lawrence Berkeley National Laboratory Tech Report LBNL-2001517, March 30, 2023, doi: 10.25344/S43591

UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming. It is designed for writing efficient, scalable parallel programs on distributed-memory parallel computers. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). The UPC++ control model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. The PGAS memory model additionally provides one-sided RMA communication to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ also features Remote Procedure Call (RPC) communication, making it easy to move computation to operate on data that resides on remote processes.

UPC++ was designed to support exascale high-performance computing, and the library interfaces and implementation are focused on maximizing scalability. In UPC++, all communication operations are syntactically explicit, which encourages programmers to consider the costs associated with communication and data movement. Moreover, all communication operations are asynchronous by default, encouraging programmers to seek opportunities for overlapping communication latencies with other useful work. UPC++ provides expressive and composable abstractions designed for efficiently managing aggressive use of asynchrony in programs. Together, these design principles are intended to enable programmers to write applications using UPC++ that perform well even on hundreds of thousands of cores.

John Bachan, Scott B. Baden, Dan Bonachea, Johnny Corbino, Max Grossman, Paul H. Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, Daniel Waters, "UPC++ v1.0 Programmer’s Guide, Revision 2022.9.0", Lawrence Berkeley National Laboratory Tech Report LBNL-2001479, September 30, 2022, doi: 10.25344/S4QW26

UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming. It is designed for writing efficient, scalable parallel programs on distributed-memory parallel computers. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). The UPC++ control model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. The PGAS memory model additionally provides one-sided RMA communication to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ also features Remote Procedure Call (RPC) communication, making it easy to move computation to operate on data that resides on remote processes.

UPC++ was designed to support exascale high-performance computing, and the library interfaces and implementation are focused on maximizing scalability. In UPC++, all communication operations are syntactically explicit, which encourages programmers to consider the costs associated with communication and data movement. Moreover, all communication operations are asynchronous by default, encouraging programmers to seek opportunities for overlapping communication latencies with other useful work. UPC++ provides expressive and composable abstractions designed for efficiently managing aggressive use of asynchrony in programs. Together, these design principles are intended to enable programmers to write applications using UPC++ that perform well even on hundreds of thousands of cores.

Posters

Paul H. Hargrove, Dan Bonachea, Johnny Corbino, Amir Kamil, Colin A. MacLean, Damian Rouson, Daniel Waters, "UPC++ and GASNet: PGAS Support for Exascale Apps and Runtimes (ECP'23)", Poster at Exascale Computing Project (ECP) Annual Meeting 2023, January 2023.

The Pagoda project is developing a programming system to support HPC application development using the Partitioned Global Address Space (PGAS) model. The first component is GASNet-EX, a portable, high-performance, global-address-space communication library. The second component is UPC++, a C++ template library. Together, these libraries enable agile, lightweight communication such as arises in irregular applications, libraries and frameworks running on exascale systems.

GASNet-EX is a portable, high-performance communications middleware library which leverages hardware support to implement Remote Memory Access (RMA) and Active Message communication primitives. GASNet-EX supports a broad ecosystem of alternative HPC programming models, including UPC++, Legion, Chapel and multiple implementations of UPC and Fortran Coarrays. GASNet-EX is implemented directly over the native APIs for networks of interest in HPC. The tight semantic match of GASNet-EX APIs to the client requirements and hardware capabilities often yields better performance than competing libraries.

UPC++ provides high-level productivity abstractions appropriate for Partitioned Global Address Space (PGAS) programming such as: remote memory access (RMA), remote procedure call (RPC), support for accelerators (e.g. GPUs), and mechanisms for aggressive asynchrony to hide communication costs. UPC++ implements communication using GASNet-EX, delivering high performance and portability from laptops to exascale supercomputers. HPC application software using UPC++ includes: MetaHipMer2 metagenome assembler, SIMCoV viral propagation simulation, NWChemEx TAMM, and graph computation kernels from ExaGraph.
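As an illustration of the aggressive asynchrony mentioned above, UPC++ operations return futures that can be chained so communication overlaps with local computation. The following is a minimal sketch, not application code from the listed projects; the fetched value and the "local work" are placeholders, and the read targets the rank's own shared segment only to keep the example short.

// Sketch of overlapping a one-sided read with local work via UPC++ futures
// (illustrative only).
#include <upcxx/upcxx.hpp>
#include <iostream>

int main() {
  upcxx::init();

  // A value in this rank's shared segment; in real code the global_ptr
  // would typically refer to memory on a remote rank.
  upcxx::global_ptr<double> gptr = upcxx::new_<double>(42.0);

  // Start a one-sided read and keep computing while it is in flight.
  upcxx::future<double> f = upcxx::rget(gptr);
  double local = 2.0 * upcxx::rank_me();   // stand-in for useful local work

  // Chain a completion callback instead of blocking right away.
  upcxx::future<double> combined =
      f.then([local](double fetched) { return fetched + local; });

  std::cout << "rank " << upcxx::rank_me()
            << ": combined = " << combined.wait() << "\n";

  upcxx::delete_(gptr);
  upcxx::finalize();
  return 0;
}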