Publications
2024
Oscar Antepara, Samuel Williams, Hans Johansen, Mary Hall, "High-Performance, Scalable Geometric Multigrid via Fine-Grain Data Blocking for GPUs", Performance, Portability & Productivity in HPC (P3HPC), November 10, 2024,
Mahesh Lakshminarasimhan, Oscar Antepara, Tuowen Zhao, Benjamin Sepanski, Protonu Basu, Hans Johansen, Mary Hall, Samuel Williams, "Bricks: A high-performance portability layer for computations on block-structured grids", The International Journal of High Performance Computing Applications (IJHPCA), August 19, 2024, doi: 10.1177/1094342024126828
Will Thacher, Hans Johansen, Daniel Martin, "A high order cut-cell method for solving the shallow-shelf equations", Journal of Computational Science, August 1, 2024, 80, doi: 10.1016/j.jocs.2024.102319
David Trebotich, Randolph R Settgast, Terry Ligocki, William Tobin, Gregory H Miller, Sergi Molins, Carl I Steefel, "A multiphysics coupling framework for exascale simulation of subsurface fracture evolution", Frontiers in High Performance Computing, June 30, 2024, 2, doi: 10.3389/fhpcp.2024.1416727
- Download File: FrontiersHPC2024.pdf (pdf: 1.4 MB)
Sergi Molins, David Trebotich, Carl I. Steefel, "Approaches for the simulation of coupled processes in evolving fractured porous media enabled by exascale computing", Computing in Science & Engineering, May 23, 2024, doi: 10.1109/MCSE.2024.3403983
- Download File: CiSE2024.pdf (pdf: 6.6 MB)
Lois Curfman McInnes, Paige Kinsley, Daniel Martin, Suzanne Parete-Koon, Sreeranjani (Jini) Ramprakash, "Building a Diverse and Inclusive HPC Community for Mission-Driven Team Science", Computing in Science & Engineering, April 12, 2024, 25(5):31-38, doi: 10.1109/MCSE.2023.3348943
David Trebotich, "Exascale CFD in Heterogeneous Systems", Journal of Fluids Engineering, February 9, 2024, 146(4):041104, doi: 10.1115/1.4064534
- Download File: FE-23-1357_AuthorProof.pdf (pdf: 1.5 MB)
2023
Daniel F. Martin, Steven B. Roberts, Hans Johansen, David J Gardner, Carol S Woodward, "Impacts of improved time evolution in BISICLES using SUNDIALS", December 14, 2023,
- Download File: AGU2023Sundials.pdf (pdf: 1 MB)
John Bachan, Scott B. Baden, Dan Bonachea, Johnny Corbino, Max Grossman, Paul H. Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, Daniel Waters, "UPC++ v1.0 Programmer’s Guide, Revision 2023.9.0", Lawrence Berkeley National Laboratory Tech Report LBNL-2001560, December 2023, doi: 10.25344/S4P01J
UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming. It is designed for writing efficient, scalable parallel programs on distributed-memory parallel computers. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). The UPC++ control model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. The PGAS memory model additionally provides one-sided RMA communication to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ also features Remote Procedure Call (RPC) communication, making it easy to move computation to operate on data that resides on remote processes. UPC++ was designed to support exascale high-performance computing, and the library interfaces and implementation are focused on maximizing scalability. In UPC++, all communication operations are syntactically explicit, which encourages programmers to consider the costs associated with communication and data movement. Moreover, all communication operations are asynchronous by default, encouraging programmers to seek opportunities for overlapping communication latencies with other useful work. UPC++ provides expressive and composable abstractions designed for efficiently managing aggressive use of asynchrony in programs. Together, these design principles are intended to enable programmers to write applications using UPC++ that perform well even on hundreds of thousands of cores.
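The following sketch is illustrative only (it is not taken from the guide; the variable names and values are assumptions) and shows the facilities the abstract describes: SPMD execution, a shared-segment allocation addressed through a global pointer, one-sided RMA with rput, and an RPC that moves computation to the data's owner, all completed asynchronously through futures.

```cpp
// Minimal UPC++ sketch: SPMD launch, shared-segment allocation, one-sided RMA, and RPC.
#include <upcxx/upcxx.hpp>
#include <iostream>

int main() {
  upcxx::init();                                   // every process runs main() (SPMD)
  int me = upcxx::rank_me(), n = upcxx::rank_n();

  // Rank 0 allocates one integer in its shared segment and broadcasts the
  // resulting global pointer so every rank can address that location.
  upcxx::global_ptr<int> gp;
  if (me == 0) gp = upcxx::new_<int>(0);
  gp = upcxx::broadcast(gp, 0).wait();

  // One-sided RMA: the last rank writes its rank id into rank 0's shared memory.
  if (me == n - 1) upcxx::rput(me, gp).wait();
  upcxx::barrier();

  // RPC: move the computation to the data's owner instead of moving the data.
  int observed = upcxx::rpc(0,
      [](upcxx::global_ptr<int> p) { return *p.local(); },  // executes on rank 0
      gp).wait();
  std::cout << "rank " << me << " sees " << observed << std::endl;

  upcxx::barrier();                                // all RPCs done before freeing
  if (me == 0) upcxx::delete_(gp);
  upcxx::finalize();
  return 0;
}
```

A program like this would typically be built with the upcxx compiler wrapper shipped with the library and launched with upcxx-run; every communication call above is explicit and returns a future, reflecting the asynchronous-by-default design described in the abstract.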
Duncan Carpenter, Anjali Sandip, Samuel Kachuck, Daniel Martin, "Does Damaged Ice affect Ice Sheet Evolution?", American Geophysical Union Fall Meeting, December 14, 2023,
- Download File: CarpenterAGU2023.pdf (pdf: 3.1 MB)
Oscar Antepara, Hans Johansen, Samuel Williams, Tuowen Zhao, Samantha Hirsch, Priya Goyal, Mary Hall, "Performance portability evaluation of blocked stencil computations on GPUs", International Workshop on Performance, Portability & Productivity in HPC (P3HPC), November 2023,
- Download File: P3HPC23_bricks_final-v4.pdf (pdf: 684 KB)
Will Thacher, Hans Johansen, Daniel Martin, "A high order Cartesian grid, finite volume method for elliptic interface problems", Journal of Computational Physics, October 15, 2023, 491, doi: 10.1016/j.jcp.2023.112351
S. Bevan, S. Cornford, L. Gilbert, I. Otosaka, D. Martin, T. Surawy-Stepney, "Amundsen Sea Embayment ice-sheet mass-loss predictions to 2050 calibrated using observations of velocity and elevation change", Journal of Glaciology, August 14, 2023, 1-11, doi: 10.1017/jog.2023.57
David Trebotich, Terry Ligocki, "High Resolution Simulation of Fluid Flow in Press Felts Used in Paper Manufacturing", Album of Porous Media, edited by E.F. Médici, A.D. Otero, (Springer Cham: April 14, 2023) Pages: 132 doi: 10.1007/978-3-031-23800-0_109
Tim Kneafsey, David Trebotich, Terry Ligocki, "Direct Numerical Simulation of Flow Through Nanoscale Shale Pores in a Mesoscale Sample", Album of Porous Media, edited by E.F. Médici, A.D. Otero, (Springer Cham: April 14, 2023) Pages: 87 doi: 10.1007/978-3-031-23800-0_69
Sergi Molins, David Trebotich, "Pore-Scale Controls on Calcite Dissolution using Direct Numerical Simulations", Album of Porous Media, edited by E.F. Médici, A.D. Otero, (Springer Cham: April 14, 2023) Pages: 135 doi: 10.1007/978-3-031-23800-0_112
John Bachan, Scott B. Baden, Dan Bonachea, Johnny Corbino, Max Grossman, Paul H. Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, Daniel Waters, "UPC++ v1.0 Programmer’s Guide, Revision 2023.3.0", Lawrence Berkeley National Laboratory Tech Report, March 30, 2023, LBNL 2001517, doi: 10.25344/S43591
UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming. It is designed for writing efficient, scalable parallel programs on distributed-memory parallel computers. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). The UPC++ control model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. The PGAS memory model additionally provides one-sided RMA communication to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ also features Remote Procedure Call (RPC) communication, making it easy to move computation to operate on data that resides on remote processes.
UPC++ was designed to support exascale high-performance computing, and the library interfaces and implementation are focused on maximizing scalability. In UPC++, all communication operations are syntactically explicit, which encourages programmers to consider the costs associated with communication and data movement. Moreover, all communication operations are asynchronous by default, encouraging programmers to seek opportunities for overlapping communication latencies with other useful work. UPC++ provides expressive and composable abstractions designed for efficiently managing aggressive use of asynchrony in programs. Together, these design principles are intended to enable programmers to write applications using UPC++ that perform well even on hundreds of thousands of cores.
2022
Daniel Martin, Samuel Kachuck, Joanna Millstein, Brent Minchew, "Examining the Sensitivity of Ice Sheet Models to Updates in Rheology (n=4)", AGU Fall Meeting, December 15, 2022,
- Download File: AGU2022-1.pdf (pdf: 508 KB)
Benjamin Sepanski, Tuowen Zhao, Hans Johansen, Samuel Williams, "Maximizing Performance Through Memory Hierarchy-Driven Data Layout Transformations", MCHPC, November 2022,
- Download File: MCHPC22_final.pdf (pdf: 401 KB)
John Bachan, Scott B. Baden, Dan Bonachea, Johnny Corbino, Max Grossman, Paul H. Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, Daniel Waters, "UPC++ v1.0 Programmer’s Guide, Revision 2022.9.0", Lawrence Berkeley National Laboratory Tech Report, September 30, 2022, LBNL 2001479, doi: 10.25344/S4QW26
UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming. It is designed for writing efficient, scalable parallel programs on distributed-memory parallel computers. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). The UPC++ control model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. The PGAS memory model additionally provides one-sided RMA communication to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ also features Remote Procedure Call (RPC) communication, making it easy to move computation to operate on data that resides on remote processes.
UPC++ was designed to support exascale high-performance computing, and the library interfaces and implementation are focused on maximizing scalability. In UPC++, all communication operations are syntactically explicit, which encourages programmers to consider the costs associated with communication and data movement. Moreover, all communication operations are asynchronous by default, encouraging programmers to seek opportunities for overlapping communication latencies with other useful work. UPC++ provides expressive and composable abstractions designed for efficiently managing aggressive use of asynchrony in programs. Together, these design principles are intended to enable programmers to write applications using UPC++ that perform well even on hundreds of thousands of cores.
Anne M. Felden, Daniel F. Martin, Esmond G. Ng, "SUHMO: an AMR SUbglacial Hydrology MOdel v1.0", Geosci. Model Dev. Discuss., July 27, 2022,
- Download File: gmd-2022-190.pdf (pdf: 5.5 MB)
John Bachan, Scott B. Baden, Dan Bonachea, Max Grossman, Paul H. Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, Daniel Waters, "UPC++ v1.0 Programmer’s Guide, Revision 2022.3.0", Lawrence Berkeley National Laboratory Tech Report, March 2022, LBNL 2001453, doi: 10.25344/S41C7Q
UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming. It is designed for writing efficient, scalable parallel programs on distributed-memory parallel computers. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). The UPC++ control model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. The PGAS memory model additionally provides one-sided RMA communication to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ also features Remote Procedure Call (RPC) communication, making it easy to move computation to operate on data that resides on remote processes.
UPC++ was designed to support exascale high-performance computing, and the library interfaces and implementation are focused on maximizing scalability. In UPC++, all communication operations are syntactically explicit, which encourages programmers to consider the costs associated with communication and data movement. Moreover, all communication operations are asynchronous by default, encouraging programmers to seek opportunities for overlapping communication latencies with other useful work. UPC++ provides expressive and composable abstractions designed for efficiently managing aggressive use of asynchrony in programs. Together, these design principles are intended to enable programmers to write applications using UPC++ that perform well even on hundreds of thousands of cores.
Samuel B. Kachuck, Morgan Whitcomb, Jeremy N. Bassis, Daniel F. Martin, Stephen F. Price, "Simulating ice-shelf extent using damage mechanics", Journal of Glaciology, March 7, 2022, 68(271):987-998, doi: 10.1017/jog.2022.12
2021
Samuel Benjamin Kachuck, Morgan Whitcomb, Jeremy N Bassis, Daniel F Martin, and Stephen F Price, "When are (simulations of) ice shelves stable? Stabilizing forces in fracture-permitting models", AGU Fall Meeting, December 16, 2021,
Daniel F. Martin, Stephen L. Cornford, Esmond G. Ng, Impact of Improved Bedrock Geometry and Basal Friction Relations on Antarctic Vulnerability to Regional Ice Shelf Collapse, American Geophysical Union Fall Meeting, December 15, 2021,
Anne M. Felden, Daniel F. Martin, Esmond G. Ng, SUHMO: An SUbglacial Hydrology MOdel based on the Chombo AMR framework, American Geophysical Union Fall Meeting, December 13, 2021,
Courtney Shafer, Daniel F Martin and Esmond G Ng, "Comparing the Shallow-Shelf and L1L2 Approximations using BISICLES in the Context of MISMIP+ with Buttressing Effects", AGU Fall Meeting, December 13, 2021,
John Bachan, Scott B. Baden, Dan Bonachea, Max Grossman, Paul H. Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, Daniel Waters, "UPC++ v1.0 Programmer’s Guide, Revision 2021.9.0", Lawrence Berkeley National Laboratory Tech Report, September 2021, LBNL 2001424, doi: 10.25344/S4SW2T
UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming. It is designed for writing efficient, scalable parallel programs on distributed-memory parallel computers. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). The UPC++ control model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. The PGAS memory model additionally provides one-sided RMA communication to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ also features Remote Procedure Call (RPC) communication, making it easy to move computation to operate on data that resides on remote processes.
UPC++ was designed to support exascale high-performance computing, and the library interfaces and implementation are focused on maximizing scalability. In UPC++, all communication operations are syntactically explicit, which encourages programmers to consider the costs associated with communication and data movement. Moreover, all communication operations are asynchronous by default, encouraging programmers to seek opportunities for overlapping communication latencies with other useful work. UPC++ provides expressive and composable abstractions designed for efficiently managing aggressive use of asynchrony in programs. Together, these design principles are intended to enable programmers to write applications using UPC++ that perform well even on hundreds of thousands of cores.
Thomas M Evans, Andrew Siegel, Erik W Draeger, Jack Deslippe, Marianne M Francois, Timothy C Germann, William E Hart, Daniel F Martin, "A survey of software implementations used by application codes in the Exascale Computing Project", The International Journal of High Performance Computing Applications, June 25, 2021, doi: 10.1177/10943420211028940
- Download File: ijhpc-2021.pdf (pdf: 242 KB)
Tamsin L. Edwards, Sophie Nowicki, Ben Marzeion, Regine Hock, Heiko Goelzer, Hélène Seroussi, Nicolas C. Jourdain, Donald A. Slater, Fiona E. Turner, Christopher J. Smith, Christine M. McKenna, Erika Simon, Ayako Abe-Ouchi, Jonathan M. Gregory, Eric Larour, William H. Lipscomb, Antony J. Payne, Andrew Shepherd, Cécile Agosta, Patrick Alexander, Torsten Albrecht, Brian Anderson, Xylar Asay-Davis, Andy Aschwanden, Alice Barthel, Andrew Bliss, Reinhard Calov, Christopher Chambers, Nicolas Champollion, Youngmin Choi, Richard Cullather, Joshua Cuzzone, Christophe Dumas, Denis Felikson, Xavier Fettweis, Koji Fujita, Benjamin K. Galton-Fenzi, Rupert Gladstone, Nicholas R. Golledge, Ralf Greve, Tore Hattermann, Matthew J. Hoffman, Angelika Humbert, Matthias Huss, Philippe Huybrechts, Walter Immerzeel, Thomas Kleiner, Philip Kraaijenbrink, Sébastien Le clec’h, Victoria Lee, Gunter R. Leguy, Christopher M. Little, Daniel P. Lowry, Jan-Hendrik Malles, Daniel F. Martin, Fabien Maussion, Mathieu Morlighem, James F. O’Neill, Isabel Nias, Frank Pattyn, Tyler Pelle, Stephen F. Price, Aurélien Quiquet, Valentina Radić, Ronja Reese, David R. Rounce, Martin Rückamp, Akiko Sakai, Courtney Shafer, Nicole-Jeanne Schlegel, Sarah Shannon, Robin S. Smith, Fiammetta Straneo, Sainan Sun, Lev Tarasov, Luke D. Trusel, Jonas Van Breedam, Roderik van de Wal, Michiel van den Broeke, Ricarda Winkelmann, Harry Zekollari, Chen Zhao, Tong Zhang, Thomas Zwinger, "Projected land ice contributions to twenty-first-century sea level rise", Nature, May 5, 2021, 593:74-82, doi: 10.1038/s41586-021-03302-y
- Download File: Edwards-et-al-2021-Nature-preprint.pdf (pdf: 40 MB)
T. Groves, N. Ravichandrasekaran, B. Cook, N. Keen, D. Trebotich, N. Wright, B. Alverson, D. Roweth, K. Underwood, "Not All Applications Have Boring Communication Patterns: Profiling Message Matching with BMM", Concurrency and Computation: Practice and Experience, April 26, 2021, doi: 10.1002/cpe.6380
Tuowen Zhao, Mary Hall, Hans Johansen, Samuel Williams, "Improving Communication by Optimizing On-Node Data Movement with Data Layout", PPoPP, February 2021,
- Download File: PPoPP-Bricks-MPI-final.pdf (pdf: 864 KB)
2020
John Bachan, Scott B. Baden, Dan Bonachea, Max Grossman, Paul H. Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ v1.0 Programmer’s Guide, Revision 2020.10.0", Lawrence Berkeley National Laboratory Tech Report, October 2020, LBNL 2001368, doi: 10.25344/S4HG6Q
UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
Sun, S., Pattyn, F., Simon, E., Albrecht, T., Cornford, S., Calov, R., . . . Zhang, T., "Antarctic ice sheet response to sudden and sustained ice-shelf collapse (ABUMIP)", Journal of Glaciology, September 14, 2020, 1-14, doi: 10.1017/jog.2020.67
S. B. Kachuck, D. F. Martin, J. N. Bassis, S. F. Price, "Rapid viscoelastic deformation slows marine ice sheet instability at Pine Island Glacier", Geophysical Research Letters, May 7, 2020, 47, doi: 10.1029/2019GL086446
Daniel F. Martin, Stephen L. Cornford, Esmond G Ng, Effect of Improved Bedrock Geometry on Antarctic Vulnerability to Regional Ice Shelf Collapse, European Geosciences Union 2020 General Assembly, May 5, 2020,
- Download File: EGU2020-10033-presentation.pdf (pdf: 467 KB)
Andrew Wells, James Parkinson, Daniel F Martin, Three-dimensional convection, phase change, and solute transport in mushy sea ice, European Geosciences Union 2020 General Assembly, May 4, 2020,
- Download File: EGU2020-12685-presentation.pdf (pdf: 4.1 MB)
John Bachan, Scott B. Baden, Dan Bonachea, Max Grossman, Paul H. Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ v1.0 Programmer’s Guide, Revision 2020.3.0", Lawrence Berkeley National Laboratory Tech Report, March 2020, LBNL 2001269, doi: 10.25344/S4P88Z
UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
Levermann, A., Winkelmann, R., Albrecht, T., Goelzer, H., Golledge, N. R., Greve, R., Huybrechts, P., Jordan, J., Leguy, G., Martin, D., Morlighem, M., Pattyn, F., Pollard, D., Quiquet, A., Rodehacke, C., Seroussi, H., Sutter, J., Zhang, T., Van Breedam, J., Calov, R., DeConto, R., Dumas, C., Garbe, J., Gudmundsson, G. H., Hoffman, M. J., Humbert, A., Kleiner, T., Lipscomb, W. H., Meinshausen, M., Ng, E., Nowicki, S. M. J., Perego, M., Price, S. F., Saito, F., Schlegel, N.-J., Sun, S., van de Wal, R. S. W, "Projecting Antarctica’s contribution to future sea level rise from basal ice shelf melt using linear response functions of 16 ice sheet models (LARMIP-2)", Earth System Dynamics, February 14, 2020, 11:35–76, doi: 10.5194/esd-11-35-2020
Sergi Molins, Cyprien Soulaine, Nikolaos I. Prasianakis, Aida Abbasi, Philippe Poncet, Anthony J. C. Ladd, Vitalii Starchenko, Sophie Roman, David Trebotich, Hamdi Tchelepi, Carl I. Steefel, "Simulation of mineral dissolution at the pore scale with evolving fluid-solid interfaces: review of approaches and benchmark problem set", Computational Geosciences, January 23, 2020, doi: 10.1007/s10596-019-09903-x
- Download File: Molins2020-Article-SimulationOfMineralDissolution.pdf (pdf: 5.3 MB)
James R.G. Parkinson, Daniel F. Martin, Andrew J. Wells, Richard F. Katz, "Modelling binary alloy solidification with adaptive mesh refinement", Journal of Computational Physics: X, January 7, 2020, 5, doi: 10.1016/j.jcpx.2019.100043
- Download File: 1-s2.0-S2590055219300599-main.pdf (pdf: 1.1 MB)
2019
Hans Johansen, Daniel Martin, Esmond Ng, "High-resolution Treatment of Topography and Grounding Line Dynamics in BISICLES", AGU 2019 Fall Meeting, December 13, 2019,
Daniel F. Martin, James Parkinson, Andrew Wells, Richard Katz, "3D convection, phase change, and solute transport in mushy sea ice", AGU 2019 Fall Meeting, December 12, 2019,
- Download File: Martin-AGU2019.pdf (pdf: 761 KB)
Samuel Kachuck, Daniel Martin, Jeremy Bassis, Stephen Price, "Rapid viscoelastic deformation slows marine ice sheet instability at Pine Island Glacier", AGU 2019 Fall Meeting, December 10, 2019,
- Download File: AGU2019-PineIsland-GIA.pdf (pdf: 1.6 MB)
Chaincy Kuo, Daniel Feldman, Daniel Martin, "Quantification of seasonal heat retention by sea-ice: calculations from analytic surface-energy balance", AGU Fall Meeting 2019, December 9, 2019,
Tuowen Zhao, Mary Hall, Samuel Williams, Hans Johansen, "Exploiting Reuse and Vectorization in Blocked Stencil Computations on CPUs and GPUs", Supercomputing (SC), November 2019,
- Download File: SC19-VectorScatter-final.pdf (pdf: 1019 KB)
Mark Adams, Stephen Cornford, Daniel Martin, Peter McCorquodale, "Composite matrix construction for structured grid adaptive mesh refinement", Computer Physics Communications, November 2019, 244:35-39, doi: 10.1016/j.cpc.2019.07.006
- Download File: AdamsCornfordMartinMcCorquodale.pdf (pdf: 1.2 MB)
John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ v1.0 Programmer’s Guide, Revision 2019.9.0", Lawrence Berkeley National Laboratory Tech Report, September 2019, LBNL 2001236, doi: 10.25344/S4V30R
UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ v1.0 Specification, Revision 2019.9.0", Lawrence Berkeley National Laboratory Tech Report, September 14, 2019, LBNL 2001237, doi: 10.25344/S4ZW2C
UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, that enable the programmer to express ownership information for improving locality, one-sided communication, both put/get and RPC, futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.
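As a concrete illustration of the futures and continuations mentioned above, the sketch below (illustrative only, not code from the specification; scale_remote is a hypothetical helper, and src, dst, and factor are assumed to come from elsewhere, e.g. pointers allocated with upcxx::new_ and exchanged via broadcast) chains an asynchronous read, a local computation, and an asynchronous write into a small DAG using .then().

```cpp
#include <upcxx/upcxx.hpp>

// Chain: async rget -> transform the value when it arrives -> async rput.
// The returned future represents completion of the entire chain.
upcxx::future<> scale_remote(upcxx::global_ptr<double> src,
                             upcxx::global_ptr<double> dst,
                             double factor) {
  return upcxx::rget(src)                                      // one-sided read
      .then([factor](double x) { return factor * x; })         // continuation on arrival
      .then([dst](double y) { return upcxx::rput(y, dst); });  // one-sided write
}
```

The caller can block on the returned future with .wait() or attach further continuations to it, overlapping the communication latency with other useful work in the meantime.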
Weiqun Zhang, Ann Almgren, Vince Beckner, John Bell, Johannes Blashke, Cy Chan, Marcus Day, Brian Friesen, Kevin Gott, Daniel Graves, Max P. Katz, Andrew Myers, Tan Nguyen, Andrew Nonaka, Michele Rosso, Samuel Williams, Michael Zingale, "AMReX: a framework for block-structured adaptive mesh refinement", Journal of Open Source Software, May 2019, doi: 10.21105/joss.01370
Boris Lo, Phillip Colella, "An Adaptive Local Discrete Convolution Method for the Numerical Solution of Maxwell's Equations", Communications in Applied Mathematics and Computational Science, April 26, 2019, 14:105-119, doi: 10.2140/camcos.2019.14.105
D.F. Martin, H.S. Johansen, P.O. Schwartz, E.G. Ng, "Improved Discretization of Grounding Lines and Calving Fronts using an Embedded-Boundary Approach in BISICLES", European Geosciences Union General Assembly, April 10, 2019,
- Download File: Martin-EGU2019-final.pdf (pdf: 1.2 MB)
Sergi Molins, David Trebotich, Bhavna Arora, Carl Steefel, Hang Deng, "Multi-scale Model of Reactive Transport in Fractured Media: Diffusion Limitations on Rates", Transport in Porous Media, March 20, 2019, 128:701-721, doi: 10.1007/s11242-019-01266-2
- Download File: Molins2019-Article-Multi-scaleModelOfReactiveTran.pdf (pdf: 3.2 MB)
John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Programmer's Guide, v1.0-2019.3.0", Lawrence Berkeley National Laboratory Tech Report, March 2019, LBNL 2001191, doi: 10.25344/S4F301
UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Specification v1.0, Draft 10", Lawrence Berkeley National Laboratory Tech Report, March 15, 2019, LBNL 2001192, doi: 10.25344/S4JS30
UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, that enable the programmer to express ownership information for improving locality, one-sided communication, both put/get and RPC, futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.
Daniel Martin, Modeling Antarctic Ice Sheet Dynamics using Adaptive Mesh Refinement, 2019 SIAM Conference on Computational Science and Engineering, February 26, 2019,
- Download File: Martin-CSE19-final.pdf (pdf: 3.6 MB)
Scott B. Baden, Paul H. Hargrove, Hadia Ahmed, John Bachan, Dan Bonachea, Steve Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "Pagoda: Lightweight Communications and Global Address Space Support for Exascale Applications - UPC++ (ECP'19)", Poster at Exascale Computing Project (ECP) Annual Meeting 2019, January 2019,
Daniel F. Martin, Stephen L. Cornford, Antony J. Payne, "Millennial‐scale Vulnerability of the Antarctic Ice Sheet to Regional Ice Shelf Collapse", Geophysical Research Letters, January 9, 2019, doi: 10.1029/2018gl081229
Abstract:
The Antarctic Ice Sheet (AIS) remains the largest uncertainty in projections of future sea level rise. A likely climate‐driven vulnerability of the AIS is thinning of floating ice shelves resulting from surface‐melt‐driven hydrofracture or incursion of relatively warm water into subshelf ocean cavities. The resulting melting, weakening, and potential ice‐shelf collapse reduces shelf buttressing effects. Upstream ice flow accelerates, causing thinning, grounding‐line retreat, and potential ice sheet collapse. While high‐resolution projections have been performed for localized Antarctic regions, full‐continent simulations have typically been limited to low‐resolution models. Here we quantify the vulnerability of the entire present‐day AIS to regional ice‐shelf collapse on millennial timescales treating relevant ice flow dynamics at the necessary ∼1km resolution. Collapse of any of the ice shelves dynamically connected to the West Antarctic Ice Sheet (WAIS) is sufficient to trigger ice sheet collapse in marine‐grounded portions of the WAIS. Vulnerability elsewhere appears limited to localized responses.
Plain Language Summary:
The biggest uncertainty in near‐future sea level rise (SLR) comes from the Antarctic Ice Sheet. Antarctic ice flows in relatively fast‐moving ice streams. At the ocean, ice flows into enormous floating ice shelves which push back on their feeder ice streams, buttressing them and slowing their flow. Melting and loss of ice shelves due to climate changes can result in faster‐flowing, thinning and retreating ice leading to accelerated rates of global sea level rise. To learn where Antarctica is vulnerable to ice‐shelf loss, we divided it into 14 sectors, applied extreme melting to each sector's floating ice shelves in turn, then ran our ice flow model 1000 years into the future for each case. We found three levels of vulnerability. The greatest vulnerability came from attacking any of the three ice shelves connected to West Antarctica, where much of the ice sits on bedrock lying below sea level. Those dramatic responses contributed around 2m of sea level rise. The second level came from four other sectors, each with a contribution between 0.5‐1m. The remaining sectors produced little to no contribution. We examined combinations of sectors, determining that sectors behave independently of each other for at least a century.
2018
Dan Martin, Brent Minchew, Stephen Price, Esmond Ng, Modeling Marine Ice Cliff Instability: Higher resolution leads to lower impact, AGU Fall Meeting, December 12, 2018,
- Download File: Martin-AGU-2018-1.pdf (pdf: 2.8 MB)
Tuowen Zhao, Samuel Williams, Mary Hall, Hans Johansen, "Delivering Performance Portable Stencil Computations on CPUs and GPUs Using Bricks", International Workshop on Performance, Portability and Productivity in HPC (P3HPC), November 2018,
- Download File: p3hpc-bricks-final.pdf (pdf: 1.3 MB)
G.V. Vogman, U. Shumlak, P. Colella, "Conservative fourth-order finite-volume Vlasov–Poisson solver for axisymmetric plasmas in cylindrical (r,vr,vθ) phase space coordinates", Journal of Computational Physics, November 15, 2018, 373:877 - 899, doi: 10.1016/j.jcp.2018.07.029
Scott B. Baden, Paul H. Hargrove, Hadia Ahmed, John Bachan, Dan Bonachea, Steve Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ and GASNet-EX: PGAS Support for Exascale Applications and Runtimes", The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC'18) Research Poster, November 2018,
Lawrence Berkeley National Lab is developing a programming system to support HPC application development using the Partitioned Global Address Space (PGAS) model. This work is driven by the emerging need for adaptive, lightweight communication in irregular applications at exascale. We present an overview of UPC++ and GASNet-EX, including examples and performance results.
GASNet-EX is a portable, high-performance communication library, leveraging hardware support to efficiently implement Active Messages and Remote Memory Access (RMA). UPC++ provides higher-level abstractions appropriate for PGAS programming such as: one-sided communication (RMA), remote procedure call, locality-aware APIs for user-defined distributed objects, and robust support for asynchronous execution to hide latency. Both libraries have been redesigned relative to their predecessors to meet the needs of exascale computing. While both libraries continue to evolve, the system already demonstrates improvements in microbenchmarks and application proxies.
Chris Kavouklis, Phillip Colella, "Computation of volume potentials on structured grids with the method of local corrections", Communications in Applied Mathematics and Computational Science, October 31, 2018, 14:1-32, doi: 10.2140/camcos.2019.14.1
Hang Deng, Sergi Molins, David Trebotich, Carl Steefel, Donald DePaolo, "Pore-scale numerical investigation of the impacts of surface roughness: Up-scaling of reaction rates in rough fractures", Geochimica et Cosmochimica Acta, October 15, 2018, 239:374-389, doi: 10.1016/j.gca.2018.08.005
John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Programmer's Guide, v1.0-2018.9.0", Lawrence Berkeley National Laboratory Tech Report, September 2018, LBNL 2001180, doi: 10.25344/S49G6V
UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Specification v1.0, Draft 8", Lawrence Berkeley National Laboratory Tech Report, September 26, 2018, LBNL 2001179, doi: 10.25344/S45P4X
UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, that enable the programmer to express ownership information for improving locality, one-sided communication, both put/get and RPC, futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.
Dan Martin, Ice sheet model-dependence of persistent ice-cliff formation, European Geosciences Union General Assembly 2018, April 11, 2018,
- Download File: Martin-EGU-2018-final.pdf (pdf: 2.8 MB)
John Bachan, Scott Baden, Dan Bonachea, Paul H. Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Bryce Lelbach, Brian Van Straalen, "UPC++ Specification v1.0, Draft 6", Lawrence Berkeley National Laboratory Tech Report, March 26, 2018, LBNL 2001135, doi: 10.2172/1430689
UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, that enable the programmer to express ownership information for improving locality, one-sided communication, both put/get and RPC, futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.
John Bachan, Scott Baden, Dan Bonachea, Paul H. Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian Van Straalen, "UPC++ Programmer’s Guide, v1.0-2018.3.0", Lawrence Berkeley National Laboratory Tech Report, March 2018, LBNL 2001136, doi: 10.2172/1430693
UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
Blake Barker, Rose Nguyen, Björn Sandsted, Nathaniel Ventura, Colin Wahl, "Computing Evans functions numerically via boundary-value problems", Physica D: Nonlinear Phenomena, March 15, 2018, 367:1-10, doi: 10.1016/j.physd.2017.12.002
Tuowen Zhao, Mary Hall, Protonu Basu, Samuel Williams, Hans Johansen, "SIMD code generation for stencils on brick decompositions", Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), February 2018,
M. S. Waibel, C. L. Hulbe, C. S. Jackson, D. F. Martin, "Rate of Mass Loss Across the Instability Threshold for Thwaites Glacier Determines Rate of Mass Loss for Entire Basin", Geophysical Research Letters, February 19, 2018, 45:809-816, doi: 10.1002/2017GL076470
Daniel F Martin, Xylar Asay-Davis, Jan De Rydt, "Sensitivity of Ice-Ocean coupling to interactions with subglacial hydrology", AGU 2018 Ocean Sciences Meeting, February 14, 2018,
- Download File: Martin-OS2018.pdf (pdf: 1.6 MB)
John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ and GASNet: PGAS Support for Exascale Apps and Runtimes (ECP'18)", Poster at Exascale Computing Project (ECP) Annual Meeting 2018, February 2018,
2017
John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian Van Straalen, "UPC++: a PGAS C++ Library", The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC'17) Research Poster, November 2017,
John Bachan, Dan Bonachea, Paul H Hargrove, Steve Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, Scott B Baden, "The UPC++ PGAS library for Exascale Computing", Proceedings of the Second Annual PGAS Applications Workshop (PAW17), November 13, 2017, doi: 10.1145/3144779.3169108
We describe UPC++ V1.0, a C++11 library that supports APGAS programming. UPC++ targets distributed data structures where communication is irregular or fine-grained. The key abstractions are global pointers, asynchronous programming via RPC, and futures. Global pointers incorporate ownership information useful in optimizing for locality. Futures capture data readiness state, are useful for scheduling and also enable the programmer to chain operations to execute asynchronously as high-latency dependencies become satisfied, via continuations. The interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and closely resemble those used in modern C++. Communication in UPC++ runs at close to hardware speeds by utilizing the low-overhead GASNet-EX communication library.
B. Van Straalen, D. Trebotich, A. Ovsyannikov and D.T. Graves, "Scalable Structured Adaptive Mesh Refinement with Complex Geometry", Exascale Scientific Applications: Programming Approaches for Scalability, Performance, and Portability, edited by Straatsma, T., Antypas, K., Williams, T., (Chapman and Hall/CRC: November 9, 2017)
John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Programmer’s Guide, v1.0-2017.9", Lawrence Berkeley National Laboratory Tech Report, September 2017, LBNL 2001065, doi: 10.2172/1398522
UPC++ is a C++11 library that provides Asynchronous Partitioned Global Address Space (APGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The APGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, APGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
John Bachan, Scott Baden, Dan Bonachea, Paul H. Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Bryce Lelbach, Brian Van Straalen, "UPC++ Specification v1.0, Draft 4", Lawrence Berkeley National Laboratory Tech Report, September 27, 2017, LBNL 2001066, doi: 10.2172/1398521
UPC++ is a C++11 library providing classes and functions that support Asynchronous Partitioned Global Address Space (APGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, that enable the programmer to express ownership information for improving locality, one-sided communication, both put/get and RPC, futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.
Daniel Martin, Stephen Cornford, Antony Payne, Millennial-Scale Vulnerability of the Antarctic Ice Sheet to localized subshelf warm-water forcing, International Symposium on Polar Ice, Polar Climate, Polar Change, August 18, 2017,
- Download File: Martin-IGS-2017.pdf (pdf: 6.9 MB)
Nishant Nangia, Hans Johansen, Neelesh A. Patankar, Amneet Pal Singh Bhalla, "A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies", Journal of Computational Physics, June 29, 2017, doi: 10.1016/j.jcp.2017.06.047
Bryce Adelstein Lelbach, Hans Johansen, Samuel Williams, "Simultaneously Solving Swarms of Small Sparse Systems on SIMD Silicon", Parallel and Distributed Scientific and Engineering Computing (PDSEC), June 2017,
Dharshi Devendran, Daniel T. Graves, Hans Johansen, Terry Ligocki, "A Fourth Order Cartesian Grid Embedded Boundary Method for Poisson's Equation", Communications in Applied Mathematics and Computational Science, edited by Silvio Levy, May 12, 2017, 12:51-79, doi: 10.2140/camcos.2017.12.51
- Download File: poisson-eb-4th-order.pdf (pdf: 1.1 MB)
Christopher Chaplin, Phillip Colella, "A single-stage flux-corrected transport algorithm for high-order finite-volume methods", Communications in Applied Mathematics and Computational Science, May 8, 2017, 12:1-24, doi: 10.2140/camcos.2017.12.1
Sergi Molins, David Trebotich, Gregory H. Miller, Carl I. Steefel, "Mineralogical and transport controls on the evolution of porous media texture using direct numerical simulation", Water Resources Research, April 7, 2017, doi: 10.1002/2016WR020323
Protonu Basu, Samuel Williams, Brian Van Straalen, Leonid Oliker, Phillip Colella, Mary Hall, "Compiler-Based Code Generation and Autotuning for Geometric Multigrid on GPU-Accelerated Supercomputers", Parallel Computing (PARCO), April 2017, doi: 10.1016/j.parco.2017.04.002
Saverio E Spagnolie, Colin Wahl, Joseph Lukasik, Jean-Luc Thiffeault, "Microorganism billiards", Physica D: Nonlinear Phenomena, February 15, 2017, 341:33-44, doi: 10.1016/j.physd.2016.09.010
John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ and GASNet: PGAS Support for Exascale Apps and Runtimes (ECP'17)", Poster at Exascale Computing Project (ECP) Annual Meeting 2017, January 2, 2017,
Esmond Ng, Katherine J. Evans, Peter Caldwell, Forrest M. Hoffman, Charles Jackson, Kerstin Van Dam, Ruby Leung, Daniel F. Martin, George Ostrouchov, Raymond Tuminaro, Paul Ullrich, Stefan Wild, Samuel Williams, "Advances in Cross-Cutting Ideas for Computational Climate Science (AXICCS)", January 2017, doi: 10.2172/1341564
- Download File: AXICCS-Report.pdf (pdf: 4 MB)
2016
Jared O. Ferguson, Christiane Jablonowski, Hans Johansen, Peter McCorquodale, Phillip Colella, Paul A. Ullrich, "Analyzing the adaptive mesh refinement (AMR) characteristics of a high-order 2D cubed-sphere shallow-water model", Mon. Wea. Rev., November 9, 2016, 144:4641–4666, doi: 10.1175/MWR-D-16-0197.1
S.L. Cornford, D.F. Martin, V. Lee, A.J. Payne, E.G. Ng, "Adaptive mesh refinement versus subgrid friction interpolation in simulations of Antarctic ice dynamics", Annals of Glaciology, September 2016, 57(73), doi: 10.1017/aog.2016.13
Boris Lo, Victor Minden, Phillip Colella, "A real-space Green’s function method for the numerical solution of Maxwell’s equations", Communications in Applied Mathematics and Computational Science, August 11, 2016, 11.2:143-170, doi: 10.2140/camcos.2016.11.143
Anshu Dubey, Hajime Fujita, Daniel T. Graves, Andrew Chien, Devesh Tiwari, "Granularity and the Cost of Error Recovery in Resilient AMR Scientific Applications", Supercomputing 2016, August 10, 2016,
Xylar S. Asay-Davis, Stephen L. Cornford, Gaël Durand, Benjamin K. Galton-Fenzi, Rupert M. Gladstone, G. Hilmar Gudmundsson, Tore Hattermann, David M. Holland, Denise Holland, Paul R. Holland, Daniel F. Martin, Pierre Mathiot, Frank Pattyn, Hélène Seroussi, "Experimental design for three interrelated marine ice sheet and ocean model intercomparison projects: MISMIP v. 3 (MISMIP +), ISOMIP v. 2 (ISOMIP +) and MISOMIP v. 1 (MISOMIP1)", Geoscientific Model Development, July 2016, 9(7), doi: 10.5194/gmd-9-2471-2016
Dharshi Devendran, Suren Byna, Bin Dong, Brian van Straalen, Hans Johansen, Noel Keen, and Nagiza Samatova, "Collective I/O Optimizations for Adaptive Mesh Refinement Data Writes on Lustre File System", Cray User Group (CUG) 2016, May 10, 2016,
Samuel Williams, Mark Adams, Brian Van Straalen, Performance Portability in Hybrid and Heterogeneous Multigrid Solvers, Copper Mountain, March 2016,
- Download File: CU16SWWilliams.pptx (pptx: 1 MB)
Andrew Myers, Phillip Colella, Brian Van Straalen, "A 4th-Order Particle-in-Cell Method with Phase-Space Remapping for the Vlasov-Poisson Equation", submitted to SISC, February 1, 2016,
Andrew Myers, Phillip Colella, Brian Van Straalen, "The Convergence of Particle-in-Cell Schemes for Cosmological Dark Matter Simulations", The Astrophysical Journal, Volume 816, Issue 2, article id. 56, 2016,
Xiaocheng Zou, David A Boyuka II, Dhara Desai, Daniel F Martin, Suren Byna, Kesheng Wu, "AMR-aware in situ indexing and scalable querying", Proceedings of the 24th High Performance Computing Symposium, January 1, 2016, 26,
Bin Dong, Suren Byna, Kesheng Wu, Hans Johansen, Jeffrey N Johnson, Noel Keen, et al., "Data elevator: Low-contention data movement in hierarchical storage system", 2016 IEEE 23rd International Conference on High Performance Computing (HiPC), January 1, 2016, 152-161,
- Download File: 201612-DataElevator-HiPC2016-Bin-Byna.pdf (pdf: 765 KB)
Andrey Ovsyannikov, Melissa Romanus, Brian Van Straalen, Gunther H. Weber, David Trebotich, "Scientific Workflows at DataWarp-Speed: Accelerated Data-Intensive Science using NERSC's Burst Buffer", Proceedings of the 1st Joint International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems, IEEE Press, 2016, 1-6, doi: 10.1109/PDSW-DISCS.2016.005
2015
Stephen M. Guzik, Xinfeng Gao, Landon D. Owen, Peter McCorquodale, Phillip Colella, "A freestream-preserving fourth-order finite-volume method in mapped coordinates with adaptive-mesh refinement", Computers & Fluids, December 21, 2015, 123:202–217, doi: 10.1016/j.compfluid.2015.10.001
S. L. Cornford, D. F. Martin, A. J. Payne, E. G. Ng, A. M. Le Brocq, R. M. Gladstone, T. L. Edwards, S. R. Shannon, C. Agosta, M. R. van den Broeke, H. H. Hellmer, G. Krinner, S. R. M. Ligtenberg, R. Timmermann, D. G. Vaughan, "Century-scale simulations of the response of the West Antarctic Ice Sheet to a warming climate", The Cryosphere, August 18, 2015, doi: 10.5194/tc-9-1579-2015
Anshu Dubey, Daniel T. Graves, "A Design Proposal for a Next Generation Scientific Software Framework", EuroPar 2015, July 31, 2015,
- Download File: framework.pdf (pdf: 774 KB)
M. Dorf, M. Dorr, J. Hittinger, T. Rognlien, P. Colella, P. Schwartz, R. Cohen, W. Lee, "Modeling Edge Plasma with the Continuum Kinetic Code COGENT", 2015,
- Download File: DorfWhitePaper.pdf (pdf: 306 KB)
P. McCorquodale, P.A. Ullrich, H. Johansen, P. Colella, "An adaptive multiblock high-order finite-volume method for solving the shallow-water equations on the sphere", Comm. App. Math. and Comp. Sci., 2015, 10:121-162, doi: 10.2140/camcos.2015.10.121
Protonu Basu, Samuel Williams, Brian Van Straalen, Mary Hall, Leonid Oliker, Phillip Colella, "Compiler-Directed Transformation for Higher-Order Stencils", International Parallel and Distributed Processing Symposium (IPDPS), May 2015,
- Download File: ipdps15CHiLL.pdf (pdf: 1.8 MB)
P. McCorquodale, M.R. Dorr, J.A.F. Hittinger, P. Colella, "High-order finite-volume methods for hyperbolic conservation laws on mapped multiblock grids", J. Comput. Phys., May 1, 2015, 288:181-195, doi: 10.1016/j.jcp.2015.01.006
Xiaocheng Zou, Kesheng Wu, David A. Boyuka, Daniel F. Martin, Suren Byna, Houjun, Kushal Bansal, Terry J. Ligocki, Hans Johansen, and Nagiza F. Samatova, "Parallel In Situ Detection of Connected Components in Adaptive Mesh Refinement Data", Proceedings of the Cluster, Cloud and Grid Computing (CCGrid) 2015, 2015,
Daniel Martin, Xylar Asay-Davis, Stephen Cornford, Stephen Price, Esmond Ng, William Collins, A Tale of Two Forcings: Present-Day Coupled Antarctic Ice-sheet/Southern Ocean dynamics using the POPSICLES model, European Geosciences Union General Assembly 2015, April 16, 2015,
- Download File: Martin-EGU-2015.pdf (pdf: 5.3 MB)
D. Devendran, D. T. Graves, H. Johansen, "A higher-order finite-volume discretization method for Poisson's equation in cut cell geometries", submitted to SIAM Journal on Scientific Computing (preprint on arxiv), 2015,
Peter Schwartz, Julie Percelay, Terry J. Ligocki, Hans Johansen, Daniel T. Graves, Dharshi Devendran, Phillip Colella, Eli Ateljevich, "High-accuracy embedded boundary grid generation using the divergence theorem", Communications in Applied Mathematics and Computational Science, March 31, 2015, 10(1):83-96, doi: 10.2140/camcos.2015.10.83
Daniel Martin, Peter O. Schwartz, Esmond G. Ng, Improving Grounding Line Discretization using an Embedded-Boundary Approach in BISICLES, 2015 SIAM Conference on Computational Science and Engineering, March 14, 2015,
- Download File: Martin-CSE-3-15.pdf (pdf: 3.5 MB)
Daniel F. Martin, Response of the Antarctic Ice Sheet to Ocean Forcing using the POPSICLES Coupled Ice sheet-ocean model, Joint Land Ice Working Group/Polar Climate Working Group Meeting, Boulder, CO, February 3, 2015,
- Download File: Martin-LIWG-2015-final.pdf (pdf: 4.3 MB)
David Trebotich, Daniel T. Graves, "An Adaptive Finite Volume Method for the Incompressible Navier-Stokes Equations in Complex Geometries", Communications in Applied Mathematics and Computational Science, January 15, 2015, 10-1:43-82, doi: 10.2140/camcos.2015.10.43
- Download File: camcos-v10-n1-p03-s3.pdf (pdf: 9.1 MB)
M. Adams, P. Colella, D. T. Graves, J.N. Johnson, N.D. Keen, T. J. Ligocki, D. F. Martin, P.W. McCorquodale, D. Modiano, P.O. Schwartz, T.D. Sternberg, B. Van Straalen, "Chombo Software Package for AMR Applications - Design Document", Lawrence Berkeley National Laboratory Technical Report LBNL-6616E, January 9, 2015,
- Download File: chomboDesign.pdf (pdf: 994 KB)
P. Colella, D. T. Graves, T. J. Ligocki, G.H. Miller, D. Modiano, P.O. Schwartz, B. Van Straalen, J. Pillod, D. Trebotich, M. Barad, "EBChombo Software Package for Cartesian Grid, Embedded Boundary Applications", Lawrence Berkeley National Laboratory Technical Report LBNL-6615E, January 9, 2015,
- Download File: ebmain.pdf (pdf: 681 KB)
A Chien, P Balaji, P Beckman, N Dun, A Fang, H Fujita, K Iskra, Z Rubenstein, Z Zheng, R Schreiber, others, "Versioned Distributed Arrays for Resilience in Scientific Applications: Global View Resilience", Journal of Computational Science, 2015,
2014
E.G. Ng, D.F. Martin, X.S. Asay-Davis, S.F. Price, W.D. Collins, "High-resolution coupled ice sheet-ocean modeling using the POPSICLES model", American Geophysical Union Fall Meeting, December 17, 2014,
- Download File: Ng-AGU2014.pdf (pdf: 815 KB)
D.F. Martin, X.S. Asay-Davis, S.F. Price, S.L. Cornford, M. Maltrud, E.G. Ng, W.D. Collins, "Response of the Antarctic ice sheet to ocean forcing using the POPSICLES coupled ice sheet-ocean model", American Geophysical Union Fall Meeting, December 17, 2014,
- Download File: Martin-AGU2014.pdf (pdf: 1000 KB)
J. Ferguson, C. Jablonowski, H. Johansen, R. English, P. McCorquodale, P. Colella, J. Benedict, W. Collins, J. Johnson, P. Ullrich, "Assessing Grid Refinement Strategies in the Chombo Adaptive Mesh Refinement Model", AGU Fall Meeting, San Francisco, CA, December 15, 2014,
David Trebotich, Mark F. Adams, Sergi Molins, Carl I. Steefel, Chaopeng Shen, "High-Resolution Simulation of Pore-Scale Reactive Transport Processes Associated with Carbon Sequestration", Computing in Science and Engineering, December 2014, 16:22-31, doi: 10.1109/MCSE.2014.77
- Download File: CISE-16-06-Trebotichappeared.pdf (pdf: 2.7 MB)
Yu Jung Lo, Samuel Williams, Brian Van Straalen, Terry J. Ligocki, Matthew J. Cordery, Leonid Oliker, Mary W. Hall, "Roofline Model Toolkit: A Practical Tool for Architectural and Program Analysis", Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS), November 2014, doi: 10.1007/978-3-319-17248-4_7
- Download File: PMBS14-Roofline.pdf (pdf: 340 KB)
G.V. Vogman, P. Colella, U. Shumlak, "Dory–Guest–Harris instability as a benchmark for continuum kinetic Vlasov–Poisson simulations of magnetized plasmas", Journal of Computational Physics, November 15, 2014, 277:101 - 120, doi: 10.1016/j.jcp.2014.08.014
Dharshi Devendran, Daniel T. Graves, Hans Johansen, "A Hybrid Multigrid Algorithm for Poisson's equation using an Adaptive, Fourth Order Treatment of Cut Cells", LBNL Report Number: LBNL-1004329, November 11, 2014,
- Download File: multigrid.pdf (pdf: 221 KB)
R.H. Cohen, M. Dorf, M. Dorr, D.D. Ryutov, P. Schwartz, Plans for Extending COGENT to Model Snowflake Divertors, APS-DPP Meeting, New Orleans LA, October 27, 2014,
- Download File: SnowflakeCogentDPP2014.pdf (pdf: 1.5 MB)
Protonu Basu, Samuel Williams, Brian Van Straalen, Leonid Oliker, Mary Hall, "Converting Stencils to Accumulations for Communication-Avoiding Optimization in Geometric Multigrid", Workshop on Stencil Computations (WOSC), October 2014,
- Download File: wosc14chill.pdf (pdf: 973 KB)
R.H. Cohen, M. Dorf, M. Dorr, D.D. Ryutov, P. Schwartz, "Plans for Extending COGENT to Model Snowflake Divertors", ESL Team Meeting, GA, September 30, 2014,
- Download File: SnowflakeCogent.pdf (pdf: 926 KB)
Daniel Martin, Xylar Asay-Davis, Stephen Price, Stephen Cornford, Esmond Ng, William Collins, Response of the Antarctic ice sheet to ocean forcing using the POPSICLES coupled ice sheet-ocean model, Twenty-first Annual WAIS Workshop, September 25, 2014,
Gunther H. Weber, Hans Johansen, Daniel T. Graves, Terry J. Ligocki, "Simulating Urban Environments for Energy Analysis", Proceedings Visualization in Environmental Sciences (EnvirVis), 2014, LBNL 6652E,
Sergi Molins, David Trebotich, Li Yang, Jonathan B. Ajo-Franklin, Terry J. Ligocki, Chaopeng Shen and Carl Steefel, "Pore-Scale Controls on Calcite Dissolution Rates from Flow-through Laboratory and Numerical Experiments", Environmental Science and Technology, May 27, 2014, 48:7453-7460, doi: 10.1021/es5013438
- Download File: MolinsETALappearedonline2014-06-09.pdf (pdf: 2.4 MB)
Massively-Parallel Simulations Verify Carbon Dioxide Sequestration Experiments, FY15 DOE ASCR Budget Request to Congress, May 1, 2014,
Mark F. Adams, Jed Brown, John Shalf, Brian Van Straalen, Erich Strohmaier, Samuel Williams, "HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems", LBNL Technical Report, 2014, LBNL 6630E,
- Download File: hpgmg.pdf (pdf: 183 KB)
S.M. Guzik, T.H. Weisgraber, P. Colella, B.J. Alder, "Interpolation Methods and the Accuracy of Lattice-Boltzmann Mesh Refinement", Journal of Computational Physics, February 15, 2014, 259:461-487, doi: 10.1016/j.jcp.2013.11.037
Anshu Dubey, Ann Almgren, John Bell, Martin Berzins, Steve Brandt, Greg Bryan, Phillip Colella, Daniel Graves, Michael Lijewski, Frank Löffler, others, "A survey of high level frameworks in block-structured adaptive mesh refinement packages", Journal of Parallel and Distributed Computing, 2014, 74:3217--3227, doi: 10.1016/j.jpdc.2014.07.001
Samuel Williams, Mike Lijewski, Ann Almgren, Brian Van Straalen, Erin Carson, Nicholas Knight, James Demmel, "s-step Krylov subspace methods as bottom solvers for geometric multigrid", Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, January 2014, 1149--1158, doi: 10.1109/IPDPS.2014.119
- Download File: ipdps14cabicgstabfinal.pdf (pdf: 943 KB)
- Download File: ipdps14CABiCGStabtalk.pdf (pdf: 944 KB)
2013
Daniel T. Graves, Phillip Colella, David Modiano, Jeffrey Johnson, Bjorn Sjogreen, Xinfeng Gao, "A Cartesian Grid Embedded Boundary Method for the Compressible Navier Stokes Equations", Communications in Applied Mathematics and Computational Science, December 23, 2013,
- Download File: gravesetal.pdf (pdf: 964 KB)
In this paper, we present an unsplit method for the time-dependent compressible Navier-Stokes equations in two and three dimensions. We use a conservative, second-order Godunov algorithm. We use a Cartesian grid, embedded boundary method to resolve complex boundaries. We solve for viscous and conductive terms with a second-order semi-implicit algorithm. We demonstrate second-order accuracy in solutions of smooth problems in smooth geometries and demonstrate robust behavior for strongly discontinuous initial conditions in complex geometries.
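The time discretization described in this abstract can be summarized schematically as follows; the notation and the Crank--Nicolson-style form below are illustrative choices of this summary, not necessarily the paper's exact update:
\[
U^{n+1} = U^{n} - \Delta t\,\nabla\!\cdot\vec{F}^{\,n+1/2}_{\mathrm{hyp}} + \frac{\Delta t}{2}\left(\mathcal{L}\,U^{n} + \mathcal{L}\,U^{n+1}\right),
\]
where $\vec{F}^{\,n+1/2}_{\mathrm{hyp}}$ denotes the time-centered hyperbolic fluxes produced by the unsplit, second-order Godunov predictor, and $\mathcal{L}$ collects the viscous and conductive terms advanced with the second-order semi-implicit solve; near the boundary the divergences are evaluated with embedded boundary finite-volume stencils.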
Protonu Basu, Anand Venkat, Mary Hall, Samuel Williams, Brian Van Straalen, Leonid Oliker, "Compiler generation and autotuning of communication-avoiding operators for geometric multigrid", 20th International Conference on High Performance Computing (HiPC), December 2013, 452--461,
- Download File: hipc13chill.pdf (pdf: 989 KB)
C. Steefel, S. Molins, D. Trebotich, "Pore scale processes associated with subsurface CO2 injection and sequestration", Reviews in Mineralogy and Geochemistry, November 1, 2013,
- Download File: SteefelMolinsTrebotich2013.pdf (pdf: 5.9 MB)
P. Basu, A. Venkat, M. Hall, S. Williams, B. Van Straalen, L. Oliker, "Compiler Generation and Autotuning of Communication-Avoiding Operators for Geometric Multigrid", Workshop on Stencil Computations (WOSC), 2013,
Frank Pattyn, Laura Perichon, Gaël Durand, Lionel Favier, Olivier Gagliardini, Richard C.A. Hindmarsh, Thomas Zwinger, Torsten Albrecht, Stephen Cornford, David Docquier, Johannes J. Fürst, Daniel Goldberg, G. Hilmar Gudmundsson, Angelika Humbert, Moritz Hütten, Philippe Huybrechts, Guillaume Jouvet, Thomas Kleiner, Eric Larour, Daniel Martin, Mathieu Morlighem, Anthony J. Payne, David Pollard, Martin Rückamp, Oleg Rybak, Hélène Seroussi, Malte Thoma, Nina Wilkens, "Grounding-line migration in plan-view marine ice-sheet models: results of the ice2sea MISMIP3d intercomparison", Journal of Glaciology, 2013, 59:410-422, doi: 10.3189/2013JoG12J129
Christopher D. Krieger, Michelle Mills Strout, Catherine Olschanowsky, Andrew Stone, Stephen Guzik, Xinfeng Gao, Carlo Bertolli, Paul H.J. Kelly, Gihan Mudalige, Brian Van Straalen, Sam Williams, "Loop chaining: A programming abstraction for balancing locality and parallelism", Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), 2013 IEEE 27th International, May 2013, 375--384, doi: 10.1109/IPDPSW.2013.68
S.L. Cornford, D.F. Martin, D.T. Graves, D.F. Ranken, A.M. Le Brocq, R.M. Gladstone, A.J. Payne, E.G. Ng, W.H. Lipscomb, "Adaptive mesh, finite volume modeling of marine ice sheets", Journal of Computational Physics, 232(1):529-549, 2013,
- Download File: cornfordmartinJCP2012.pdf (pdf: 1 MB)
A Dubey, B Van Straalen, "Experiences from software engineering of large scale AMR multiphysics code frameworks", arXiv preprint arXiv:1309.1781, January 1, 2013, doi: 10.5334/jors.am
2012
Samuel Williams, Dhiraj D. Kalamkar, Amik Singh, Anand M. Deshpande, Brian Van Straalen, Mikhail Smelyanskiy, Ann Almgren, Pradeep Dubey, John Shalf, Leonid Oliker, "Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark", December 2012, LBNL 6676E,
- Download File: miniGMGLBNL-6676E.pdf (pdf: 906 KB)
S. Williams, D. Kalamkar, A. Singh, A. Deshpande, B. Van Straalen, M. Smelyanskiy, A. Almgren, P. Dubey, J. Shalf, L. Oliker, "Optimization of Geometric Multigrid for Emerging Multi- and Manycore Processors", Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis (SC), November 2012, doi: 10.1109/SC.2012.85
- Download File: sc12-mg.pdf (pdf: 808 KB)
- Download File: sc12mgtalk.pdf (pdf: 1.9 MB)
B. Kallemov, G. H. Miller, S. Mitran and D. Trebotich, "Calculation of Viscoelastic Bead-Rod Flow Mediated by a Homogenized Kinetic Scale with Holonomic Constraints", Molecular Simulation, 2012, doi: 10.1080/08927022.2011.654206
- Download File: molsim2012.pdf (pdf: 204 KB)
G. H. Miller and D. Trebotich, "An Embedded Boundary Method for the Navier-Stokes Equations on a Time-Dependent Domain", Communications in Applied Mathematics and Computational Science, 7(1):1-31, 2012,
- Download File: camcos-v7-n1-p01-p.pdf (pdf: 1.2 MB)
S. Molins, D. Trebotich, C. I. Steefel and C. Shen, "An Investigation of the Effect of Pore Scale Flow on Average Geochemical Reaction Rates Using Direct Numerical Simulation", Water Resour. Res., 2012, 48(3):W03527, doi: 10.1029/2011WR011404
- Download File: wrcr13375.pdf (pdf: 1.4 MB)
Brian Van Straalen, David Trebotich, Terry Ligocki, Daniel T. Graves, Phillip Colella, Michael Barad, "An Adaptive Cartesian Grid Embedded Boundary Method for the Incompressible Navier Stokes Equations in Complex Geometry", LBNL Report Number: LBNL-1003767, 2012,
- Download File: paper5.pdf (pdf: 360 KB)
We present a second-order accurate projection method to solve the incompressible Navier-Stokes equations on irregular domains in two and three dimensions. We use a finite-volume discretization obtained from intersecting the irregular domain boundary with a Cartesian grid. We address the small-cell stability problem associated with such methods by hybridizing a conservative discretization of the advective terms with a stable, nonconservative discretization at irregular control volumes, and redistributing the difference to nearby cells. Our projection is based upon a finite-volume discretization of Poisson's equation. We use a second-order, $L^\infty$-stable algorithm to advance in time. Block structured local refinement is applied in space. The resulting method is second-order accurate in $L^1$ for smooth problems. We demonstrate the method on benchmark problems for flow past a cylinder in 2D and a sphere in 3D as well as flows in 3D geometries obtained from image data.
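For reference, the hybrid advective discretization and redistribution step mentioned in this abstract can be sketched in generic embedded-boundary notation (the symbols below are this summary's, not the report's):
\[
D^{H}(F)_{\mathbf i} \;=\; \kappa_{\mathbf i}\,D^{C}(F)_{\mathbf i} \;+\; \bigl(1-\kappa_{\mathbf i}\bigr)\,D^{NC}(F)_{\mathbf i},
\qquad
\delta M_{\mathbf i} \;=\; \kappa_{\mathbf i}\bigl(D^{C}(F)_{\mathbf i}-D^{H}(F)_{\mathbf i}\bigr),
\]
where $\kappa_{\mathbf i}$ is the volume fraction of cut cell $\mathbf i$, $D^{C}$ and $D^{NC}$ are the conservative and stable nonconservative divergences of the advective flux $F$, and the increment $\delta M_{\mathbf i}$ is redistributed to nearby cells so that the update remains conservative overall while avoiding the small-cell time-step restriction.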
S. Guzik, P. McCorquodale, P. Colella, "A Freestream-Preserving High-Order Finite-Volume Method for Mapped Grids with Adaptive-Mesh Refinement", 50th AIAA Aerospace Sciences Meeting Nashville, TN, 2012,
- Download File: AIAAASM2012Guzik1140057v2.pdf (pdf: 1.3 MB)
P.S. Li, D.F. Martin, R.I. Klein, and C.F. McKee, "A Stable, Accurate Methodology for High Mach Number, Strong Magnetic Field MHD Turbulence with Adaptive Mesh Refinement: Resolution and Refinement Studies", The Astrophysical Journal Supplement Series, 2012, doi: 10.1088/0067-0049/195/1/5
- Download File: LiMartinKleinMcKee.pdf (pdf: 1.5 MB)
A Mignone, C Zanni, P Tzeferacos, B van Straalen, P Colella, G Bodo, "The PLUTO code for adaptive mesh computations in astrophysical fluid dynamics", The Astrophysical Journal Supplement Series, Pages: 7, 2012,
2011
B. Wang, G.H. Miller, and P. Colella, "A Particle-in-Cell Method with Adaptive Phase-Space Remapping for Kinetic Plasmas", SIAM J. Sci. Comput, 2011,
Ushizima, D.M., Weber, G.H., Ajo-Franklin, J., Kim, Y., Macdowell, A., Morozov, D., Nico, P., Parkinson, D., Trebotich, D., Wan, J., and Bethel E.W., "Analysis and visualization for multiscale control of geologic CO2", Journal of Physics: Conference Series, Proceedings of SciDAC 2011, July 2011, LBNL Denver, CO, USA,
F. Miniati and D.F. Martin, "Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM", The Astrophysical Journal Supplement Series, 195(1):5, 2011,
Katherine Yelick, Susan Graham, Paul Hilfinger, Dan Bonachea, Jimmy Su, Amir Kamil, Kaushik Datta, Phillip Colella, Tong Wen, "Titanium", Encyclopedia of Parallel Computing, edited by David Padua, (Springer US: 2011) Pages: 2049--2055 doi: 10.1007/978-0-387-09766-4_516
Titanium is a parallel programming language designed for high-performance scientific computing. It is based on Java and uses a Single Program Multiple Data (SPMD) parallelism model with a Partitioned Global Address Space (PGAS).
B. Van Straalen, P. Colella, D. T. Graves, N. Keen, "Petascale Block-Structured AMR Applications Without Distributed Meta-data", Euro-Par 2011 Parallel Processing - 17th International Conference, Euro-Par 2011, August 29 - September 2, 2011, Proceedings, Part II. Lecture Notes in Computer Science 6853 Springer 2011, ISBN 978-3-642-23396-8, Bordeaux, France, 2011,
- Download File: EuroPar2011bvs.pdf (pdf: 400 KB)
P. McCorquodale and P. Colella, "A High-Order Finite-Volume Method for Conservation Laws on Locally Refined Grids", Communications in Applied Mathematics and Computational Science, Vol. 6 (2011), No. 1, 1-25, 2011,
P. Colella, M.R. Dorr, J.A.F. Hittinger, and D.F. Martin, "High-Order Finite-Volume Methods in Mapped Coordinates", Journal of Computational Physics, 230(8):2952-2976 (2011), 2011, doi: 10.1016/j.jcp.2010.12.044
- Download File: HOFiniteVolume2010.pdf (pdf: 1.1 MB)
Chaopeng Shen, David Trebotich, Sergi Molins, Daniel T. Graves, B. Van Straalen, T. Ligocki, C.I. Steefel, "High performance computations of subsurface reactive transport processes at the pore scale", Proceedings of SciDAC, 2011,
- Download File: SciDAC2011sim.pdf (pdf: 1.1 MB)
M. Christen, N. Keen, T. Ligocki, L. Oliker, J. Shalf, B. van Straalen, S. Williams, "Automatic Thread-Level Parallelization in the Chombo AMR Library", LBNL Technical Report, 2011, LBNL 5109E,
B. Kallemov, G. H. Miller, S. Mitran and D. Trebotich, "Multiscale Rheology: New Results for the Kinetic Scale", NSTI-Nanotech 2011, Vol. 2, pp. 575-578 (2011), 2011,
- Download File: Nanotech2011.pdf (pdf: 203 KB)
B. Kallemov, G. H. Miller and D. Trebotich, "A Higher-Order Accurate Fluid-Particle Algorithm for Polymer Flows", Molecular Simulation, 37(8):738-745 (2011), 2011,
- Download File: MolSimPaper2011.pdf (pdf: 175 KB)
Deines, E., Weber, G.H., Garth, C., Van Straalen, B., Borovikov, S., Martin, D.F., and Joy, K.I., "On the computation of integral curves in adaptive mesh refinement vector fields", Proceedings of Dagstuhl Seminar on Scientific Visualization 2009, Schloss Dagstuhl, 2011, 2:73-91, LBNL 4972E,
- Download File: 7.pdf (pdf: 799 KB)
2010
Q. Zhang, H. Johansen and P. Colella, "A Fourth-Order Accurate Finite-Volume Method with Structured Adaptive Mesh Refinement for Solving the Advection-Diffusion Equation", SIAM Journal on Scientific Computing, Vol. 34, No. 2 (2012), B179, doi: 10.1137/110820105, 2010,
- Download File: O4AdvDiff.pdf (pdf: 599 KB)
R.K. Crockett, P. Colella, and D.T. Graves, "A Cartesian Grid Embedded Boundary Method for Solving the Poisson and Heat Equations with Discontinuous Coefficients in Three Dimensions", Journal of Computational Physics, 230(7):2451-2469, 2010,
- Download File: YJCPH3372.pdf (pdf: 1008 KB)
R.H. Cohen, J. Compton, M. Dorr, J. Hittinger, W.M Nevins, T.D. Rognlien, Z.Q. Xu, P. Colella, and D. Martin, "Testing and Plans for the COGENT edge kinetic code", (abstract) submitted to Sherwood 2010, 2010,
- Download File: Sherwood2010Abstract.pdf (pdf: 32 KB)
E. Ateljevich, P. Colella, D.T. Graves, T.J. Ligocki, J. Percelay, P.O. Schwartz, Q. Shu, "CFD Modeling in the San Francisco Bay and Delta", 2009 Proceedings of the Fourth SIAM Conference on Mathematics for Industry (MI09), pp. 99-107, 2010,
- Download File: realmSIAMPaper.pdf (pdf: 316 KB)
Caroline Gatti-Bono, Phillip Colella and David Trebotich, "A Second-Order Accurate Conservative Front-Tracking Method in One Dimension", SIAM J. Sci. Comput., 31(6):4795-4813, 2010,
- Download File: BonoSISC.pdf (pdf: 414 KB)
Schwartz, P., and Colella P., "A Second-Order Accurate Method for Solving the Signed Distance Function Equation", Communications in Applied Mathematics and Computational Science, Vol. 5 (2010), No. 1, 81-97, 2010,
- Download File: sdfMain.pdf (pdf: 396 KB)
Prateek Sharma, Phillip Colella, and Daniel F. Martin, "Numerical Implementation of Streaming Down the Gradient: Application to Fluid Modeling of Cosmic Rays", SIAM Journal on Scientific Computing, Vol 32(6), 3564-3583, 2010,
- Download File: crstrsiamPS.pdf (pdf: 1.1 MB)
B. Kallemov, G. H. Miller and D. Trebotich, "Numerical Simulation of Polymer Flow in Microfluidic Devices", 009 Proceedings of the Fourth SIAM Conference on Mathematics for Industry (MI09) pp. 93-98, 2010,
- Download File: siammi09011kallemovb.pdf (pdf: 862 KB)
Gunther Weber, "Recent advances in VisIt: AMR streamlines and query-driven visualization", 2010,
- Download File: LBNL-3185E.pdf (pdf: 2.1 MB)
2009
P. Colella, M. Dorr, J. Hittinger, D.F. Martin, and P. McCorquodale, "High-Order Finite-Volume Adaptive Methods on Locally Rectangular Grids", 2009 J. Phys.: Conf. Ser. 180 012010, 2009,
- Download File: SciDAC2009.pdf (pdf: 594 KB)
A. Nonaka, D. Trebotich, G. H. Miller, D. T. Graves, and P. Colella, "A Higher-Order Upwind Method for Viscoelastic Flow", Comm. App. Math. and Comp. Sci., 4(1):57-83, 2009,
- Download File: nonakaetal.pdf (pdf: 709 KB)
P. Colella, M. Dorr, J. Hittinger, P.W. McCorquodale, and D.F. Martin, "High-Order Finite-Volume Methods on Locally-Structured Grids", Numerical Modeling of Space Plasma Flows: Astronum 2008 -- Proceedings of the 3rd International Conference, June 8-13, 2008, St John, U.S. Virgin Islands, 2009, pp. 207-216, 2009,
- Download File: Astronum2008MappedPaper.pdf (pdf: 1 MB)
B.V. Straalen, J. Shalf, T. Ligocki, N. Keen, and W. Yang, "Scalability Challenges for Massively Parallel AMR Applications", 23rd IEEE International Symposium on Parallel and Distributed Processing, 2009,
- Download File: ipdps09finalcertified.pdf (pdf: 366 KB)
Brian Van Straalen, J. Shalf, T. Ligocki, N. Keen, Woo-Sun Yang, "Scalability challenges for massively parallel AMR applications", IPDPS, 2009, 1-12,
- Download File: ipdps09submit.pdf (pdf: 529 KB)
B. Kallemov, G. H. Miller and D. Trebotich, "A Duhamel Approach for the Langevin Equations with Holonomic Constraints", Molecular Simulation, 35(6):440-447, 2009,
- Download File: MolSim.pdf (pdf: 186 KB)
2008
G.H. Weber, V. Beckner, H. Childs, T. Ligocki, M. Miller, B. van Straalen, E.W. Bethel, "Visualization of Scalar Adaptive Mesh Refinement Data", Numerical Modeling of Space Plasma Flows: Astronum-2007 (Astronomical Society of the Pacific Conference Series), April 2008, 385:309-320, LBNL 220E,
- Download File: LBNL-220E.pdf (pdf: 1.5 MB)
P. Colella, D. Graves, T. Ligocki, D. Trebotich and B.V. Straalen, "Embedded Boundary Algorithms and Software for Partial Differential Equations", 2008 J. Phys.: Conf. Ser. 125 012084, 2008,
- Download File: SciDAC2008-EBAlgor.pdf (pdf: 972 KB)
T.J. Ligocki, P.O. Schwartz, J. Percelay, P. Colella, "Embedded Boundary Grid Generation using the Divergence Theorem, Implicit Functions, and Constructive Solid Geometry", 2008 J. Phys.: Conf. Ser. 125 012080, 2008,
- Download File: SciDAC2008-EBGenerate.pdf (pdf: 192 KB)
D. Trebotich, B.V. Straalen, D. Graves and P. Colella, "Performance of Embedded Boundary Methods for CFD with Complex Geometry", 2008 J. Phys.: Conf. Ser. 125 012083, 2008,
- Download File: SciDAC2008-EBPerform.pdf (pdf: 167 KB)
Colella, P. and Sekora, M. D., "A Limiter for PPM that Preserves Accuracy at Smooth Extrema", Submitted to Journal of Computational Physics, 2008,
- Download File: ColellaSekora.pdf (pdf: 111 KB)
Martin, D.F., Colella, P., and Graves, D.T., "A Cell-Centered Adaptive Projection Method for the Incompressible Navier-Stokes Equations in Three Dimensions", Journal of Computational Physics Vol 227 (2008) pp. 1863-1886., 2008, LBNL 62025, doi: 10.1016/j.jcp.2007.09.032
- Download File: martinColellaGraves2008.pdf (pdf: 3.1 MB)
D. T. Graves, D Trebotich, G. H. Miller, P. Colella, "An Efficient Solver for the Equations of Resistive MHD with Spatially-Varying Resistivity", Journal of Computational Physics Vol 227 (2008) pp. 4797-4804, 2008,
- Download File: gravesTrebMillerColella2008.pdf (pdf: 155 KB)
Miniati, F. and Colella, P., "A Modified Higher-Order Godunov's Scheme for Stiff Source Conservative Hydrodynamics", Journal of Computational Physics Vol. 224 (2007), pp. 519-538, 2008, LBNL 59902,
- Download File: JCPMiniatiColellaJun2007.pdf (pdf: 585 KB)
D. Trebotich, G. H. Miller and M. D. Bybee, "A Penalty Method to Model Particle Interactions in DNA-laden Flows", J. Nanosci. Nanotechnol., 8(7):3749-3756, 2008,
- Download File: JNNpreprint.pdf (pdf: 269 KB)
2007
Katherine Yelick, Paul Hilfinger, Susan Graham, Dan Bonachea, Jimmy Su, Amir Kamil, Kaushik Datta, Phillip Colella, Tong Wen, "Parallel Languages and Compilers: Perspective from the Titanium Experience", International Journal of High Performance Computing Applications (IJHPCA), August 1, 2007, 21(3):266--290, doi: 10.1177/1094342007078449
We describe the rationale behind the design of key features of Titanium — an explicitly parallel dialect of Java for high-performance scientific programming — and our experiences in building applications with the language. Specifically, we address Titanium’s Partitioned Global Address Space model, SPMD parallelism support, multi-dimensional arrays and array-index calculus, memory management, immutable classes (class-like types that are value types rather than reference types), operator overloading, and generic programming. We provide an overview of the Titanium compiler implementation, covering various parallel analyses and optimizations, Titanium runtime technology and the GASNet network communication layer. We summarize results and lessons learned from implementing the NAS parallel benchmarks, elliptic and hyperbolic solvers using Adaptive Mesh Refinement, and several applications of the Immersed Boundary method.
Katherine Yelick, Dan Bonachea, Wei-Yu Chen, Phillip Colella, Kaushik Datta, Jason Duell, Susan L. Graham, Paul Hargrove, Paul Hilfinger, Parry Husbands, Costin Iancu, Amir Kamil, Rajesh Nishtala, Jimmy Su, Michael Welcome, Tong Wen, "Productivity and Performance Using Partitioned Global Address Space Languages", Proceedings of the 2007 International Workshop on Parallel Symbolic Computation (PASCO), July 2007, 24--32, doi: 10.1145/1278177.1278183
Partitioned Global Address Space (PGAS) languages combine the programming convenience of shared memory with the locality and performance control of message passing. One such language, Unified Parallel C (UPC), is an extension of ISO C defined by a consortium that boasts multiple proprietary and open source compilers. Another PGAS language, Titanium, is a dialect of Java designed for high performance scientific computation. In this paper we describe some of the highlights of two related projects, the Titanium project centered at U.C. Berkeley and the UPC project centered at Lawrence Berkeley National Laboratory. Both compilers use a source-to-source strategy that translates the parallel languages to C with calls to a communication layer called GASNet. The result is portable high-performance compilers that run on a large variety of shared and distributed memory multiprocessors. Both projects combine compiler, runtime, and application efforts to demonstrate some of the performance and productivity advantages to these languages.
Miniati, F. and Colella, P., "Block Structured Adaptive Mesh and Time Refinement for Hybrid, Hyperbolic + N-body Systems", Journal of Computational Physics Vol. 227 (2007), pp. 400-430., 2007,
- Download File: JCPMiniatiColellaNov2007.pdf (pdf: 1.1 MB)
McCorquodale, P., Colella, P., Balls, G.T., and Baden, S.B., "A Local Corrections Algorithm for Solving Poisson's Equation in Three Dimensions", Communications in Applied Mathematics and Computational Science Vol. 2, No. 1 (2007), pp. 57-81., 2007, doi: 10.2140/camcos.2007.2.57
Phillip Colella, John Bell, Noel Keen, Terry Ligocki, Michael Lijewski, Brian van Straalen, "Performance and Scaling of Locally-Structured Grid Methods for Partial Differential Equations", presented at SciDAC 2007 Annual Meeting, 2007,
- Download File: AMRPerformance.pdf (pdf: 386 KB)
G. H. Miller and D. Trebotich, "Toward a Mesoscale Model for the Dynamics of Polymer Solutions", J. Comput. Theoret. Nanosci. 4(4):797-801, 2007,
- Download File: JCTNpreprint.pdf (pdf: 271 KB)
D. Trebotich, G. H. Miller and M. D. Bybee, "A Hard Constraint Algorithm to Model Particle Interactions in DNA-laden Flows", Nanoscale and Microscale Thermophysical Engineering 11(1):121-128, 2007,
- Download File: NMTE.pdf (pdf: 132 KB)
D. Trebotich and G. H. Miller, "Simulation of Flow and Transport at the Micro (Pore) Scale", Proceedings of the 2nd International Conference on Porous Media and its Applications in Science and Engineering, ICPM2 June 17-21, Kauai, Hawaii, USA, 2007,
- Download File: Kauaiporousmedia.pdf (pdf: 198 KB)
D. Trebotich, "Toward a Solution to the High Weissenberg Number Problem", Proc. Appl. Math. Mech. 7(1):2100073-2100074, 2007,
- Download File: HWNPPAMM2007.pdf (pdf: 160 KB)
D. Trebotich, "Simulation of Biological Flow and Transport in Complex Geometries using Embedded Boundary / Volume-of-Fluid Methods", Journal of Physics: Conference Series 78 (2007) 012076, 2007,
- Download File: SciDAC2007.pdf (pdf: 453 KB)
2006
Martin, D.F., Colella, P., and Keen, N., "An Incompressible Navier-Stokes with Particles Algorithm and Parallel Implementation", A. Deane, G. Brenner, A. Ecer, D. Emerson, J. McDonough, J. Periaux, N. Satofuka, & D. Tromeur-Dervout (Eds.), Parallel Computational Fluid Dynamics: Theory and Applications, Proceedings of the 2005 International Conference on Parallel Computational Fluid Dynamics, May 24-27, College Park, MD, USA, Elsevier (2006), p. 461-468, 2006, LBNL 58787,
- Download File: MartinColellaKeen.pdf (pdf: 93 KB)
Gatti-Bono, C., Colella, P., "An Anelastic Allspeed Projection Method for Gravitationally Stratified Flows", J. Comput. Phys. Vol. 216 (2006), pp. 589-615, 2006, LBNL 57158,
- Download File: B114.pdf (pdf: 632 KB)
Colella, P., Graves, D.T., Keen, B.J., Modiano, D., "A Cartesian Grid Embedded Boundary Method for Hyperbolic Conservation Laws", Journal of Computational Physics. Vol. 211 (2006), pp. 347-366., 2006, LBNL 56420,
- Download File: A162.pdf (pdf: 354 KB)
Schwartz, P., Barad, M., Colella, P., Ligocki, T.J., "A Cartesian Grid Embedded Boundary Method for the Heat Equation and Poisson's Equation in Three Dimensions", Journal of Computational Physics. Vol. 211 (2006), pp. 531-550., 2006, LBNL 56607,
- Download File: A161.pdf (pdf: 364 KB)
D. Trebotich, "Modeling Complex Biological Flows in Multi-Scale Systems Using the APDEC Framework", Journal of Physics: Conference Series 46 (2006) 316-321., 2006,
- Download File: SciDAC2006.pdf (pdf: 786 KB)
2005
McCorquodale, P., Colella, P., Balls, G., Baden, S.B., "A Scalable Parallel Poisson Solver with Infinite-Domain Boundary Conditions", Proceedings of the 7th Workshop on High Performance Scientific and Engineering Computing, Oslo, Norway, June 2005,
- Download File: HPSEC05.pdf (pdf: 141 KB)
Ryne, R., Abell, D., Adelmann, A., Amundson, J., Bohn, C., Cary, J., Colella, P., Dechow, D., Decyk, V., Dragt, A., Gerber, R., Habib, S., Higdon, D., Katsouleas, T., Ta, K.L., McCorquodale, P., Mihalcea, D., Mitchell, C., Mori, W., Mottershead, C.T., Neri, F., Pogorelov, I., Qiang, J., Samulyak, R., Serafini, D., Shalf, J., Siegerist, C., Spentzouris, P., Stoltz, P., Terzic, B., Venturini, M., Walstrom, P., "SciDAC Advances and Applications in Computational Beam Dynamics", June 2005, LBNL 58243,
- Download File: LBNL-58243.pdf (pdf: 219 KB)
Wen, T., Colella, P., "Adaptive Mesh Refinement in Titanium", Proceedings of the International Parallel and Distributed Processing Symposium, Denver, Colorado, April 2005,
- Download File: A237.pdf (pdf: 171 KB)
Horst Simon, William Kramer, William Saphir, John Shalf, David Bailey, Leonid Oliker, Michael Banda, C. William McCurdy, John Hules, Andrew Canning, Marc Day, Phillip Colella, David Serafini, Michael Wehner, Peter Nugent, "Science-Driven System Architecture: A New Process for Leadership Class Computing", Journal of the Earth Simulator, Volume 2, 2005, LBNL 56545,
- Download File: JES-SDSA.pdf (pdf: 110 KB)
Schwartz, P., Adalsteinsson, D., Colella, P., Arkin, A.P., Onsum, M., "Numerical computation of diffusion on a surface", Proc. Nat. Acad. Sci. Vol. 102 (2005), pp. 11151-11156, 2005,
- Download File: SchwartzETALPNAS.pdf (pdf: 357 KB)
Gatti-Bono, C., Colella, P., "A Filtering Method for Gravitationally Stratified Flows", 2005, LBNL 57161,
- Download File: LBNL-57161.pdf (pdf: 236 KB)
Martin, D.F., Colella, P., Anghel, M., Alexander, F., "Adaptive Mesh Refinement for Multiscale Nonequilibrium Physics", Computing in Science and Engineering Vol.7 N.3 (2005), pp. 24-31, 2005,
- Download File: A159.pdf (pdf: 324 KB)
Barad, M., Colella, P., "A Fourth-Order Accurate Local Refinement Method for Poisson's Equation", J. Comput. Phys. Vol.209 (2005), pp. 1-18, 2005,
- Download File: A158.pdf (pdf: 533 KB)
Trebotich, D., Colella, P., Miller, G.H., "A Stable and Convergent Scheme for Viscoelastic Flow in Contraction Channels", J. Comput. Phys. Vol.205 (2005), pp. 315-342, 2005,
- Download File: A157.pdf (pdf: 827 KB)
Crockett, R.K., Colella, P., Fisher, R., Klein, R.I., McKee, C.F., "An Unsplit, Cell-Centered Godunov Method for Ideal MHD", J. Comput. Phys. Vol.203 (2005), pp. 422-448, 2005,
- Download File: A156.pdf (pdf: 558 KB)
Trebotich, D., Miller, G.H., Colella, P., Graves, D.T., Martin, D.F., Schwartz, P.O., "A Tightly Coupled Particle-Fluid Model for DNA-Laden Flows in Complex Microscale Geometries", Computational Fluid and Solid Mechanics 2005, pp. 1018-1022, Elsevier (K. J. Bathe editor), 2005,
- Download File: MIT3.pdf (pdf: 431 KB)
2004
Simon, H., Kramer, W., Saphir, W., Shalf, J., Bailey, D., Oliker, L., Banda, M., McCurdy, C.W., Hules, J., Canning, A., Day, M., Colella, P., Serafini, D., Wehner, M., Nugent, P., "National Facility for Advanced Computational Science: A Sustainable Path to Scientific Discovery", April 2004, LBNL 5500,
- Download File: PUB-5500.pdf (pdf: 1.8 MB)
McCorquodale, P., Colella, P., Grote, D., Vay, J.L., "A Node-Centered Local Refinement Algorithm for Poisson's Equation in Complex Geometries", J. Comput. Phys. Vol.201 (2004), pp. 34-60, 2004,
- Download File: A153.pdf (pdf: 649 KB)
Deschamps, T., Schwartz, P., Trebotich, D., Colella, P., Malladi, R., Saloner, D., "Vessel Segmentation and Blood Flow Simulation Using Level Sets and Embedded Boundary Methods", Elsevier International Congress Series, 1268, pp. 75-80. Presented at the 18th Conference and Exhibition for Computer Assisted Radiology and Surgery, June, 2004, 2004,
- Download File: A236.pdf (pdf: 224 KB)
Vay, J.L., Colella, P., Friedman, A., Grote, D.P., McCorquodale, P., Serafini, D.B., "Implementations of Mesh Refinement Schemes for Particle-in-Cell Plasma Simulations", Computer Physics Communications Vol.164 (2004), pp. 297-305, 2004,
- Download File: A155.pdf (pdf: 492 KB)
Trebotich, D., Colella, P., Miller, G.H., Nonaka, A., Marshall, T., Gulati, S., Liepmann, D., "A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries", Technical Proceedings of the 2004 Nanotechnology Conference and Trade Show Vol.2 (2004), pp. 470-473, 2004,
- Download File: A235.pdf (pdf: 419 KB)
Samtaney, R., Colella, P., Jardin, S.C., Martin, D.F., "3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks", Computer Physics Communications Vol.164 (2004), pp. 220-228, 2004, doi: 10.1016/j.cpc.2004.06.032
- Download File: A154.pdf (pdf: 309 KB)
Vay, J.L., Colella, P., Kwan, J.W., McCorquodale, P., Serafini, D.B., Friedman, A., Grote, D.P., Westenskow, G., Adam, J.C., "Application of Adaptive Mesh Refinement to Particle-in-Cell Simulations of Plasmas and Beams", Physics of Plasmas Vol.11 (2004), pp. 2928-2934, 2004,
- Download File: A152.pdf (pdf: 275 KB)
Miller, G.H., "Minimal Rotationally Invariant Bases for Hyperelasticity", SIAM J. Appl. Math Vol.64, No. 6 (2004), pp. 2050-2075, 2004,
- Download File: Miller-2004.pdf (pdf: 289 KB)
2003
Balls, G.T., Baden, S.B., Colella, P., "SCALLOP: A Highly Scalable Parallel Poisson Solver in Three Dimensions", Proceedings, SC'03, Phoenix, Arizona, November 2003,
- Download File: A234.pdf (pdf: 1.1 MB)
Miller, G.H., "An Iterative Riemann Solver for Systems of Hyperbolic Conservation Laws, with Application to Hyperelastic Solid Mechanics", J. Comp. Physics Vol.193 (2003), pp. 198-225, 2003,
- Download File: Miller-2003.pdf (pdf: 333 KB)
2002
Miller, G.H., Colella, P., "A Conservative Three-Dimensional Eulerian Method for Coupled Solid-Fluid Shock Capturing", J. Comput. Phys. Vol.183 (2002), pp. 26, 2002,
- Download File: A150.pdf (pdf: 943 KB)
Balls, G., Colella, P., "A Finite Difference Domain Decomposition Method Using Local Corrections for the Solution of Poisson's Equation", J. Comput. Phys. Vol.180 (2002), pp. 25, 2002,
- Download File: A149.pdf (pdf: 296 KB)
Vay, J.L., Colella, P., McCorquodale, P., Van Straalen, B., Friedman, A., Grote, D.P., "Mesh Refinement for Particle-in-Cell Plasma Simulations: Applications to and Benefits for Heavy Ion Fusion", Laser and Particle Beams. Vol.20 N.4 (2002), pp. 569-575, 2002,
- Download File: A151.pdf (pdf: 624 KB)
Colella, P., Graves, D.T., Greenough, J.A., "A Second-Order Method for Interface Reconstruction in Orthogonal Coordinate Systems", January 2002, LBNL 45244,
- Download File: LBNL-45244.pdf (pdf: 192 KB)
2001
McCorquodale, P., Colella, P., Johansen, H., "A Cartesian Grid Embedded Boundary Method for the Heat Equation on Irregular Domains", J. Comput. Phys. Vol.173 (2001), pp. 620-635, 2001,
- Download File: A148.pdf (pdf: 652 KB)
Miller, G.H., Colella, P., "A Higher-Order Godunov Method for Elastic-Plastic Flow in Solids", J. Comput. Phys. Vol.167 (2001), pp. 131, 2001,
- Download File: A147.pdf (pdf: 304 KB)
David Trebotich, Phil Colella, "A Projection Method for Incompressible Viscous Flow on Moving Quadrilateral Grids", J. Comput. Phys. Vol.166 (2001), pp. 191-217, 2001,
- Download File: A146.pdf (pdf: 326 KB)
Gunther H. Weber, Oliver Kreylos, Terry J. Ligocki, John Shalf, Hans Hagen, Bernd Hamann, Ken I. Joy, Kwan-Liu Ma, "High-quality Volume Rendering of Adaptive Mesh Refinement Data", VMV, 2001, 121-128,
2000
Modiano, D., Colella, P., "A Higher-Order Embedded Boundary Method for Time-Dependent Simulation of Hyperbolic Conservation Laws", ASME paper FEDSM00-11220, to appear in Proceedings of the ASME 2000 Fluids Engineering Division Summer Meeting, 2000, LBNL 45239,
- Download File: A233.pdf (pdf: 780 KB)
- Download File: LBNL-45239.ps.gz (gz: 549 KB)
Propp, R., Colella, P., Crutchfield, W.Y., Day, M.S., "A Numerical Model for Trickle-Bed Reactors", J. Comput. Phys., 2000, 165:311-333,
- Download File: A145.pdf (pdf: 195 KB)
Martin D., Colella, P., "A Cell-Centered Adaptive Projection Method for the Incompressible Euler Equations", J. Comput. Phys. Vol.163 (2000), pp. 271-312, 2000, doi: 10.1006/jcph.2000.6575
- Download File: A144.pdf (pdf: 1.6 MB)
1999
Colella, P., Graves, D.T., Modiano, D., Puckett, E.G., Sussman, M., "An Embedded Boundary / Volume of Fluid Method for Free Surface Flows in Irregular Geometries", ASME Paper FEDSM99-7108, in Proceedings of the 3rd ASME/JSME Joint Fluids Engineering Conference, 18-23 July, San Francisco, CA, 1999,
Nelson, E.S., Colella, P., "Parametric Study of Reactive Melt Infiltration", R.M. Sullivan, N.J. Salamon, M. Keyhani, and S. White, eds., "Application of porous media methods for engineered materials", AMD-Vol 233, pp. 1-11, American Society of Mechanical Engineers (1999). (Presented at the 1999 ASME International Mechanical Engineering Congress), 1999,
- Download File: A232.pdf (pdf: 989 KB)
Colella, P., Dorr, M.R., Wake, D.D., "A Conservative Finite Difference Method for the Numerical Solution of Plasma Fluid Equations", J. Comput. Phys. Vol.149 (1999), pp. 168-193, 1999,
- Download File: A137.pdf (pdf: 318 KB)
Colella, P., Dorr, M.R., Wake, D.D., "Numerical Solution of Plasma Fluid Equations Using Locally Refined Grids", J. Comput. Phys. Vol.152 (1999), pp. 550-583, 1999,
- Download File: A141.pdf (pdf: 436 KB)
Colella, P., Pao, K., "A Projection Method for Low Speed Flows", J. Comput. Phys. Vol.149 (1999), pp. 245-269, 1999,
- Download File: A138.pdf (pdf: 294 KB)
Howell, L.H., Pember, R.B., Colella, P., Fiveland, W.A., Jessee, J.P., "A Conservative Adaptive-Mesh Algorithm for Unsteady, Combined-Mode Heat Transfer Using the Discrete Ordinates Method", Numerical Heat Transfer, Part B: Fundamentals , Vol.35, (1999), pp. 407-430, 1999,
- Download File: A140.pdf (pdf: 1.8 MB)
Phil Colella, David Trebotich, "Numerical Simulation of Incompressible Viscous flow in Deforming Domains", Proceedings of the National Academy of Sciences of the United States of America, 1999, 96:5378-5381,
- Download File: A142.pdf (pdf: 443 KB)
Sussman, M.M., Almgren, A.S., Bell, J.B., Colella, P., Howell, L.H., Welcome, M., "An Adaptive Level Set Approach for Incompressible Two-Phase Flows", J. Comp. Phys. Vol.148, pp. 81-124, 1999, LBNL 40327,
- Download File: paper.ps.gz (gz: 577 KB)
1998
Johansen, H., Colella, P., "A Cartesian Grid Embedded Boundary Method for Poisson's Equation on Irregular Domains", J. Comput. Physics, Vol.147, No.1, pp. 60-85, November 1998,
- Download File: A135.pdf (pdf: 390 KB)
Vegas-Landeau, M.A., Propp, R., Patzek, T.W., Colella, P., "A Sequential Semi-Implicit Algorithm for Computing Discontinuous Flows in Porous Media", SPE Journal, June 1998,
K Yelick, L Semenzato, G Pike, C Miyamoto, B Liblit, A Krishnamurthy, P Hilfinger, S Graham, D Gay, P Colella, A Aiken, "Titanium: A high-performance Java dialect", Concurrency Practice and Experience, January 1998, 10:825--836, doi: 10.1002/(SICI)1096-9128(199809/11)10:11/13<825::AID-CPE383>3.0.CO;2-H
- Download File: A231.pdf (pdf: 856 KB)
Dudek, S., Colella, P., "Steady-State Solution-Adaptive Euler Computations on Structured Grids", AIAA paper 98-0543, AIAA Aerospace Sciences meeting, Reno, NV, January 1998,
- Download File: A230.pdf (pdf: 3.2 MB)
Jessee, J.P., Fiveland, W.A., Howell, L.H., Colella, P., Pember, R., "An Adaptive Mesh Refinement Algorithm for the Radiative Transport Equation", J. Comput. Phys. Vol.139, (1998), pp. 380-398, 1998,
- Download File: A133.pdf (pdf: 473 KB)
Pember, R.P., Howell, L.H., Bell, J.B., Colella, P., Crutchfield, W.Y., Fiveland, W.A., Jessee, J.P., "An Adaptive Projection Method For Unsteady Low-Mach Number Combustion", Comb. Sci. Tech., 1998, 140:123-168,
- Download File: A139.pdf (pdf: 1.6 MB)
Almgren, A.S., Bell, J.B., Colella, P., Howell, L.H., Welcome, M.L., "A Conservative Adaptive Projection Method for the Variable Density Incompressible Navier-Stokes Equations", J. Comp. Phys., 1998, 142:1-46, LBNL 39075,
- Download File: A134.pdf (pdf: 501 KB)
M.S. Day, P. Colella, M. Lijewski, C.A. Rendleman and D.L. Marcus, "Embedded Boundary Algorithms for Solving the Poisson Equation on Complex Domains", 1998, LBNL 41811,
- Download File: dclrm.ps.gz (gz: 2.6 MB)
A Projection Method for Incompressible Viscous Flow on a Deformable Domain, Trebotich, D.P., 1998,
- Download File: thesisTrebotichUCB1998.pdf (pdf: 3.5 MB)
Kevin Long, Brian Van Straalen, "PDESolve: an object-oriented PDE analysis environment", Object Oriented Methods for Interoperable Scientific and Engineering Computing: Proceedings of the 1998 SIAM Workshop, 1998, 99:225,
An Adaptive Cell-Centered Projection Method for the Incompressible Euler Equations, Martin, D.F., 1998,
1997
Tallio, K.V., Colella, P., "A Multifluid CFD Turbulent Entrainment Combustion Model: Formulation and One-Dimensional Results", Society of Automotive Engineers Fuels and Lubricants Meeting, November 1997, LBNL 40806,
- Download File: A229.pdf (pdf: 1.1 MB)
Sussman, M.M., Almgren, A.S., Bell, J.B., Colella, P., Howell, L.H., Welcome, M., "An Adaptive Level Set Approach for Incompressible Two-Phase Flows", J. Comp. Phys., 148, pp. 81-124, April 1997, LBNL 40327,
- Download File: LBNL-40327-02.pdf (pdf: 511 KB)
Dudek, S., Colella, P., "A Godunov Steady-State Solver for Structured Grids", AIAA Aerospace Sciences meeting, AIAA paper 97-0875, Reno, NV, January 1997,
Helmsen, J., Colella, P., Puckett, E.G., "Non-Convex Profile Evolution in Two Dimensions Using Volume of Fluids", 1997, LBNL 40693,
Hilditch, J., Colella, P., "A Projection Method for Low-Mach Number Fast Chemistry Reacting Flow", AIAA Aerospace Sciences meeting, AIAA paper 97-0263, Reno, NV, January 1997,
- Download File: A227.pdf (pdf: 972 KB)
Almgren, A.S., Bell, J.B., Colella, P., Marthaler, T., "A Cartesian Grid Projection Method for the Incompressible Euler Equations in Complex Geometries", SIAM J. Sci. Comp., 1997, 18(5):1289-1309,
- Download File: A132.pdf (pdf: 304 KB)
J.A. Greenough, J.B. Bell, P. Colella, E.G. Puckett, "A Numerical Study of Shock-Induced Mixing of a Helium Cylinder: Comparison with Experiment", Proceedings of the 20th International Symposium on Shock Waves, 1997,
Cartesian Grid Embedded Boundary Finite Difference Methods for Elliptic and Parabolic Partial Differential Equations on Irregular Domains, Johansen, H., 1997,
- Download File: HansJohansenThesis1997.pdf (pdf: 8.8 MB)
1996
Martin, D.F., Cartwright, K.L., "Solving Poisson's Equation using Adaptive Mesh Refinement", U.C. Berkeley Electronics Research Laboratory report No. UCB/ERL M96/66, October 19, 1996,
- Download File: MartinCartwright.pdf (pdf: 150 KB)
Jessee, J.P., Howell, L.H., Fiveland, W.A., Colella, P., Pember, R.B., "An Adaptive Mesh Refinement Algorithm for the Discrete Ordinates Methods", Proceedings, ASME 1996 National Heat Transfer Conference, August 1996,
- Download File: A226.pdf (pdf: 1.1 MB)
Mark. M. Sussman, Ann S. Almgren, John B. Bell, Phillip Colella, Louis H. Howell, Michael Welcome, "An Adaptive Level Set Approach for Incompressible Two-Phase Flows", Proceedings of the ASME Fluids Engineering Summer Meeting: Forum on Advances in Numerical Modeling of Free Surface and Interface Fluid Dynamics, July 1996,
Helmsen, J., Puckett, E.G., Colella, P., Dorr, M., "Two New Methods for Simulating Photolithography Development in 3D", Proceedings of the SPIE - The International Society for Optical Engineering Optical Microlithography IX, Santa Clara, CA, March 1996,
- Download File: A224.pdf (pdf: 655 KB)
R.B. Pember, P. Colella, L.H. Howell, A.S. Almgren, J.B. Bell, W.Y. Crutchfield, V.E. Beckner, K.C. Kaufman, W.A. Fiveland, and J.P. Jessee, "The Modeling of a Laboratory Natural Gas-Fired Furnace with a Higher-Order Projection Method for Unsteady Combustion", UCRL-JC123244, February 1996,
- Download File: paper96.ps.gz (gz: 136 KB)
- Download File: abstract.ps.gz (gz: 25 KB)
- Download File: talk.ps.gz (gz: 261 KB)
An Approximate Projection Method Suitable for the Modeling of Rapidly Rotating Flows, Graves, D.T., 1996,
- Download File: GravesThesis.pdf (pdf: 38 MB)
1995
Pember, R.B., Almgren, A.S., Bell, J.B., Colella, P., Howell, L., Lai, M., "A Higher-Order Projection Method for the Simulation of Unsteady Turbulent Nonpremixed Combustion in an Industrial Burner", Proceedings of the 8th International Symposium on Transport Phenomena in Combustion, San Francisco, CA, July 1995,
Almgren, A.S., Bell, J.B., Colella, P., Marthaler, T., "A Cell-Centered Cartesian Grid Projection Method for the Incompressible Euler Equations in Complex Geometries", AIAA paper 95-1924, Proceedings, AIAA 12th Computational Fluid Dynamics Conference, San Diego, CA, June 1995,
- Download File: A217.pdf (pdf: 970 KB)
Steinthorsson, E., Modiano, D., Crutchfield, W.Y., Bell, J.B., Colella, P., "An Adaptive Semi-Implicit Scheme for Simulations of Unsteady Viscous Compressible Flow", AIAA Paper 95-1727-CP, in Proceedings of the 12th AIAA CFD Conference, June 1995,
- Download File: A219.pdf (pdf: 1.1 MB)
Bell, J.B., Colella, P., Greenough, J.A., Marcus, D.L., "A Multi-Fluid Algorithm for Compressible, Reacting Flow", AIAA 95-1720, 12th AIAA Computational Fluid Dynamics Conference, San Diego, CA, June 1995,
- Download File: A218.pdf (pdf: 411 KB)
Greenough, J.A., Beckner, V., Pember, R.B., Crutchfield, W.Y., Bell, J.B., Colella, P., "An Adaptive Multifluid Interface-Capturing Method for Compressible Flow in Complex Geometries", AIAA-95-1718, Proceedings of 26th AIAA Fluid Dynamics Conference, San Diego, CA, June 1995,
- Download File: A221.pdf (pdf: 759 KB)
Pember, R.B., Bell, J.B., Colella, P., Crutchfield, W.Y., Welcome, M.L., "An Adaptive Cartesian Grid Method for Unsteady Compressible Flow in Irregular Regions", J. Comp. Phys., 1995, 120(2):278-304,
- Download File: A131.pdf (pdf: 1.9 MB)
Hilditch, J., Colella, P., "A Front Tracking Method for Compressible Flames in One Dimension", SIAM Journal on Scientific Computing, Vol.16 No.4 (1995), pp. 755-772, 1995,
- Download File: A130.pdf (pdf: 684 KB)
Chien, K.Y., Ferguson, R.E., Kuhl, A.L., Glaz, H.M., Colella, P., "Inviscid Dynamics of Two-Dimensional Shear Layers", International Journal of Computational Fluid Dynamics, Vol.5 No.1-2, pp. 59+, 1995,
Collins, J.P., Colella, P., Glaz, H.M., "An Implicit-Explicit Eulerian Godunov Scheme for Compressible Flow", J. Comp. Phys., Vol.116 No.2, pp. 195-211, 1995,
- Download File: A128.pdf (pdf: 1.1 MB)
Ann S. Almgren, John B. Bell, Phillip Colella, Louis H. Howell, Michael Welcome, "A High-Resolution Adaptive Projection Method for Regional Atmospheric Modeling", Proceedings of the NGEMCOM Conference sponsored by the U.S. EPA, August 7-9, Bay City, MI, 1995,
Richard B. Pember, Ann S. Almgren, John B. Bell, Phillip Colella, Louis Howell, and Mindy Lai, "A Higher-Order Projection Method for the Simulation of Unsteady Turbulent Nonpremixed Combustion in an Industrial Burner", Proceedings of the 8th International Symposium on Transport Phenomena in Combustion, July 16-20, San Francisco, CA, 1995,
- Download File: paper95.ps.gz (gz: 62 KB)
- Download File: abstract95.ps.gz (gz: 10 KB)
- Download File: preprint.ps.gz (gz: 101 KB)
R.B. Pember, A.S. Almgren, W.Y. Crutchfield, L.H. Howell, J.B. Bell, P. Colella, and V.E. Beckner, "An Embedded Boundary Method for the Modeling of Unsteady Combustion in an Industrial Gas-Fired Furnace", WSS/CI 95F-165, 1995 Fall Meeting of the Western United States Section of the Combustion Institute, Stanford University, October 30-31, 1995,
- Download File: paper1995.ps.gz (gz: 201 KB)
- Download File: abs95.ps.gz (gz: 18 KB)
1994
P. Colella and W.Y. Crutchfield, "A Parallel Adaptive Mesh Refinement Algorithm on the C-90", Energy Research Power Users Symposium, July 12, 1994,
Almgren, A.S., Buttke, T., Colella, P., "A Fast Adaptive Vortex Method In Three Dimensions", J. Comp. Phys., Vol.113 No.2, pp. 177-200, 1994,
- Download File: A124.pdf (pdf: 1.4 MB)
Zachary, A.L., Malagoli, A., Colella, P., "A Higher-Order Godunov Method for Multidimensional Ideal Magnetohydrodynamics", SIAM Journal on Scientific Computing, Vol.15 No.2, pp. 263-284, 1994,
- Download File: A125.pdf (pdf: 572 KB)
Liu, J.C., Colella, P., Peterson, P.F., Schrock, V.E., "Modeling Supersonic Flows Through a Gas-Continuous 2-Fluid Medium", Nuclear Engineering and Design, Vol.146 No.1-3, pp. 337-348, 1994,
- Download File: A127.pdf (pdf: 752 KB)
Klein, R.I., McKee, C.F., Colella, P., "On the Hydrodynamic Interaction of Shock Waves with Interstellar Clouds .1. Nonradiative Shocks in Small Clouds", Astrophysical Journal, Vol.420 No.1, pp. 213-236, 1994,
- Download File: A126.pdf (pdf: 3 MB)
Steinthorsson, E., Modiano, D., Colella, P., "Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems", NASA technical memorandum 106704, ICOMP report no. 94-17, 1994,
- Download File: A215.pdf (pdf: 1.2 MB)
Ann S. Almgren, John B. Bell, Louis H. Howell and Phillip Colella, "An Adaptive Projection Method for the Incompressible Navier-Stokes Equations", Proceedings of the 14th IMACS World Congress, July 11-15, pp. 537-540, Atlanta, Georgia, 1994,
1993
Almgren, A.S., Bell, J.B., Colella, P., Howell, L.H., "An Adaptive Projection Method for the Incompressible Euler Equations", Proceedings of the AIAA 11th Computational Fluid Dynamics Conference, Orlando, FL, July 1993,
- Download File: A212.pdf (pdf: 783 KB)
Lai, M., Colella, P., Bell, J., "A Projection Method for Combustion in the Zero Mach Number Limit", AIAA paper 93-3369, Proceedings of the AIAA 11th Computational Fluid Dynamics Conference, Orlando, FL, July 1993,
- Download File: A213.pdf (pdf: 733 KB)
Pember, R.B., Bell, J.B., Colella, P., Crutchfield, W.Y., Welcome, M.L., "Adaptive Cartesian Grid Methods for Representing Geometry in Inviscid Compressible Flow", Proceedings of the 11th AIAA CFD Conference, Orlando, Florida, July 1993,
- Download File: A214.pdf (pdf: 898 KB)
Hilfinger, P.N., Colella, P., "FIDIL Reference Manual", UC Berkeley Computer Science Division Report, UCB/CSD-93-759, May 1993,
- Download File: B17.pdf (pdf: 1.7 MB)
1992
Chen, X.M., Schrock, V.E., Peterson, P.F., Colella, P., "Gas Dynamics in the Central Cavity of the Hylife-II Reactor", Fusion Technology, Vol.21 No.3, pp. 1520-1524., 1992,
- Download File: A123.pdf (pdf: 972 KB)
Zachary, A.L., Colella, P., "A Higher-Order Godunov Method for the Equations of Ideal Magnetohydrodynamics", Journal of Computational Physics, Vol.99 No.2, pp. 341-347., 1992,
- Download File: A122.pdf (pdf: 158 KB)
1991
Chien, K.Y., Ferguson, R.E., Kuhl, A.L., Glaz, H.M., Colella P., "Inviscid Dynamics of Two-Dimensional Shear Layers", Proceedings, 22nd AIAA Fluid Dynamics, Plasma Dynamics, and Lasers Conference, Honolulu, Hawaii, June 1991,
- Download File: June1991.pdf (pdf: 1.4 MB)
Bell, J.B., Colella P., Welcome, M., "Conservative Front-Tracking for Inviscid Compressible Flow", Proceedings, 10th AIAA Computational Fluid Dynamics Conference, pp. 814-822., Honolulu, Hawaii, June 1991,
- Download File: A2111991.pdf (pdf: 914 KB)
Bell, J.B., Colella P., Howell, L., "An Efficient Second-Order Projection Method for Viscous Incompressible Flow", Proceedings, 10th AIAA Computational Fluid Dynamics Conference, pp. 360-367., Honolulu, Hawaii, June 1991,
- Download File: A210.pdf (pdf: 925 KB)
Henderson L.F., Colella P., Puckett E.G., "On the Refraction of Shock Waves at a Slow Fast Gas Interface", Journal of Fluid Mechanics, Vol.224 MAR:1+., 1991,
- Download File: A211991.pdf (pdf: 1.7 MB)
Trangenstein J.A., Colella P., "A Higher-Order Godunov Method for Modeling Finite Deformation in Elastic-Plastic Solids", Communications on Pure and Applied Mathematics, Vol.44 No.1, pp. 41-100, 1991,
- Download File: A1201991.pdf (pdf: 2.1 MB)
1990
Kuhl, A.L., Ferguson, R.E., Chien, K.Y., Glowacki, W., Collins, P., Glaz, H., Colella P., "Turbulent Wall Jet in a Mach Reflection Flow", Progress in Aeronautics and Astronautics, Vol.13, pp. 201-232, 1990,
- Download File: A119.pdf (pdf: 544 KB)
Colella, P., Henderson, L.F., "The von Neumann Paradox for the Diffraction of Weak Shock Waves", Journal of Fluid Mechanics, Vol.213, pp. 71-94., 1990,
- Download File: A117.pdf (pdf: 1.6 MB)
Colella, P., "Multidimensional Upwind Methods for Hyperbolic Conservation Laws", J. Comp. Phys., Vol.87 No.1, pp. 171-200, 1990,
- Download File: A118.pdf (pdf: 882 KB)
1989
Colella, P., Henderson, L.F., Puckett, E.G., "A Numerical Study of Shock Wave Refractions at a Gas Interface", Proceedings, 9th AIAA Computational Fluid Dynamics Conference, pp. 426-439, Buffalo, NY, 1989,
- Download File: A271989.pdf (pdf: 1022 KB)
Bell, J.B., Colella P., Trangenstein, J. A., Welcome, M., "Adaptive Mesh Refinement on Moving Quadrilateral Grids", Proceedings, 9th AIAA Computational Fluid Dynamics Conference, pp. 471-479, Buffalo, NY, 1989,
- Download File: April1989.pdf (pdf: 868 KB)
Bell, J.B., Colella, P., Glaz, H.M., "A Second-Order Projection Method for the Incompressible Navier Stokes Equations", J. Comp. Phys., Vol.85 No.2, pp. 257-283, 1989,
- Download File: A115.pdf (pdf: 535 KB)
Bell, J.B., Colella, P., Trangenstein, J.A., "Higher Order Godunov Methods for General Systems of Hyperbolic Conservation Laws", Journal of Computational Physics, Vol.82 No.2, pp. 362-397, 1989,
- Download File: A141989.pdf (pdf: 1.3 MB)
Berger M.J., Colella P., "Local Adaptive Mesh Refinement for Shock Hydrodynamics", J. Comp. Phys., Vol.82 No.1, pp. 64-84, 1989,
- Download File: A113.pdf (pdf: 366 KB)
Sakurai, A., Henderson, L.F., Takayama, K., Walenta, Z., Colella P., "On the Von Neumann Paradox of Weak Mach Reflection", Fluid Dynamics Research, Vol.4, pp. 333-346, 1989,
- Download File: A1161989.pdf (pdf: 848 KB)
1988
Bell, J.B., Colella, P., Trangenstein, J.A., Welcome, M., "Godunov Methods and Adaptive Algorithms for Unsteady Fluid Dynamics", Proceedings, 11th International Conference on Numerical Methods in Fluid Dynamics, Springer Lecture Notes in Physics Vol.323, pp. 137-141, Williamsburg, Virginia, June 1988,
- Download File: June1988.pdf (pdf: 438 KB)
Glaz, H.M., Colella, P., Collins, J.P., Ferguson, R.E., "Nonequilibrium Effects in Oblique Shock-Wave Reflection", AIAA Journal, Vol.26, pp. 698-705., 1988,
- Download File: A1121988.pdf (pdf: 777 KB)
Hilfinger, P.N., Colella P., "FIDIL: A Language for Scientific Programming", Symbolic Computation: Applications to Scientific Computing, SIAM Frontiers in Applied Mathematics, Vol.5, pp. 97-138, 1988,
- Download File: B231988.pdf (pdf: 2 MB)
1987
Chern, I.L., Colella P., "A Conservative Front Tracking Method for Hyperbolic Conservation Laws", Lawrence Livermore National Laboratory Report UCRL-97200, July 1987,
- Download File: B141987.pdf (pdf: 1.9 MB)
Bell, J.B., Colella P., Trangenstein, J., Welcome, M., "Adaptive Methods for High Mach Number Reacting Flow", Proceedings, AIAA 8th Computational Fluid Dynamics Conference, pp. 717-725, Honolulu, Hawaii, June 1987,
- Download File: A241987.pdf (pdf: 562 KB)
Glaz, H.M., Colella, P., Collins, J.P., Ferguson, R.E., "High Resolution Calculations of Unsteady, Two-Dimensional Non-Equilibrium Gas Dynamics with Experimental Comparisons", AIAA paper 87-1293, Proceedings, AIAA 8th Computational Fluid Dynamics Conference, Honolulu, Hawaii, June 1987,
- Download File: A251987.pdf (pdf: 883 KB)
Bell, J.B., Colella P., Glaz, H.M., "A Second Order Projection Method for Viscous Incompressible Flow", AIAA paper 87-1176-CP, Proceedings, AIAA 8th Computational Fluid Dynamics Conference, pp. 789-794, Honolulu, Hawaii, June 1987,
1986
Fryxell, B.A., Woodward, P.R., Colella, P., Winkler, K.H., "An Implicit-Explicit Hybrid Method for Lagrangian Hydrodynamics", Journal of Computational Physics, Vol.62, pp. 283-310, 1986,
- Download File: A191986.pdf (pdf: 508 KB)
Glaz, H.M., Colella P., Glass, I.I., Deschambault, R.L., "Mach Reflection from an HE-Driven Blast Wave", Progress in Astronautics and Aeronautics, Vol.106, pp. 388-421, 1986,
- Download File: A1111986.pdf (pdf: 1.5 MB)
Colella, P., Majda, A., Roytburd, V., "Theoretical and Numerical Structure for Reacting Shock Waves", SIAM Journal on Sci. Stat. Computing, Vol.7 No.4, pp. 1059-1080, 1986,
- Download File: A110.pdf (pdf: 759 KB)
1985
Glaz, H.M., Colella P., Glass, I.I., Deschambault, R.L., "A Numerical Study of Oblique Shock-Wave Reflections with Experimental Comparisons", Proceedings of the Royal Society of London A, Vol.398, pp. 117-140, 1985,
- Download File: A18.pdf (pdf: 1.3 MB)
Colella, P., Glaz, H.M., "Efficient Solution Algorithms for the Riemann Problem for Real Gases", J. Comp. Phys., Vol.59 No.2, pp. 264-289, 1985,
- Download File: A17.pdf (pdf: 565 KB)
Colella, P., "A Direct Eulerian MUSCL Scheme for Gas Dynamics", SIAM Journal for Sci. Stat. Computing, Vol.6 No.1, pp. 104-117., 1985,
- Download File: A16.pdf (pdf: 716 KB)
1984
Eidelman, S., Colella P., Shreeve, R.P., "Application of the Godunov Method and its Second-Order Extension to Cascade Flow Modeling", AIAA Journal, Vol.22 No.11, pp. 1609-1615, 1984,
- Download File: A15.pdf (pdf: 1.1 MB)
Colella P., Glaz, H.M., "Numerical Calculation of Complex Shock Reflections in Gases", Proceedings, 9th International Conference on Numerical Methods in Fluid Dynamics, Saclay, France, June, 1984, Springer Lecture Notes in Physics, Vol.218, pp. 154-158, 1984,
- Download File: A231984.pdf (pdf: 388 KB)
Woodward, P.R., Colella, P., "The Numerical Simulation of Two-Dimensional Fluid Flow with Strong Shocks", J. Comp. Phys., Vol.54 No.1, pp. 115-173, 1984,
- Download File: A31984.pdf (pdf: 3.4 MB)
Colella P., Woodward, P.R., "The Piecewise Parabolic Method (PPM) for Gas-Dynamical Simulations", J. Comp. Phys., Vol.54 No.1, pp. 174-201, 1984,
- Download File: A141984.pdf (pdf: 487 KB)
1982
Colella P., Glaz, H.M., "Numerical Modelling of Inviscid Shocked Flows of Real Gases", Proceedings, 8th International Conference on Numerical Methods in Fluid Dynamics,Springer Lecture Notes in Physics, Vol.170, pp. 175-182, Aachen, Germany, June 1982,
- Download File: A221982.pdf (pdf: 350 KB)
Colella P., "Glimm's Method for Gas Dynamics", SIAM Journal for Sci. Stat. Computing, Vol.3 No.1, pp. 76-110, 1982,
- Download File: A12.pdf (pdf: 852 KB)
1980
Woodward, P.R., Colella P., "High Resolution Difference Schemes for Compressible Gas Dynamics", Proceedings, 7th International Conference on Numerical Methods in Fluid Dynamics, Stanford, CA, June, 1980, Springer Lecture Notes in Physics, Vol.142, pp. 434-441, June 1980,
- Download File: A211980.pdf (pdf: 519 KB)
1973
Colella P., Lanford, O.E., "Sample Field Behavior for the Free Markov Random Field", 1973,
- Download File: B21.pdf (pdf: 903 KB)