Steven Hofmeyr

Phone: +1 510 486 4505
Fax: +1 510 486 6900

Research Interests

  • Large-scale, high-performance genome assembly.
  • Modeling of complex systems, such as the Internet and financial markets.
  • Information security, particularly understanding the impact of policies and regulations and the dynamic interplay of chronic attack and defense on a large scale.
  • New operating systems for the many-core era.
  • Scheduling and load-balancing for parallel applications.

Awards

Talks and panels

  • Fighting unknown attacks (panel). eSecureLive. May 2005.
  • The computer security arms race. Systems and Software Technology Conference. April 2005.
  • The future of security (panel). eFinancial WorldExpo. November 2004.
  • Reflections on adaptable, host-based intrusion prevention systems. Adaptive and Resilient Computer Security Workshop at the Santa Fe Institute. November 2004.
  • What’s next: new threats and new answers (panel). eWeek Security Summit. September 2004.
  • Acting in milliseconds: why defense processes have to change (panel). Blackhat Briefings. July 2004.
  • Policy-based security: automating intrusion response (panel). eWeek Security Summit. May 2004.
  • Immune system analogues approach. JET Roadmap Workshop. April 2004.
  • Adaptive intrusion protection. RSA Conference. February 2004.
  • Emulating the human biosystem to meet cyber security challenges (panel). RSA Conference. February 2004.
  • Attaining security currency: operational, risk, and cost efficiency approaches to patch management (panel). RSA Conference. February 2004.
  • False positives, intrusion detection, and robust system design. Security and Privacy Vanguard Conference. February 2004.
  • Preventing intrusions and tolerating false positives. BlackHat Windows Security Briefings. January 2004.
  • Keynote: Damage-response systems. Adaptive and resilient computing security. Santa Fe Institute Workshop. November 2003.
  • Change management: secure your applications and live through it. CSI NetSec. November 2003.
  • Growing a venture-backed security company from New Mexico origins. Coronado Ventures Forum. May 2003.
  • The solution for today’s security threats? The world’s oldest security system. RSA Conference. April 2003.
  • Distributed security: lessons from nature. O’Reilly Emerging Technology Conference. May 2002.
  • How to feel secure in an insecure world (panel). PCForum. March 2002.
  • Is wireless an enabler for global security? (panel) Wireless Systems Design Conference. February 2002.

Journal Articles

M. Ferroni, JA Colmenares, S Hofmeyr, JD Kubiatowicz, MD Santambrogio, "Enabling power-awareness for the Xen Hypervisor", ACM SIGBED Review, March 20, 2018, 1:36-42,

M Ferroni, A Corna, A Damiani, R Brondolin, JA Colmenares, S Hofmeyr, JD Kubiatowicz, MD Santambrogio, "Power consumption models for multi-tenant server infrastructures", ACM Transactions on Architecture and Code Optimization, 2017, 14, doi: 10.1145/3148965

Khaled Z. Ibrahim, Steven Hofmeyr, Costin Iancu, "The Case for Partitioning Virtual Machines on Manycore Architectures", IEEE Transactions on Parallel and Distributed Systems, April 17, 2014,

S Hofmeyr, JA Colmenares, C Iancu, J Kubiatowicz, "Juggle: Addressing extrinsic load imbalances in SPMD applications on multicore computers", Cluster Computing, 2013, 16:299--319, doi: 10.1007/s10586-012-0204-0

S Hofmeyr, J Colmenares, J Kubiatowicz, C Iancu, "Juggle: Addressing Extrinsic Load Imbalances in SPMD Applications on Multicore Computer", Cluster Computing, 2012,

S. Hofmeyr, "The information security technology arms race", Crosstalk: The Journal of Defense Software Engineering, October 1, 2005,

S. Hofmeyr, "New approaches to security: lessons from nature", Secure Convergence Journal, June 1, 2005,

S. Hofmeyr, "Host intrusion detection: part of the operating system or on top of the operating system", Computers & Security, February 1, 2005,

S Hofmeyr, "The implications of immunology for secure systems design", Computers and Security, January 1, 2004, 23:453--455, doi: 10.1016/S0167-4048(04)00166-X

S Hofmeyr, "A new approach to security: Learning from immunology", Information Systems Security, January 1, 2003, 12:29--35, doi: 10.1201/1086/43648.12.4.20030901/77303.6

S. Hofmeyr, "Why today's security technologies are so inadequate: history, implications and new approaches", Information Systems Security, January 1, 2003,

S. Forrest, S. Hofmeyr, "Engineering an immune system", Graft, June 1, 2001,

SA Hofmeyr, S Forrest, "Architecture for an artificial immune system.", Evolutionary computation, 2000, 8:443--473, doi: 10.1162/106365600568257

SA Hofmeyr, S Forrest, A Somayaji, "Intrusion Detection Using Sequences of System Calls", J. Comput. Secur., January 1, 1998, 6:151--180,

AP Kosoresow, SA Hofmeyr, "Intrusion detection via system call traces", IEEE Software, January 1, 1997, 14:35--41, doi: 10.1109/52.605929

S Forrest, SA Hofmeyr, A Somayaji, "Computer Immunology", Communications of the ACM, January 1, 1997, 40:88--96, doi: 10.1145/262793.262811

Conference Papers

John Bachan, Scott B. Baden, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Dan Bonachea, Paul H. Hargrove, Hadia Ahmed, "UPC++: A High-Performance Communication Framework for Asynchronous Computation", 33rd IEEE International Parallel & Distributed Processing Symposium (IPDPS'19), Rio de Janeiro, Brazil, IEEE, May 2019, doi: 10.25344/S4V88H

UPC++ is a C++ library that supports high-performance computation via an asynchronous communication framework. This paper describes a new incarnation that differs substantially from its predecessor, and we discuss the reasons for our design decisions. We present new design features, including future-based asynchrony management, distributed objects, and generalized Remote Procedure Call (RPC).
We show microbenchmark performance results demonstrating that one-sided Remote Memory Access (RMA) in UPC++ is competitive with MPI-3 RMA; on a Cray XC40 UPC++ delivers up to a 25% improvement in the latency of blocking RMA put, and up to a 33% bandwidth improvement in an RMA throughput test. We showcase the benefits of UPC++ with irregular applications through a pair of application motifs, a distributed hash table and a sparse solver component. Our distributed hash table in UPC++ delivers near-linear weak scaling up to 34816 cores of a Cray XC40. Our UPC++ implementation of the sparse solver component shows robust strong scaling up to 2048 cores, where it outperforms variants communicating using MPI by up to 3.1x.
UPC++ encourages the use of aggressive asynchrony in low-overhead RMA and RPC, improving programmer productivity and delivering high performance in irregular applications.
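
The distributed hash table motif mentioned above is built from UPC++ RPC and future-based asynchrony. The following is a minimal sketch of that style under stated assumptions: the names, the string payload, and the hashing scheme are illustrative, not the implementation benchmarked in the paper.

```cpp
// Toy distributed hash table insert using UPC++ RPC and futures (illustrative sketch).
#include <upcxx/upcxx.hpp>
#include <string>
#include <unordered_map>

using dmap_t = upcxx::dist_object<std::unordered_map<std::string, std::string>>;

// Insert by sending an RPC to the rank that owns the key's hash bucket.
upcxx::future<> dht_insert(dmap_t &dmap, const std::string &key,
                           const std::string &val) {
  int owner = std::hash<std::string>{}(key) % upcxx::rank_n();
  return upcxx::rpc(owner,
      [](dmap_t &lmap, const std::string &k, const std::string &v) {
        (*lmap)[k] = v;                 // runs on the owning rank
      },
      dmap, key, val);
}

int main() {
  upcxx::init();
  dmap_t dmap({});                      // collectively construct the per-rank map
  // Each rank inserts one key; the returned future resolves when the RPC completes.
  dht_insert(dmap, "key" + std::to_string(upcxx::rank_me()), "value").wait();
  upcxx::barrier();                     // ensure all inserts have landed
  upcxx::finalize();
  return 0;
}
```

Because dht_insert returns a future, a caller can issue many inserts and conjoin or wait on them later, which is the aggressive-asynchrony style the paper advocates.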

E Georganas, R Egan, S Hofmeyr, E Goltsman, B Arnt, A Tritt, A Buluc, L Oliker, K Yelick, "Extreme Scale De Novo Metagenome Assembly", International Conference for High Performance Computing, Networking, Storage, and Analysis (SC18), 2018,

C Imes, S Hofmeyr, H Hoffmann, "Energy-efficient application resource scheduling using machine learning classifiers", ACM International Conference Proceeding Series, 2018, doi: 10.1145/3225058.3225088

L Di Tucci, D Conficconi, A Comodi, S Hofmeyr, D Donofrio, MD Santambrogio, "A parallel, energy efficient hardware architecture for the merAligner on FPGA using chisel HCL", Proceedings - 2018 IEEE 32nd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2018, 2018, 214--217, doi: 10.1109/IPDPSW.2018.00041

John Bachan, Dan Bonachea, Paul H Hargrove, Steve Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, Scott B Baden, "The UPC++ PGAS library for Exascale Computing", Proceedings of the Second Annual PGAS Applications Workshop, November 13, 2017, 7,

We describe UPC++ V1.0, a C++11 library that supports APGAS programming. UPC++ targets distributed data structures where communication is irregular or fine-grained. The key abstractions are global pointers, asynchronous programming via RPC, and futures. Global pointers incorporate ownership information useful in optimizing for locality. Futures capture data readiness state, are useful for scheduling and also enable the programmer to chain operations to execute asynchronously as high-latency dependencies become satisfied, via continuations. The interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and closely resemble those used in modern C++. Communication in UPC++ runs at close to hardware speeds by utilizing the low-overhead GASNet-EX communication library.
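
As an illustrative sketch of the abstractions named above (global pointers, one-sided communication, futures, and continuations), the fragment below performs a simple neighbor exchange; the pattern and variable names are assumptions for exposition, not code from the paper.

```cpp
// UPC++ global pointers with an asynchronous one-sided put chained onto a fetch.
#include <upcxx/upcxx.hpp>
#include <cstdio>

int main() {
  upcxx::init();
  // Allocate one integer in this rank's shared segment and publish the pointer.
  upcxx::global_ptr<int> mine = upcxx::new_<int>(0);
  upcxx::dist_object<upcxx::global_ptr<int>> dptr(mine);

  // Fetch the right neighbor's global pointer, then chain an asynchronous put
  // onto it via a continuation; nothing blocks until wait().
  int neighbor = (upcxx::rank_me() + 1) % upcxx::rank_n();
  upcxx::future<> done = dptr.fetch(neighbor).then(
      [](upcxx::global_ptr<int> remote) {
        return upcxx::rput(upcxx::rank_me(), remote);  // one-sided write
      });
  done.wait();

  upcxx::barrier();                      // all puts are now globally visible
  std::printf("rank %d received %d\n", upcxx::rank_me(), *mine.local());
  upcxx::delete_(mine);
  upcxx::finalize();
  return 0;
}
```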

M Ellis, E Georganas, R Egan, S Hofmeyr, A Buluç, B Cook, L Oliker, K Yelick, "Performance characterization of de novo genome assembly on leading parallel systems", Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2017, 10417 LN:79--91, doi: 10.1007/978-3-319-64203-1_6

E Georganas, M Ellis, R Egan, S Hofmeyr, A Buluç, B Cook, L Oliker, K Yelick, "MerBench: PGAS benchmarks for high performance genome assembly", Proceedings of PAW 2017: 2nd Annual PGAS Applications Workshop - Held in conjunction with SC 2017: The International Conference for High Performance Computing, Networking, Storage and Analysis, 2017, 2017-Jan:1--4, doi: 10.1145/3144779.3169109

B Edwards, S Hofmeyr, S Forrest, "Hype and heavy tails: A closer look at data breaches", Journal of Cybersecurity, 2016, 2:3--14, doi: 10.1093/cybsec/tyw003

S Hofmeyr, C Iancu, J Colmenares, E Roman, B Austin, "Time-Sharing Redux for Large-Scale HPC Systems", Proceedings - 18th IEEE International Conference on High Performance Computing and Communications, 14th IEEE International Conference on Smart City and 2nd IEEE International Conference on Data Science and Systems, HPCC/SmartCity/DSS 2016, 2016, 301--308, doi: 10.1109/HPCC-SmartCity-DSS.2016.0051

M Ferroni, JA Colmenares, S Hofmeyr, JD Kubiatowicz, MD Santambrogio, "Enabling power-awareness for the Xen hypervisor", CEUR Workshop Proceedings, 2016, 1697,

E Georganas, A Buluç, J Chapman, S Hofmeyr, C Aluru, R Egan, L Oliker, D Rokhsar, K Yelick, "HipMer: An extreme-scale de novo genome assembler", International Conference for High Performance Computing, Networking, Storage and Analysis, SC, January 1, 2015, 15-20-No, doi: 10.1145/2807591.2807664

B Edwards, S Hofmeyr, S Forrest, M Van Eeten, "Analyzing and modeling longitudinal security data: Promise and pitfalls", ACM International Conference Proceeding Series, 2015, 7-11-Dec:391--400, doi: 10.1145/2818000.2818010

JA Colmenares, G Eads, S Hofmeyr, S Bird, M Moretó, D Chou, B Gluzman, E Roman, DB Bartolini, N Mor, K Asanović, JD Kubiatowicz, "Tessellation: Refactoring the OS around explicit resource containers with continuous adaptation", Proceedings - Design Automation Conference, 2013, doi: 10.1145/2463209.2488827

B Edwards, T Moore, G Stelle, S Hofmeyr, S Forrest, "Beyond the blacklist: Modeling malware spread and the effect of interventions", Proceedings New Security Paradigms Workshop, January 1, 2012, 53--65,

S. Hofmeyr, T. Moore, S. Forrest, B. Edwards, G. Stelle, "Modeling Internet scale policies for cleaning up malware", Workshop on the Economics of Information Security (WEIS 2011), June 14, 2011,

KZ Ibrahim, S Hofmeyr, C Iancu, E Roman, "Optimized pre-copy live migration for memory intensive applications", Proceedings of 2011 SC - International Conference for High Performance Computing, Networking, Storage and Analysis, 2011, doi: 10.1145/2063384.2063437

S Hofmeyr, JA Colmenares, C Iancu, J Kubiatowicz, "Juggle: Proactive load balancing on multicore computers", Proceedings of the IEEE International Symposium on High Performance Distributed Computing, 2011, 3--14, doi: 10.1145/1996130.1996134

KZ Ibrahim, S Hofmeyr, C Iancu, "Characterizing the performance of parallel applications on multi-socket virtual machines", Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011, 2011, 1--12, doi: 10.1109/CCGrid.2011.50

JA Colmenares, S Bird, G Eads, S Hofmeyr, A Kim, R Poddar, H Alkaff, K Asanović, J Kubiatowicz, "Tessellation operating system: Building a real-time, responsive, high-throughput client OS for many-core architectures", 2011 IEEE Hot Chips 23 Symposium, HCS 2011, 2011, doi: 10.1109/HOTCHIPS.2011.7477518

J. A. Colmenares, S. Bird, H. Cook, P. Pearce, D. Zhu, J. Shalf, S. Hofmeyr, K. Asanovic, J. Kubiatowicz, "Resource Management in the Tessellation Manycore OS", 2nd Usenix Workshop on Hot Topics in Parallelism (HotPar), June 15, 2010,

S Hofmeyr, C Iancu, F Blagojević, "Load balancing on speed", Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP, January 1, 2010, 147--157, doi: 10.1145/1693453.1693475

C Iancu, S Hofmeyr, F Blagojević, Y Zheng, "Oversubscription on multicore processors", Proceedings of the 2010 IEEE International Symposium on Parallel and Distributed Processing, IPDPS 2010, 2010, doi: 10.1109/IPDPS.2010.5470434

R. Liu, K. Klues, S. Bird, S. Hofmeyr, K. Asanovic, J. D. Kubiatowicz, "Tessellation: Space-Time Partitioning in a Manycore Client OS", First USENIX Workshop on Hot Topics in Parallelism, June 15, 2009,

C Iancu, S Hofmeyr, "Runtime optimization of vector operations on large scale SMP clusters", Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT, 2008, 122--132, doi: 10.1145/1454115.1454134

S Forrest, S Hofmeyr, A Somayaji, "The evolution of system-call monitoring", Proceedings - Annual Computer Security Applications Conference, ACSAC, January 1, 2008, 418--430, doi: 10.1109/ACSAC.2008.54

SA Hofmeyr, S Forrest, "Immunity by Design: An Artificial Immune System", GECCO’99, San Francisco, CA, USA, Morgan Kaufmann Publishers Inc., January 1, 1999, 1289--1296,

S. Hofmeyr, S. Forrest, P. D'haeseleer, "An immunological approach to distributed network intrusion detection", Recent Advances in Intrusion Detection (RAID), September 14, 1998,

A Somayaji, S Hofmeyr, S Forrest, "Principles of a computer immune system", Proceedings New Security Paradigms Workshop, January 1, 1998, Part F12:75--82, doi: 10.1145/283699.283742

S Forrest, SA Hofmeyr, A Somayaji, TA Longstaff, "A Sense of Self for Unix Processes", SP ’96, Washington, DC, USA, IEEE Computer Society, January 1, 1996, 120--128,

Book Chapters

E. Georganas, S. Hofmeyr, L. Oliker, R. Egan, D. Rokhsar, A. Buluc, K. Yelick, "Extreme-scale de novo genome assembly", Exascale Scientific Applications: Scalability and Performance Portability, edited by T.P. Straatsma, K. B. Antypas, T. J. Williams, ( November 13, 2017) doi: 10.1201/b21930

S. Hofmeyr, "An interpretive introduction to the immune system", Design Principles for the Immune System and Other Distributed Autonomous Systems, ( June 14, 2001)

S Forrest, S Hofmeyr, "Immunology as Information Processing", Design Principles for the Immune Systems and other Distributed Autonomous Systems, ( December 31, 1969)

Presentation/Talks

Yili Zheng, Filip Blagojevic, Dan Bonachea, Paul H. Hargrove, Steven Hofmeyr, Costin Iancu, Seung-Jai Min, Katherine Yelick, Getting Multicore Performance with UPC, SIAM Conference on Parallel Processing for Scientific Computing, February 2010,

Steven Hofmeyr, New approaches to security: lessons from nature, CSI Netsec, June 1, 2005,

Reports

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Programmer's Guide, v1.0-2019.3.0", Lawrence Berkeley National Laboratory Tech Report, March 15, 2019, LBNL 2001191, doi: 10.25344/S4F301

UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
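
A minimal sketch of this model, using a hypothetical example rather than one taken from the guide: rank 0 allocates an array in its shared segment, every process performs an explicit, asynchronous remote put into its own slot, and completion is requested with wait().

```cpp
// SPMD PGAS sketch: shared-segment allocation plus explicit asynchronous remote access.
#include <upcxx/upcxx.hpp>
#include <cstdio>

int main() {
  upcxx::init();
  int n = upcxx::rank_n();

  // Rank 0 allocates n ints in its shared segment; the pointer starts null elsewhere.
  upcxx::global_ptr<int> data;
  if (upcxx::rank_me() == 0) data = upcxx::new_array<int>(n);
  // Make the global pointer known to every process.
  data = upcxx::broadcast(data, 0).wait();

  // Each process writes its own slot; the remote access is explicit and asynchronous.
  upcxx::rput(upcxx::rank_me(), data + upcxx::rank_me()).wait();
  upcxx::barrier();

  if (upcxx::rank_me() == 0) {
    int *local = data.local();          // the segment is local on rank 0
    for (int i = 0; i < n; i++) std::printf("slot %d = %d\n", i, local[i]);
    upcxx::delete_array(data);
  }
  upcxx::finalize();
  return 0;
}
```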

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Specification v1.0, Draft 10", Lawrence Berkeley National Laboratory Tech Report, March 15, 2019, LBNL 2001192, doi: 10.25344/S4JS30

UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, that enable the programmer to express ownership information for improving locality, one-sided communication, both put/get and RPC, futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.
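
For example, a small DAG of dependent operations can be expressed by conjoining futures and attaching a continuation. The helper below is a hypothetical fragment, not text from the specification; it assumes upcxx::init() has been called and that gp_a and gp_b were set up elsewhere to point at remote integers.

```cpp
// Chaining asynchronous operations: two independent gets feed one continuation.
#include <upcxx/upcxx.hpp>

upcxx::future<int> sum_remote(upcxx::global_ptr<int> gp_a,
                              upcxx::global_ptr<int> gp_b) {
  // The gets may complete in any order; the continuation runs only once both
  // high-latency dependencies are satisfied.
  upcxx::future<int> fa = upcxx::rget(gp_a);
  upcxx::future<int> fb = upcxx::rget(gp_b);
  return upcxx::when_all(fa, fb).then([](int a, int b) { return a + b; });
}
```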

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Specification v1.0, Draft 8", Lawrence Berkeley National Laboratory Tech Report, September 26, 2018, LBNL 2001179, doi: 10.25344/S45P4X

UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, that enable the programmer to express ownership information for improving locality, one-sided communication, both put/get and RPC, futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Programmer's Guide, v1.0-2018.9.0", Lawrence Berkeley National Laboratory Tech Report, September 26, 2018, LBNL 2001180, doi: 10.25344/S49G6V

UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

J Bachan, S Baden, D Bonachea, PH Hargrove, S Hofmeyr, K Ibrahim, M Jacquelin, A Kamil, B van Straalen, "UPC++ Programmer’s Guide, v1.0-2018.3.0", March 31, 2018, LBNL 2001136, doi: 10.2172/1430693

UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

J Bachan, S Baden, D Bonachea, P Hargrove, S Hofmeyr, K Ibrahim, M Jacquelin, A Kamil, B Lelbach, B van Straalen, "UPC++ Specification v1.0, Draft 6", March 26, 2018, LBNL 2001135, doi: 10.2172/1430689

UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, that enable the programmer to express ownership information for improving locality, one-sided communication, both put/get and RPC, futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.

J Bachan, S Baden, D Bonachea, M Jacquelin, P Hargrove, S Hofmeyr, A Kamil, "Performance and Implementation of UPC++ - A C++ Library for PGAS Programming", 2018,

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Programmer’s Guide, v1.0-2017.9", Lawrence Berkeley National Laboratory Tech Report, September 29, 2017, LBNL 2001065, doi: 10.2172/1398522

This document has been superseded by: UPC++ Programmer’s Guide, v1.0-2018.3.0 (LBNL-2001136)

UPC++ is a C++11 library that provides Asynchronous Partitioned Global Address Space (APGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The APGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, APGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

J Bachan, S Baden, D Bonachea, P Hargrove, S Hofmeyr, K Ibrahim, M Jacquelin, A Kamil, B Lelbach, B van Straalen, "UPC++ Specification v1.0, Draft 4", September 27, 2017, LBNL 2001066, doi: 10.2172/1398521

This document has been superseded by: UPC++ Specification v1.0, Draft 6 (LBNL-2001135)

UPC++ is a C++11 library providing classes and functions that support Asynchronous Partitioned Global Address Space (APGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, that enable the programmer to express ownership information for improving locality, one-sided communication, both put/get and RPC, futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.

G Eads, JA Colmenares, S Hofmeyr, S Bird, D Bartolini, D Chou, B Gluzman, K Asanovic, JD Kubiatowicz, "Building an Adaptive Operating System for Predictability and Efficiency", 2014,

P Beckman, R Brightwell, BR de Supinski, M Gokhale, S Hofmeyr, S Krishnamoorthy, M Lang, B Maccabe, J Shalf, M Snir, "Exascale Operating Systems and Runtime Software Report", 2012,

B Edwards, S Hofmeyr, G Stelle, S Forrest, "Internet Topology over Time",

Posters

Scott B. Baden, Paul H. Hargrove, Hadia Ahmed, John Bachan, Dan Bonachea, Steve Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "Pagoda: Lightweight Communications and Global Address Space Support for Exascale Applications - UPC++", Poster at Exascale Computing Project (ECP) Annual Meeting 2019, January 2019,

Scott B. Baden, Paul H. Hargrove, Hadia Ahmed, John Bachan, Dan Bonachea, Steve Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ and GASNet-EX: PGAS Support for Exascale Applications and Runtimes", The International Conference for High Performance Computing, Networking, Storage and Analysis (SC'18), November 13, 2018,

Lawrence Berkeley National Lab is developing a programming system to support HPC application development using the Partitioned Global Address Space (PGAS) model. This work is driven by the emerging need for adaptive, lightweight communication in irregular applications at exascale. We present an overview of UPC++ and GASNet-EX, including examples and performance results.

GASNet-EX is a portable, high-performance communication library, leveraging hardware support to efficiently implement Active Messages and Remote Memory Access (RMA). UPC++ provides higher-level abstractions appropriate for PGAS programming such as: one-sided communication (RMA), remote procedure call, locality-aware APIs for user-defined distributed objects, and robust support for asynchronous execution to hide latency. Both libraries have been redesigned relative to their predecessors to meet the needs of exascale computing. While both libraries continue to evolve, the system already demonstrates improvements in microbenchmarks and application proxies.

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ and GASNet: PGAS Support for Exascale Apps and Runtimes", Poster at Exascale Computing Project (ECP) Annual Meeting 2018., February 2018,

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian Van Straalen, "UPC++: a PGAS C++ Library", ACM/IEEE Conference on Supercomputing, SC'17, November 2017,

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ and GASNet: PGAS Support for Exascale Apps and Runtimes", Poster at Exascale Computing Project (ECP) Annual Meeting 2017., January 2017,

Others

S Hofmeyr, Why today’s security technologies are so inadequate: History, implications, and new approaches, Information Security Management Handbook, Sixth Edition, Pages: 2623--2627, 2007,

S. Hofmeyr, Stopping the slew of mutating malware, Data Management and Storage Technology Review, July 1, 2005,

S. Hofmeyr, Implementing security on server blades, Blade Letter, June 1, 2005,

S. Hofmeyr, Can IT defenses work like the body's?, Security Management, September 1, 2004,

S. Hofmeyr, Who says biology need be destiny?, CNET News, April 13, 2004,

S. Hofmeyr, Technology can learn a lot from biology, Silicon Valley Bizink, December 26, 2003,

S. Hofmeyr, Forward thinking, Security Products, June 1, 2002,