
Steven Hofmeyr

Phone: +1 510 486 4505
Fax: +1 510 486 6900

Research Interests

  • Large-scale, high-performance genome assembly.
  • Modeling of complex systems, such as the Internet and financial markets.
  • Information security, particularly the impact of policies and regulations, and the large-scale dynamic interplay of chronic attack and defense.
  • New operating systems for the many-core era.
  • Scheduling and load-balancing for parallel applications.

Talks and panels

  • Fighting unknown attacks (panel). eSecureLive. May 2005.
  • The computer security arms race. Systems and Software Technology Conference. April 2005.
  • The future of security (panel). eFinancial WorldExpo. November 2004.
  • Reflections on adaptable, host-based intrusion prevention systems. Adaptive and Resilient Computer Security Workshop at the Santa Fe Institute. November 2004.
  • What’s next: new threats and new answers (panel). eWeek Security Summit. September 2004.
  • Acting in milliseconds: why defense processes have to change (panel). Blackhat Briefings. July 2004.
  • Policy-based security: automating intrusion response (panel). eWeek Security Summit. May 2004.
  • Immune system analogues approach. JET Roadmap Workshop. April 2004.
  • Adaptive intrusion protection. RSA Conference. February 2004.
  • Emulating the human biosystem to meet cyber security challenges (panel). RSA Conference. February 2004.
  • Attaining security currency: operational, risk, and cost efficiency approaches to patch management (panel). RSA Conference. February 2004.
  • False positives, intrusion detection, and robust system design. Security and Privacy Vanguard Conference. February 2004.
  • Preventing intrusions and tolerating false positives. BlackHat Windows Security Briefings. January 2004.
  • Keynote: Damage-response systems. Adaptive and Resilient Computing Security Workshop, Santa Fe Institute. November 2003.
  • Change management: secure your applications and live through it. CSI NetSec. November 2003.
  • Growing a venture-backed security company from New Mexico origins. Coronado Ventures Forum. May 2003.
  • The solution for today’s security threats? The world’s oldest security system. RSA Conference. April 2003.
  • Distributed security: lessons from nature. O’Reilly Emerging Technology Conference. May 2002.
  • How to feel secure in an insecure world (panel). PCForum. March 2002.
  • Is wireless an enabler for global security? (panel) Wireless Systems Design Conference. February 2002.

Journal Articles

Khaled Z. Ibrahim, Steven Hofmeyr, Costin Iancu, "The Case for Partitioning Virtual Machines on Manycore Architectures", IEEE TPDS, April 17, 2014.

S. Hofmeyr, J. Colmenares, J. Kubiatowicz, C. Iancu, "Juggle: Addressing Extrinsic Load Imbalances in SPMD Applications on Multicore Computers", Cluster Computing, 2012.

S. Hofmeyr, "The information security technology arms race", Crosstalk: The Journal of Defense Software Engineering, October 1, 2005.

S. Hofmeyr, "Host intrusion detection: part of the operating system or on top of the operating system", Computers & Security, February 1, 2005.

S. Hofmeyr, "The implications of immunology for secure systems design", Computers & Security, September 1, 2004.

S. Hofmeyr, "A new approach to security: learning from immunology", Information Systems Security, September 1, 2003.

S. Hofmeyr, "Why today's security technologies are so inadequate: history, implications and new approaches", Information Systems Security, January 1, 2003.

S. Forrest, S. Hofmeyr, "Engineering an immune system", Graft, June 1, 2001.

S. Hofmeyr, S. Forrest, "Architecture for an artificial immune system", Evolutionary Computation, December 1, 2000.

S. Hofmeyr, A. Somayaji, S. Forrest, "Intrusion detection using sequences of system calls", Journal of Computer Security, June 1, 1998.

A. P. Kosoresow, S. Hofmeyr, S. Forrest, "Intrusion detection via system call traces", IEEE Software, September 1, 1997.

S. Forrest, S. Hofmeyr, A. Somayaji, "Computer immunology", Communications of the ACM, January 1, 1997.

Conference Papers

John Bachan, Dan Bonachea, Paul H Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian Van Straalen, Scott Baden, "The UPC++ PGAS library for exascale computing", PAW 2017: 2nd Annual PGAS Applications Workshop - Held in conjunction with SC 2017, November 12, 2017, doi: 10.1145/3144779.3169108

We describe UPC++ V1.0, a C++11 library that supports APGAS programming. UPC++ targets distributed data structures where communication is irregular or fine-grained. The key abstractions are global pointers, asynchronous programming via RPC, and futures. Global pointers incorporate ownership information useful in optimizing for locality. Futures capture data readiness state, are useful for scheduling and also enable the programmer to chain operations to execute asynchronously as high-latency dependencies become satisfied, via continuations. The interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and closely resemble those used in modern C++. Communication in UPC++ runs at close to hardware speeds by utilizing the low-overhead GASNet-EX communication library.

Marquita Ellis, Evangelos Georganas, Rob Egan, Steven Hofmeyr, Aydin Buluc, Brandon Cook, Leonid Oliker, Katherine Yelick, "Performance characterization of de novo genome assembly on leading parallel systems", Euro-Par: International European Conference on Parallel and Distributed Computing, 2017.

Evangelos Georganas, Aydın Buluç, Jarrod Chapman, Steven Hofmeyr, Chaitanya Aluru, Rob Egan, Leonid Oliker, Daniel Rokhsar, Katherine Yelick, "HipMer: An Extreme-Scale De Novo Genome Assembler", Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis (SC), November 19, 2015.

Benjamin Edwards, Steven Hofmeyr, Stephanie Forrest, "Hype and heavy tails: A closer look at data breaches", Workshop on the Economics of Information Security (WEIS), 2015.

Benjamin Edwards, Steven Hofmeyr, Stephanie Forrest, Michel Van Eeten, "Analyzing and modeling longitudinal security data: Promise and pitfalls", Proceedings of the 31st Annual Computer Security Applications Conference, 2015, 391--400.

Juan A. Colmenares, Gage Eads, Steven Hofmeyr, Sarah Bird, Miquel Moretó, David Chou, Brian Gluzman, Eric Roman, Davide B. Bartolini, Nitesh Mor, et al., "Tessellation: refactoring the OS around explicit resource containers with continuous adaptation", Proceedings of the 50th Annual Design Automation Conference, 2013, 76.

Steven Hofmeyr, Tyler Moore, Stephanie Forrest, Benjamin Edwards, George Stelle, "Modeling internet-scale policies for cleaning up malware", Economics of Information Security and Privacy III, Springer New York, 2013, 149--170.

Benjamin Edwards, Tyler Moore, George Stelle, Steven Hofmeyr, Stephanie Forrest, "Beyond the blacklist: modeling malware spread and the effect of interventions", Proceedings of the 2012 Workshop on New Security Paradigms, January 1, 2012, 53--66.

S. Hofmeyr, T. Moore, S. Forrest, B. Edwards, G. Stelle, "Modeling Internet scale policies for cleaning up malware", Workshop on the Economics of Information Security (WEIS 2011), June 14, 2011.

Steven A. Hofmeyr, Juan A. Colmenares, Costin Iancu, John Kubiatowicz, "Juggle: proactive load balancing on multicore computers", High-Performance Parallel and Distributed Computing (HPDC), 2011.

Khaled Z. Ibrahim, S. Hofmeyr, Eric Roman, "Optimized Pre-Copy Live Migration for Memory Intensive Applications", The International Conference for High Performance Computing, Networking, Storage, and Analysis, 2011.

Khaled Z. Ibrahim, S. Hofmeyr, C. Iancu, "Characterizing the Performance of Parallel Applications on Multi-socket Virtual Machines", 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 2011, 1--12.

J. A. Colmenares, S. Bird, H. Cook, P. Pearce, D. Zhu, J. Shalf, S. Hofmeyr, K. Asanovic, J. Kubiatowicz, "Resource Management in the Tessellation Manycore OS", 2nd USENIX Workshop on Hot Topics in Parallelism (HotPar), June 15, 2010.

Steven A. Hofmeyr, Costin Iancu, Filip Blagojevic, "Load balancing on speed", Principles and Practice of Parallel Programming (PPoPP), June 6, 2010.

Costin Iancu, Steven A. Hofmeyr, Filip Blagojevic, Yili Zheng, "Oversubscription on multicore processors", International Parallel & Distributed Processing Symposium (IPDPS), 2010.

R. Liu, K. Klues, S. Bird, S. Hofmeyr, K. Asanovic, J. D. Kubiatowicz, "Tessellation: Space-Time Partitioning in a Manycore Client OS", First USENIX Workshop on Hot Topics in Parallelism (HotPar), June 15, 2009.

S. Forrest, S. Hofmeyr, A. Somayaji, "The evolution of system-call monitoring", Annual Computer Security Applications Conference (ACSAC), August 12, 2008.

Costin Iancu, Steven A. Hofmeyr, "Runtime optimization of vector operations on large scale SMP clusters", Parallel Architectures and Compilation Techniques (PACT), 2008.

S. Hofmeyr, S. Forrest, "Immunity by design: an artificial immune system", GECCO, June 13, 1999.

A. Somayaji, S. Hofmeyr, S. Forrest, "Principles of a computer immune system", New Security Paradigms Workshop (NSPW), September 22, 1998.

S. Hofmeyr, S. Forrest, P. D'haeseleer, "An immunological approach to distributed network intrusion detection", Recent Advances in Intrusion Detection (RAID), September 14, 1998.

S. Forrest, S. Hofmeyr, A. Somayaji, T. A. Longstaff, "A sense of self for UNIX processes", IEEE Symposium on Security and Privacy, May 6, 1996.

Book Chapters

S. Hofmeyr, "An interpretive introduction to the immune system", Design Principles for the Immune System and Other Distributed Autonomous Systems, (June 14, 2001)

S. Forrest, S. Hofmeyr, "Immunology as information processing", Design Principles for the Immune System and Other Distributed Autonomous Systems, (June 14, 2001)

Presentation/Talks

Yili Zheng, Filip Blagojevic, Dan Bonachea, Paul H. Hargrove, Steven Hofmeyr, Costin Iancu, Seung-Jai Min, Katherine Yelick, Getting Multicore Performance with UPC, SIAM Conference on Parallel Processing for Scientific Computing, February 2010.

Steven Hofmeyr, New approaches to security: lessons from nature, CSI NetSec, June 1, 2005.

Reports

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Specification v1.0, Draft 8", Lawrence Berkeley National Laboratory Tech Report, September 26, 2018, LBNL 2001179, doi: 10.25344/S45P4X

UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, which enable the programmer to express ownership information for improving locality; one-sided communication, both put/get and RPC; and futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Programmer's Guide, v1.0-2018.9.0", Lawrence Berkeley National Laboratory Tech Report, September 26, 2018, LBNL 2001180, doi: 10.25344/S49G6V

UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Programmer’s Guide, v1.0-2018.3.0", Lawrence Berkeley National Laboratory Tech Report, March 31, 2018, LBNL 2001136, doi: 10.2172/1430693

UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Bryce Lelbach, Brian van Straalen, "UPC++ Specification v1.0, Draft 6", Lawrence Berkeley National Laboratory Tech Report, March 26, 2018, LBNL 2001135, doi: 10.2172/1430689

UPC++ is a C++11 library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, which enable the programmer to express ownership information for improving locality; one-sided communication, both put/get and RPC; and futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ Programmer’s Guide, v1.0-2017.9", Lawrence Berkeley National Laboratory Tech Report, September 29, 2017, LBNL 2001065, doi: 10.2172/1398522

This document has been superseded by: UPC++ Programmer’s Guide, v1.0-2018.3.0 (LBNL-2001136)

UPC++ is a C++11 library that provides Asynchronous Partitioned Global Address Space (APGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The APGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, APGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Bryce Lelbach, Brian van Straalen, "UPC++ Specification v1.0, Draft 4", Lawrence Berkeley National Laboratory Tech Report, September 27, 2017, LBNL 2001066, doi: 10.2172/1398521

This document has been superseded by: UPC++ Specification v1.0, Draft 6 (LBNL-2001135)

UPC++ is a C++11 library providing classes and functions that support Asynchronous Partitioned Global Address Space (APGAS) programming. We are revising the library under the auspices of the DOE’s Exascale Computing Project, to meet the needs of applications requiring PGAS support. UPC++ is intended for implementing elaborate distributed data structures where communication is irregular or fine-grained. The UPC++ interfaces for moving non-contiguous data and handling memories with different optimal access methods are composable and similar to those used in conventional C++. The UPC++ programmer can expect communication to run at close to hardware speeds. The key facilities in UPC++ are global pointers, which enable the programmer to express ownership information for improving locality; one-sided communication, both put/get and RPC; and futures and continuations. Futures capture data readiness state, which is useful in making scheduling decisions, and continuations provide for completion handling via callbacks. Together, these enable the programmer to chain together a DAG of operations to execute asynchronously as high-latency dependencies become satisfied.

Gage Eads, Juan Colmenares, Steven Hofmeyr, Sarah Bird, Davide Bartolini, David Chou, Brian Gluzman, Krste Asanovic, John D. Kubiatowicz, "Building an Adaptive Operating System for Predictability and Efficiency", 2014.

Benjamin Edwards, Steven Hofmeyr, George Stelle, Stephanie Forrest, "Internet topology over time", arXiv preprint arXiv:1202.3993, January 1, 2012.

Posters

Scott B. Baden, Paul H. Hargrove, Hadia Ahmed, John Bachan, Dan Bonachea, Steve Hofmeyr, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ and GASNet-EX: PGAS Support for Exascale Applications and Runtimes", The International Conference for High Performance Computing, Networking, Storage and Analysis (SC'18), November 13, 2018.

Lawrence Berkeley National Lab is developing a programming system to support HPC application development using the Partitioned Global Address Space (PGAS) model. This work is driven by the emerging need for adaptive, lightweight communication in irregular applications at exascale. We present an overview of UPC++ and GASNet-EX, including examples and performance results.

GASNet-EX is a portable, high-performance communication library, leveraging hardware support to efficiently implement Active Messages and Remote Memory Access (RMA). UPC++ provides higher-level abstractions appropriate for PGAS programming such as: one-sided communication (RMA), remote procedure call, locality-aware APIs for user-defined distributed objects, and robust support for asynchronous execution to hide latency. Both libraries have been redesigned relative to their predecessors to meet the needs of exascale computing. While both libraries continue to evolve, the system already demonstrates improvements in microbenchmarks and application proxies.

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ and GASNet: PGAS Support for Exascale Apps and Runtimes", Poster at Exascale Computing Project (ECP) Annual Meeting 2018, February 2018.

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian Van Straalen, "UPC++: a PGAS C++ Library", ACM/IEEE Conference on Supercomputing (SC'17), November 2017.

John Bachan, Scott Baden, Dan Bonachea, Paul Hargrove, Steven Hofmeyr, Khaled Ibrahim, Mathias Jacquelin, Amir Kamil, Brian van Straalen, "UPC++ and GASNet: PGAS Support for Exascale Apps and Runtimes", Poster at Exascale Computing Project (ECP) Annual Meeting 2017, January 2017.

Others

S. Hofmeyr, Stopping the slew of mutating malware, Data Management and Storage Technology Review, July 1, 2005.

S. Hofmeyr, New approaches to security: lessons from nature, Secure Convergence Journal, June 1, 2005.

S. Hofmeyr, Implementing security on server blades, Blade Letter, June 1, 2005.

S. Hofmeyr, Can IT defenses work like the body's?, Security Management, September 1, 2004.

S. Hofmeyr, Who says biology need be destiny?, CNET News, April 13, 2004.

S. Hofmeyr, Technology can learn a lot from biology, Silicon Valley Bizink, December 26, 2003.

S. Hofmeyr, Forward thinking, Security Products, June 1, 2002.