Scientific Data Management Research

Suren Byna

Senior Scientist
Phone: +1 510 495 8136
Mobile: +1 510 486 4004

Suren Byna is a Senior Scientist in the Scientific Data Management Group at Lawrence Berkeley National Laboratory (LBNL). His research interests are in scientific data management, with a focus on parallel I/O, data management systems for scientific data, and heterogeneous computing. He is the PI of the ECP-funded ExaIO project and of the ASCR-funded object-centric data management systems (Proactive Data Containers, PDC) and experimental and observational data management (EOD-HDF5) projects.

Before joining LBNL in November 2010, Suren was a researcher at NEC Labs America, where he was part of the Computer Systems Architecture Department (now the Integrated Systems Department) and worked on the Heterogeneous Cluster Computing project. Prior to that, he was a Research Assistant Professor in the Department of Computer Science at Illinois Institute of Technology (IIT), a Guest Researcher at the Mathematics and Computer Science Division of Argonne National Laboratory, and a Faculty Member of the Scalable Computing Software Laboratory at IIT. He received his Master's and Ph.D. degrees in Computer Science from Illinois Institute of Technology, Chicago.

» Visit Suren Byna's personal web page

Projects and Selected Publications

SDS: Scientific Data Services Framework

[HPDC 2017] [HPDC 2016] [PDSW 2015] [BigData 2015] [Cluster 2014] [SIGMOD 2014]
[HPDIC 2014 w/ IPDPS] [PDSW 2013 (SC13)] [BigData 2013] [CCGrid 2012] [HPCDB 2011]

ExaHDF5: Advancing HPC I/O to Enable Scientific Discovery

[CUG 2017] [IPDPS 2016] [CCGrid 2016] [CUG 2016 - ACB] [CUG 2016 - LIOProf]
[SC15] [PDSW 2015] [PMBS 2015] [Cluster 2015] [HPDC 2015] [CUG 2015] [PDSW 2014]
[HPDC 2014] [WSSSPE2 (SC14)] [SC13] [HPDC 2013] [CUG 2013] [SC12] [XLDB 2012]

Proactive Data Containers (PDC)

[ICDE 2018] [CCGrid 2018] [IC2E 2018] [Cluster 2017] [HiPC 2016]

In situ AMR Indexing & Querying

[ICPP 2016] [Big Data 2016] [CCGrid 2016 - Layout] [CCGrid 2016 - AMRZone] [HPC 2016 - Best Paper] [CCGrid 2015]

Holistic Parallel I/O Characterization

[PDSW 2017] [CUG 2017 - KNL vs. Haswell IO] [CUG 2017 - DXT]

SDAV

[ICPADS 2014] [SSDBM 2013] [CCGrid 2012]

Climate Data Analysis

[ASCMO 2015] [CAIP 2015] [AGU Fall Meeting 2014] [AGU Fall Meeting 2013 - Stat] [AGU Fall Meeting 2013 - TECA]
[IS&T/SPIE 2013] [ICAP 2012] [WCRP 2012] [DMESS 2012] [AGU Fall 2011] [PDAC 2011]

Energy-aware Computing & I/O

[CUG 2015] [ICPP 2011]

Conference Papers

Benjamin A. Brock, Yuxin Chen, Jiakun Yan, John Owens, Aydın Buluç, Katherine Yelick, "RDMA vs. RPC for implementing distributed data structures", 2019 IEEE/ACM 9th Workshop on Irregular Applications: Architectures and Algorithms (IA3), Denver, CO, USA, IEEE, November 18, 2019, pp. 17-22, doi: 10.1109/IA349570.2019.00009.

Distributed data structures are key to implementing scalable applications for scientific simulations and data analysis. In this paper we look at two implementation styles for distributed data structures: remote direct memory access (RDMA) and remote procedure call (RPC). We focus on operations that require individual accesses to remote portions of a distributed data structure, e.g., accessing a hash table bucket or distributed queue, rather than global operations in which all processors collectively exchange information. We look at the trade-offs between the two styles through microbenchmarks and a performance model that approximates the cost of each. The RDMA operations have direct hardware support in the network and therefore lower latency and overhead, while the RPC operations are more expressive but higher cost and can suffer from lack of attentiveness from the remote side. We also run experiments to compare the real-world performance of RDMA- and RPC-based data structure operations with the predicted performance to evaluate the accuracy of our model, and show that while the model does not always precisely predict running time, it allows us to choose the best implementation in the examples shown. We believe this analysis will assist developers in designing data structures that will perform well on current network architectures, as well as network architects in providing better support for this class of distributed data structures.
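The latency trade-off described in the abstract can be illustrated with a small analytical sketch. The C++ program below is not the performance model from the paper; it is a toy model under assumed placeholder parameters (net_latency_us, rdma_overhead_us, rpc_overhead_us, attentiveness_delay_us are illustrative values, not measurements).

// Illustrative cost-model sketch for per-operation latency of RDMA vs. RPC
// accesses to a remote data structure (e.g., one hash-table bucket lookup).
// All parameter values are assumptions chosen for illustration only.
#include <cstdio>

struct CostParams {
    double net_latency_us;          // one-way network latency (assumed)
    double rdma_overhead_us;        // NIC/hardware overhead per one-sided op (assumed)
    double rpc_overhead_us;         // software overhead to marshal/dispatch an RPC (assumed)
    double attentiveness_delay_us;  // expected wait until the remote CPU services the RPC (assumed)
};

// One-sided RDMA read/write: round trip plus hardware overhead; no remote CPU involved.
double rdma_cost(const CostParams& p) {
    return 2.0 * p.net_latency_us + p.rdma_overhead_us;
}

// RPC: round trip plus software overhead on both ends plus the time spent
// waiting for the remote process to notice and run the handler.
double rpc_cost(const CostParams& p) {
    return 2.0 * p.net_latency_us + 2.0 * p.rpc_overhead_us + p.attentiveness_delay_us;
}

int main() {
    CostParams p{1.5, 0.5, 2.0, 5.0};  // microseconds; placeholder values
    std::printf("modeled RDMA op: %.1f us\n", rdma_cost(p));
    std::printf("modeled RPC  op: %.1f us\n", rpc_cost(p));
    // A logical operation that needs k dependent remote accesses may favor RPC
    // (one handler performs all k steps remotely) even when a single access
    // favors RDMA; comparing k * rdma_cost(p) with rpc_cost(p) captures that
    // crossover in this toy model.
    return 0;
}

In this sketch the crossover between the two styles shifts with how attentive the remote side is, which mirrors the abstract's observation that RPC operations can suffer when the target process is not servicing requests promptly.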

Reports

Luke J. Gosink, "Bin-hash indexing: A parallel method for fast query processing", 2008, LBNL 729E.