
Mehmet Balman

Previous member, Affiliate (Scientific Networking Division)

Biographical Sketch

Mehmet is a former computer engineer in the Scientific Data Management Group and currently an affiliate with the Scientific Networking Division. His recent work deals with performance problems in high-bandwidth networks, efficient data transfer mechanisms and data streaming, high-performance network protocols, bandwidth reservations, virtual circuits, and data transfer scheduling for large-scale applications. He received his doctoral degree in computer science from Louisiana State University (LSU) in 2010. Before joining LSU, he gained several years of industrial experience as a system administrator and R&D specialist at various software companies. He also worked as a summer intern at Los Alamos National Laboratory. During his studies at LSU, he worked as a teaching assistant in the Department of Computer Science and as a research assistant in the Center for Computation & Technology (CCT).

NDM (Network-aware Data Management Workshop @SC)

Education

Ph.D. (2010), M.S. (2008) Louisiana State University, Baton Rouge, LA
M.S. (2006), B.S. (2000) Bogazici University, Istanbul, Turkiye

Selected Publications

  1. M. Balman, Advance Resource Provisioning in Bulk Data Scheduling. In Proceedings of the 27th IEEE International Conference on Advanced Information Networking and Applications (AINA), 2013.
  2. M. Balman, Streaming exa-scale data over 100Gbps networks, IEEE Computing Now Magazine, Oct 2012.
  3. M. Balman, E. Pouyoul, Y. Yao, E. W. Bethel, B. Loring, Prabhat, J. Shalf, A. Sim, B. L. Tierney, Experiences with 100Gbps Network Applications. In Proc. of the 5th Int. Workshop on Data-Intensive Distributed Computing, in conjunction with HPDC'12, 2012.
  4. M. Balman and S. Byna, Open Problems in Network-aware Data Management in Exa-scale Computing and Terabit Networking Era. In Proc. of the 1st Int. Workshop on Network-aware Data Management, in conjunction with SC'11, 2011.
  5. T. Kosar, M. Balman, E. Yildirim, S. Kulasekaran, B. Ross, Stork Data Scheduler: Mitigating the Data Bottleneck in e-Science, Philosophical Transactions of the Royal Society A, Vol. 369 (2011), pp. 3254-3267.
  6. T. Kosar, I. Akturk, M. Balman, X. Wang, PetaShare: A Reliable, Efficient, and Transparent Distributed Storage Management System, Scientific Programming, Vol. 19, No. 1 (2011), pp. 27-43.
  7. M. Balman and T. Kosar, Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling, International Journal of Autonomic Computing, Vol. 1, No. 4 (2010), pp. 425-446, DOI: 10.1504/IJAC.2010.037516.
  8. M. Balman, E. Chaniotakis, A. Shoshani, A. Sim, A Flexible Reservation Algorithm for Advance Network Provisioning. In Proceedings of the ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis (SC'10), 2010.
  9. M. Balman and T. Kosar, Dynamic Adaptation of Parallelism Level in Data Transfer Scheduling. In Proc. of the Int. Workshop on Adaptive Systems in Heterogeneous Environments, in conjunction with IEEE CISIS'09 and IEEE ARES'09, 2009.
  10. M. Balman, Tetrahedral Mesh Refinement in Distributed Environments. In Proceedings of the IEEE International Conference on Parallel Processing Workshops, IEEE Computer Society, 2006, pp. 497-504, ISBN: 0-7695-2637-3.

» Visit Mehmet's personal web page.

Journal Articles

Nathan Hanford, Vishal Ahuja, Mehmet Balman, Matthew Farrens, Dipak Ghosal, Eric Pouyoul, Brian Tierney, "Improving Network Performance on Multicore Systems: Impact of Core Affinities on High Throughput Flows", Future Generation Computer Systems, The International Journal of eScience, Elsevier, 2015, doi: 10.1016/j.future.2015.09.012

Network throughput is scaling-up to higher data rates while end-system processors are scaling-out to multiple cores. In order to optimize high speed data transfer into multicore end-systems, techniques such as network adaptor offloads and performance tuning have received a great deal of attention. Furthermore, several methods of multi-threading the network receive process have been proposed. However, thus far attention has been focused on how to set the tuning parameters and which offloads to select for higher performance, and little has been done to understand why the various parameter settings do (or do not) work. In this paper, we build on previous research to track down the sources of the end-system bottleneck for high-speed TCP flows. We define protocol processing efficiency to be the amount of system resources (such as CPU and cache) used per unit of achieved throughput (in Gbps). The amounts of the various system resources consumed are measured using low-level system event counters. In a multicore end-system, affinitization, or core binding, is the decision regarding how the various tasks of the network receive process (interrupt, network, and application processing) are assigned to the different processor cores. We conclude that affinitization has a significant impact on protocol processing efficiency, and that the performance bottleneck of the network receive process changes significantly with different affinitizations.
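The "protocol processing efficiency" metric defined in the abstract can be illustrated with a minimal sketch; the counter values and scenario names below are invented for the example, not measurements from the paper.

```python
# Hypothetical illustration of protocol processing efficiency:
# system resources consumed per Gbps of achieved throughput (lower is better).

def protocol_processing_efficiency(resource_used: float, throughput_gbps: float) -> float:
    """Resources (e.g. CPU-seconds or cache misses) per Gbps achieved."""
    if throughput_gbps <= 0:
        raise ValueError("throughput must be positive")
    return resource_used / throughput_gbps

# Two made-up affinitization scenarios for the same transfer:
same_core  = protocol_processing_efficiency(resource_used=8.0, throughput_gbps=20.0)
diff_cores = protocol_processing_efficiency(resource_used=6.0, throughput_gbps=35.0)
# diff_cores < same_core: fewer resources spent per Gbps delivered.
```

In the paper the resource amounts come from low-level system event counters; here they are placeholder numbers.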

Brian Tierney, Mehmet Balman, Cees de Laat, "Special section on high-performance networking for distributed data-intensive science", Future Generation Computer Systems, The International Journal of eScience, Elsevier, 2015, doi: 10.1016/j.future.2015.10.006

T. Kosar, M. Balman, E. Yildirim, S. Kulasekaran, B. Ross, "Stork Data Scheduler: Mitigating the Data Bottleneck in e-Science", Philosophical Transactions of the Royal Society A, Vol. 369 (2011), pp. 3254-3267, July 18, 2011, doi: 10.1098/rsta.2011.0148

In this paper, we present the Stork data scheduler as a solution for mitigating the data bottleneck in e-Science and data-intensive scientific discovery. Stork focuses on planning, scheduling, monitoring and management of data placement tasks and application-level end-to-end optimization of networked inputs/outputs for petascale distributed e-Science applications. Unlike existing approaches, Stork treats data resources and the tasks related to data access and movement as first-class entities just like computational resources and compute tasks, and not simply the side-effect of computation. Stork provides unique features such as aggregation of data transfer jobs considering their source and destination addresses, and an application-level throughput estimation and optimization service. We describe how these two features are implemented in Stork and their effects on end-to-end data transfer performance.

T. Kosar, I. Akturk, M. Balman, X. Wang, "PetaShare: A Reliable, Efficient, and Transparent Distributed Storage Management System", Scientific Programming, Vol. 19, No. 1 (January 2011), pp. 27-43, 2011,

Modern collaborative science has placed an increasing burden on data management infrastructure to handle the increasingly large data archives generated. Besides functionality, reliability and availability are also key factors in delivering a data management system that can efficiently and effectively meet the challenges posed and compounded by the unbounded increase in the size of data generated by scientific applications. We have developed a reliable and efficient distributed data storage system, PetaShare, which spans multiple institutions across the state of Louisiana. At the back-end, PetaShare provides a unified name space and efficient data movement across geographically distributed storage sites. At the front-end, it provides light-weight clients that enable easy, transparent and scalable access. In PetaShare, we have designed and implemented an asynchronously replicated multi-master metadata system for enhanced reliability and availability, and an advanced buffering system for improved data transfer performance. In this paper, we present the details of our design and implementation, show performance results, and describe our experience in developing a reliable and efficient distributed data management system for data-intensive science.

Mehmet Balman, Tevfik Kosar, "Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling", International Journal of Autonomic Computing, Vol. 1, No. 4 (2010), pp. 425-446, 2010, doi: 10.1504/IJAC.2010.037516

Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures that are technically untraceable by users. Error messages are not logged efficiently, and are sometimes not relevant or useful from the users' point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. Well-defined error detection and error reporting methods are necessary to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques, and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.
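The failure-aware life cycle described above hinges on separating transient errors (worth retrying) from permanent ones (worth reporting). A minimal sketch of such a classifier, with error categories and message substrings that are assumptions for illustration, not the paper's actual taxonomy:

```python
# Hypothetical error classifier for a data transfer scheduler.
# The substrings below are invented examples, not a real protocol's messages.

TRANSIENT = {"connection reset", "timeout", "temporary failure", "host unreachable"}
PERMANENT = {"permission denied", "no such file", "disk quota exceeded"}

def classify_error(message: str) -> str:
    msg = message.lower()
    if any(p in msg for p in PERMANENT):
        return "permanent"   # report to the user; retrying cannot help
    if any(t in msg for t in TRANSIENT):
        return "transient"   # the scheduler may retry, e.g. with backoff
    return "unknown"         # conservative default: surface for inspection

def should_retry(message: str, attempt: int, max_retries: int = 3) -> bool:
    """Retry only transient failures, up to a bounded number of attempts."""
    return classify_error(message) == "transient" and attempt < max_retries
```

A higher-level planner would feed each failure through such a classifier before deciding whether to reschedule the transfer or escalate.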

Conference Papers

N. Hanford, V. Ahuja, M. Farrens, D. Ghosal, M. Balman, E. Pouyoul, B. Tierney, "Analysis of the effect of core affinity on high-throughput flows", NDM'14, ACM, 2014, doi: 10.1109/NDM.2014.10

Network throughput is scaling-up to higher data rates while end-system processors are scaling-out to multiple cores. In order to optimize high speed data transfer into multicore end-systems, techniques such as network adapter offloads and performance tuning have received a great deal of attention. Furthermore, several methods of multithreading the network receive process have been proposed. However, thus far attention has been focused on how to set the tuning parameters and which offloads to select for higher performance, and little has been done to understand why the settings do (or do not) work. In this paper we build on previous research to track down the source(s) of the end-system bottleneck for high-speed TCP flows. For the purposes of this paper, we consider protocol processing efficiency to be the amount of system resources used (such as CPU and cache) per unit of achieved throughput (in Gbps). The amounts of the various system resources consumed are measured using low-level system event counters. Affinitization, or core binding, is the decision about which processor cores on an end system are responsible for interrupt, network, and application processing. We conclude that affinitization has a significant impact on protocol processing efficiency, and that the performance bottleneck of the network receive process changes drastically with three distinct affinitization scenarios.

N. Hanford, V. Ahuja, M. Farrens, D. Ghosal, M. Balman, E. Pouyoul, B. Tierney, "Impact of the end-system and affinities on the throughput of high-speed flows", ANCS '14: Proceedings of the tenth ACM/IEEE symposium on Architectures for networking and communications systems, ACM, 2014, doi: 10.1145/2658260.2661772

Network throughput is scaling-up to higher data rates while processors are scaling-out to multiple cores. In order to optimize high speed data transfer into multicore end-systems, network adapter offloads and performance tuning have received a great deal of attention. However, much of this attention is focused on how to set the tuning parameters and which offloads to select for higher performance, and not why they do (or do not) work. In this study we have attempted to address two issues that impact data transfer performance. The first is the impact of processor core affinity (or core binding), which determines which processor core or cores handle certain tasks in a network- or I/O-heavy application running on a multicore end-system. The second is the impact of Ethernet pause frames, which provide link-layer flow control in addition to the end-to-end flow control provided by TCP. The goal of our research is to delve deeper into why these tuning suggestions and this offload exist, and how they affect the end-to-end performance and efficiency of a single, large TCP flow.

Nathan Hanford, Vishal Ahuja, Mehmet Balman, Matthew Farrens, Dipak Ghosal, Eric Pouyoul, Brian Tierney, "Characterizing the Impact of End-System Affinities On the End-to-End Performance of High-Speed Flows", SC13 workshop, ACM, 2013, doi: 10.1145/2534695.2534697

Multi-core end-systems use Receive Side Scaling (RSS) to parallelize protocol processing. RSS uses a hash function on the standard flow descriptors and an indirection table to assign incoming packets to receive queues which are pinned to specific cores. This ensures flow affinity in that the interrupt processing of all packets belonging to a specific flow is processed by the same core. A key limitation of standard RSS is that it does not consider the application process that consumes the incoming data in determining the flow affinity. In this paper, we carry out a detailed experimental analysis of the performance impact of the application affinity in a 40 Gbps testbed network with a dual hexa-core end-system. We show, contrary to conventional wisdom, that when the application process and the flow are affinitized to the same core, the performance (measured in terms of end-to-end TCP throughput) is significantly lower than the line rate. Near line rate performance is observed when the flow and the application process are affinitized to different cores belonging to the same socket. Furthermore, affinitizing the application and the flow to cores on different sockets results in significantly lower throughput than the line rate. These results arise due to the memory bottleneck, which is demonstrated using preliminary correlational data on the cache hit rate in the core that services the application process.
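The RSS dispatch mechanism described in the abstract (hash the flow descriptor, index an indirection table, land on a core-pinned queue) can be modeled in a few lines. This is a simplified sketch: real NICs use a Toeplitz hash over the packet's 4-tuple, and the table size and queue count below are made-up values.

```python
# Toy model of Receive Side Scaling (RSS) flow-to-queue dispatch.
import zlib

NUM_QUEUES = 8
# Indirection table: hash bucket -> receive queue (queue i pinned to core i).
indirection_table = [i % NUM_QUEUES for i in range(128)]

def rss_queue(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """Return the receive queue (and hence core) handling this flow."""
    descriptor = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    # crc32 stands in for the NIC's Toeplitz hash in this sketch.
    bucket = zlib.crc32(descriptor) % len(indirection_table)
    return indirection_table[bucket]
```

Because the hash is deterministic, every packet of a given flow maps to the same queue; this is the "flow affinity" the paper examines, and the key limitation is visible here too: the mapping never consults where the consuming application runs.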

Mehmet Balman, "Advance Resource Provisioning in Bulk Data Scheduling", 27th IEEE International Conference on Advanced Information Networking and Applications (AINA), 2013, LBNL 6364E, doi: 10.1109/AINA.2013.5

Today's scientific and business applications generate massive data sets that need to be transferred to remote sites for sharing, processing, and long term storage. Because of increasing data volumes and enhancements in current network technology that provide on-demand high-speed data access between collaborating institutions, data handling and scheduling problems have reached a new scale. In this paper, we present a new data scheduling model with advance resource provisioning, in which data movement operations are defined with earliest start and latest completion times. We analyze the time-dependent resource assignment problem, and propose a new methodology to improve current systems by allowing researchers and higher-level meta-schedulers to use data-placement as-a-service, so they can plan ahead and submit transfer requests in advance. In general, scheduling with time and resource conflicts is NP-hard. We introduce an efficient algorithm to organize multiple requests on the fly, while satisfying users' time and resource constraints. We successfully tested our algorithm in a simple benchmark simulator that we have developed, and demonstrated its performance with initial test results.

Keywords: scheduling with constraints, bulk data movement, time-dependent graphs, network reservation, Gale-Shapley algorithm

Mehmet Balman, Eric Pouyoul, Yushu Yao, E. Wes Bethel, Burlen Loring, Prabhat, John Shalf, Alex Sim, and Brian L. Tierney, "Experiences with 100G Network Applications", In Proceedings of the Fifth International Workshop on Data-Intensive Distributed Computing, in conjunction with the ACM High Performance Distributed Computing (HPDC) Conference, 2012, Delft, Netherlands, June 2012, LBNL 5603E, doi: 10.1145/2286996.2287004

100Gbps networking has finally arrived, and many research and educational institutions have begun to deploy 100Gbps routers and services. ESnet and Internet2 worked together to make 100Gbps networks available to researchers at the Supercomputing 2011 conference in Seattle, Washington. In this paper, we describe two of the first applications to take advantage of this network. We demonstrate a visualization application that enables remotely located scientists to gain insights from large datasets. We also demonstrate climate data movement and analysis over the 100Gbps network. We describe a number of application design issues and host tuning strategies necessary for enabling applications to scale to 100Gbps rates.

Mehmet Balman, Surendra Byna, "Open Problems in Network-aware Data Management in Exa-scale Computing and Terabit Networking Era", In Proceedings of the First International Workshop on Network-Aware Data Management, in conjunction with the ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, 2011, Seattle, WA, November 11, 2011, LBNL 6176E, doi: 10.1145/2110217.2110229

Accessing and managing large amounts of data is a great challenge in collaborative computing environments where resources and users are geographically distributed. Recent advances in network technology led to next-generation high-performance networks, allowing high-bandwidth connectivity. Efficient use of the network infrastructure is necessary in order to address the increasing data and compute requirements of large-scale applications. We discuss several open problems, evaluate emerging trends, and articulate our perspectives in network-aware data management.

Dean N. Williams, Ian T. Foster, Don E. Middleton, Rachana Ananthakrishnan, Neill Miller, Mehmet Balman, Junmin Gu, Vijaya Natarajan, Arie Shoshani, Alex Sim, Gavin Bell, Robert Drach, Michael Ganzberger, Jim Ahrens, Phil Jones, Daniel Crichton, Luca Cinquini, David Brown, Danielle Harper, Nathan Hook, Eric Nienhouse, Gary Strand, Hannah Wilcox, Nathan Wilhelmi, Stephan Zednik, Steve Hankin, Roland Schweitzer, John Harney, Ross Miller, Galen Shipman, Feiyi Wang, Peter Fox, Patrick West, Ann Chervenak, Craig Ward, "Earth System Grid Center for Enabling Technologies (ESG-CET): A Data Infrastructure for Data-Intensive Climate Research", SciDAC Conference, 2011,

Alex Sim, Mehmet Balman, Dean N. Williams, Arie Shoshani, Vijaya Natarajan, "Adaptive Transfer Adjustment in Efficient Bulk Data Transfer Management for Climate Datasets", The 22nd IASTED International Conference on Parallel and Distributed Computing and System, Marina Del Rey, CA, November 20, 2010, LBNL 3985E,

Many scientific applications and experiments, such as high energy and nuclear physics, astrophysics, climate observation and modeling, combustion, nano-scale material sciences, and computational biology, generate extreme volumes of data with a large number of files. These data sources are distributed among national and international data repositories, and are shared by large numbers of geographically distributed scientists. A large portion of the data is frequently accessed, and a large volume of data is moved from one place to another for analysis and storage. A challenging issue in such efforts is the limited network capacity for moving large datasets. A tool that addresses this challenge is the Bulk Data Mover (BDM), a data transfer management tool used in the Earth System Grid (ESG) community. It has been managing massive dataset transfers efficiently in the environment where the network bandwidth is limited. Adaptive transfer adjustment was studied to enhance the BDM to handle significant end-to-end performance changes in the dynamic network environments as well as to control the data transfers for the desired transfer performance. We describe the results from our hands-on data transfer management experience in the climate research community. We study a practical transfer estimation model and state our initial results from the adaptive transfer adjustment methodology. 

Mehmet Balman, Evangelos Chaniotakis, Arie Shoshani, Alex Sim, "A Flexible Reservation Algorithm for Advance Network Provisioning", ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis (SC'10), New Orleans, LA, November 14, 2010, IEEE Computer Society, Washington, DC, USA, ISBN: 978-1-4244-7559-, LBNL 4017E, doi: 10.1109/SC.2010.4

Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish guaranteed bandwidth of secure virtual circuits for a certain bandwidth and length of time. However, users currently cannot inquire about bandwidth availability, nor have alternative suggestions when reservation requests fail. In general, the number of reservation options is exponential with the number of nodes n, and current reservation commitments. We present a novel approach for path finding in time-dependent networks taking advantage of user-provided parameters of total volume and time constraints, which produces options for earliest completion and shortest duration. The theoretical complexity is only O(n²r²) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.
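The core question the abstract poses (when can a given bandwidth be reserved for a given duration, alongside existing commitments?) can be illustrated on a single link. This toy sketch exploits the same observation the paper does, that availability only changes at reservation boundaries; the full algorithm operates on whole network paths, and all capacities and reservations below are hypothetical.

```python
# Toy advance-reservation check on one link with time-dependent availability.
CAPACITY = 100  # link capacity in Gbps (made-up value)

def available(reservations, t, bw):
    """True if `bw` Gbps is free at time t, given (start, end, bw) reservations."""
    used = sum(r_bw for start, end, r_bw in reservations if start <= t < end)
    return CAPACITY - used >= bw

def window_free(reservations, t0, duration, bw):
    """Usage is piecewise constant, so checking t0 and every reservation
    start inside the window [t0, t0 + duration) suffices."""
    points = [t0] + [s for s, e, _ in reservations if t0 < s < t0 + duration]
    return all(available(reservations, t, bw) for t in points)

def earliest_start(reservations, bw, duration):
    """Earliest feasible start; candidates are time 0 and reservation ends."""
    candidates = sorted({0} | {e for _, e, _ in reservations})
    for t0 in candidates:
        if window_free(reservations, t0, duration, bw):
            return t0
    return None  # infeasible within the known horizon
```

For example, with an 80 Gbps reservation on [0, 10) and a 30 Gbps reservation on [10, 20), a 50 Gbps request of duration 5 cannot start before time 10, which is what the scan returns.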

Reports

Mehmet Balman, "Streaming Exascale Data over 100Gbps Networks", IEEE Computing Now, November 8, 2012, LBNL 6173E,

Mehmet Balman, "Analyzing Data Movements and Identifying Techniques for Next-generation High-bandwidth Networks", LBNL Tech Report, 2012, LBNL 6177E,

High-bandwidth networks are poised to provide new opportunities in tackling large data challenges in today's scientific applications. However, increasing the bandwidth is not sufficient by itself; we need careful evaluation of future high-bandwidth networks from the applications' perspective. We have investigated data transfer requirements of climate applications as a typical scientific example and evaluated how the scientific community can benefit from next generation high-bandwidth networks. We develop a new block-based data movement method (in contrast to the current file-based methods) to improve data movement performance and efficiency in moving large scientific datasets that contain many small files. We implemented the new block-based data movement tool, which takes the approach of aggregating files into blocks and providing dynamic data channel management. One of the major obstacles in the use of high-bandwidth networks is the limitation in host system resources. We have conducted a large number of experiments with our new block-based method and with currently available file-based data movement tools. In this white paper, we describe future research problems and challenges for efficient use of next-generation science networks, based on the lessons learnt and the experiences gained with 100Gbps network applications.
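The block-based aggregation idea described above can be sketched minimally: pack many small files into fixed-size blocks carrying (name, offset, length) metadata, so the data channel moves uniform blocks instead of issuing a per-file transfer. This is an illustration of the general technique, not the tool's actual wire format; the block size and field layout are assumptions.

```python
# Sketch of file-into-block aggregation for bulk data movement.
BLOCK_SIZE = 64  # bytes, tiny for the example; real blocks would be megabytes

def pack(files: dict) -> list:
    """files: {name: bytes} -> list of (metadata, payload) blocks."""
    blocks, meta, payload = [], [], bytearray()
    for name, data in files.items():
        pos = 0
        while pos < len(data):
            room = BLOCK_SIZE - len(payload)
            chunk = data[pos:pos + room]
            meta.append((name, pos, len(chunk)))  # where this chunk belongs
            payload += chunk
            pos += len(chunk)
            if len(payload) == BLOCK_SIZE:        # block full: emit it
                blocks.append((meta, bytes(payload)))
                meta, payload = [], bytearray()
    if payload:                                   # flush the partial last block
        blocks.append((meta, bytes(payload)))
    return blocks

def unpack(blocks) -> dict:
    """Reassemble files from blocks, in any arrival order per block list."""
    out = {}
    for meta, payload in blocks:
        cursor = 0
        for name, offset, length in meta:
            buf = out.setdefault(name, bytearray())
            if len(buf) < offset + length:
                buf.extend(b"\0" * (offset + length - len(buf)))
            buf[offset:offset + length] = payload[cursor:cursor + length]
            cursor += length
    return {name: bytes(buf) for name, buf in out.items()}
```

Because every block has the same size, the sender can keep a fixed pool of concurrent data channels busy regardless of how small the individual files are, which is the motivation the white paper gives for moving away from file-centric transfers.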

M. Balman, A. Sim, "Scaling the Earth System Grid to 100Gbps Networks", 2012, LBNL 5794E,

M. Balman, E. Chaniotakis, A. Shoshani, A. Sim, "A New Approach in Advance Network Reservation and Provisioning for High-Performance Scientific Data Transfers", 2010, LBNL 4091E,

Posters

Mehmet Balman, "MemzNet: Memory-Mapped Zero-copy Network Channel for Moving Large Datasets over 100Gbps Networks", technical poster in ACM/IEEE international Conference For High Performance Computing, Networking, Storage and Analysis (SC'12), LBNL 6175E, November 13, 2012, doi: http://doi.ieeecomputersociety.org/10.1109/SC.Companion.2012.294

High-bandwidth networks are poised to provide new opportunities in tackling large data challenges in today's scientific applications. However, increasing the bandwidth is not sufficient by itself; we need careful evaluation of future high-bandwidth networks from the applications' perspective. We have experimented with current state-of-the-art data movement tools, and realized that file-centric data transfer protocols do not perform well with managing the transfer of many small files in high-bandwidth networks, even when using parallel streams or concurrent transfers. We require enhancements in current middleware tools to take advantage of future networking frameworks. To improve performance and efficiency, we develop an experimental prototype, called MemzNet: Memory-mapped Zero-copy Network Channel, which uses a block-based data movement method in moving large scientific datasets. We have implemented MemzNet, which takes the approach of aggregating files into blocks and providing dynamic data channel management. In this work, we present our initial results in 100Gbps networks.