Oluwamayowa (Mayo) Amusat
Oluwamayowa (Mayo) Amusat is a computational systems engineer in the IDF group, working on the application of optimization and machine learning techniques to the design of advanced water, energy, and scientific systems. Oluwamayowa joined Berkeley Lab in February 2019 as a postdoctoral scholar.
Oluwamayowa's research interests center on the development of numerical optimization, machine learning, and decision-support tools for the improvement of scientific and process systems. Oluwamayowa is part of the IDAES, NAWI, and Science Search projects.
Oluwamayowa received his PhD in Chemical Engineering from University College London, where he was part of the Product and Process Systems Engineering (PPSE) research group. He holds a Bachelor's degree in chemical engineering from Obafemi Awolowo University (Nigeria) and a Master's degree in advanced chemical engineering from the University of Leeds (United Kingdom).
» Visit Oluwamayowa Amusat's personal web page
Devarshi Ghoshal, Drew Paine, Gilberto Pastorello, Abdelrahman Elbashandy, Dan Gunter, Oluwamayowa Amusat, Lavanya Ramakrishnan, "Experiences with Reproducibility: Case Studies from Scientific Workflows", (P-RECS'21) Proceedings of the 4th International Workshop on Practical Reproducible Evaluation of Computer Systems, ACM, June 21, 2021, doi: 10.1145/3456287.3465478
Reproducible research is becoming essential for science, both to ensure transparency and to build trust. Reproducibility also provides the cornerstone for sharing methodology, which can improve efficiency. Although several tools and studies focus on computational reproducibility, we need a better understanding of the gaps, issues, and challenges in enabling reproducibility of scientific results beyond the computational stages of a scientific pipeline. In this paper, we present five case studies that highlight reproducibility needs and challenges under various system and environmental conditions. Through the case studies, we present our experiences in reproducing different types of data and methods that exist in an experimental or analysis pipeline. We examine the human aspects of reproducibility, highlighting what worked, what did not work, and what could have worked better in each case. Our experiences capture a wide range of scenarios and are applicable to a much broader audience aiming to integrate reproducibility into their everyday pipelines.