Programming Model Bake-off Helps Inform Data Analysis Directions in Post-Moore’s Era
Scientific Achievement
Detailed performance analysis of a state-of-the-art, unsupervised-learning graphical model
optimization method reveals new performance insights and contrasts among the OpenMP, threads,
and data-parallel primitives (DPP) programming models [1].
Significance and Impact
All DOE mission science includes an element of data analysis that is challenged by large,
complex data and processor architectures of increasing complexity. This work shows a way to obtain
performance gains now, in a platform-portable way that holds promise for similar performance on
future architectures.
Research Results
A Markov Random Field (MRF) graphical model optimization code is parallelized using OpenMP, threads, and data-parallel primitives.
The performance analysis measures multiple hardware performance counters across multiple platforms.
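To illustrate the distinction the study draws, the sketch below (not the authors' code; names, graph, and the Potts-style energy are illustrative assumptions) contrasts the two formulations of an MRF energy evaluation: an explicit parallel-loop style, as one would write with OpenMP or threads, versus a data-parallel primitives style, where the same computation is expressed as a map over graph edges followed by a reduce.

```python
import numpy as np

def energy_loop(labels, neighbors, weight=1.0):
    """Explicit-loop style (OpenMP/threads): iterate vertices and their
    neighbors, accumulating pairwise disagreement energy (Potts model)."""
    total = 0.0
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            total += weight * (labels[i] != labels[j])
    return total

def energy_dpp(labels, edge_i, edge_j, weight=1.0):
    """DPP style: a 'map' over the edge list producing per-edge
    disagreement flags, followed by a 'reduce' (sum)."""
    disagree = labels[edge_i] != labels[edge_j]   # map
    return weight * np.sum(disagree)              # reduce

# Tiny 4-node chain graph 0-1-2-3, each edge listed once.
labels = np.array([0, 0, 1, 1])
neighbors = [[1], [2], [3], []]
edge_i = np.array([0, 1, 2])
edge_j = np.array([1, 2, 3])
assert energy_loop(labels, neighbors) == energy_dpp(labels, edge_i, edge_j)
```

The DPP form has no explicit loop or thread management: once the computation is phrased as map and reduce primitives, a DPP runtime can schedule it portably across CPUs and accelerators, which is the platform-portability argument the work makes.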
[1] T. Perciano, C. Heinemann, D. Camp, B. Lessley, and E. W. Bethel, "Shared-Memory Parallel Probabilistic Graphical Modeling Optimization: Comparison of Threads, OpenMP, and Data-Parallel Primitives," in High Performance Computing, Cham: Springer, pp. 127-145, Jun. 2020.
@inproceedings{Perciano:2020:ISC,
address = {Cham},
author = {Perciano, Talita and Heinemann, Colleen and Camp, David and Lessley, Brenton and Bethel, E Wes},
booktitle = {High Performance Computing},
editor = {Sadayappan, Ponnuswamy and Chamberlain, Bradford L and Juckeland, Guido and Ltaief, Hatem},
isbn = {978-3-030-50743-5},
pages = {127--145},
publisher = {Springer International Publishing},
title = {{Shared-Memory Parallel Probabilistic Graphical Modeling Optimization: Comparison of Threads, OpenMP, and Data-Parallel Primitives}},
month = jun,
year = {2020},
eprint = {https://doi.org/10.1007/978-3-030-50743-5_7},
doi = {10.1007/978-3-030-50743-5_7},
escholarshipurl = {https://escholarship.org/uc/item/3z48p6kq}
}