A. Radenski. Shared Memory, Message Passing, and Hybrid Merge Sorts for Standalone and Clustered SMPs. Proc. PDPTA'11, the 2011 International Conference on Parallel and Distributed Processing Techniques and Applications, CSREA Press (H. Arabnia, Ed.), 2011, pp. 367-373.
While merge sort is well-understood in parallel algorithms theory, relatively little is known about how to implement parallel merge sort with mainstream parallel programming platforms, such as OpenMP and MPI, and run it on mainstream SMP-based systems, such as multi-core computers and multi-core clusters. This is unfortunate because merge sort is not only a fast and stable sort algorithm, but also an easy-to-understand and popular representative of the rich class of divide-and-conquer methods; hence a better understanding of merge sort parallelization can contribute to a better understanding of divide-and-conquer parallelization in general. In this paper, we investigate three parallel merge sorts: shared memory merge sort that runs on SMP systems with OpenMP; message-passing merge sort that runs on computer clusters with MPI; and combined hybrid merge sort, with both OpenMP and MPI, that runs on clustered SMPs. We have experimented with our parallel merge sorts on a dedicated Rocks SMP cluster and on a virtual SMP cluster in the Amazon Elastic Compute Cloud. In our experiments, shared memory merge sort with OpenMP achieved the best speedup. We believe that we are the first to concurrently experiment with, and compare, shared memory, message passing, and hybrid merge sort. Our results can help in the parallelization of specific practical merge sort routines and, even more importantly, in the practical parallelization of other divide-and-conquer algorithms for mainstream SMP-based systems.
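To make the shared memory variant concrete, the following is a minimal sketch of one common way to parallelize merge sort with OpenMP tasks; it is an illustration under stated assumptions, not the authors' published code. The sequential cutoff of 10,000 elements, the helper merge, and the use of qsort for small subarrays are all illustrative choices.

/*
 * Sketch: shared-memory parallel merge sort with OpenMP tasks.
 * Assumptions (not from the paper): CUTOFF value, qsort base case.
 * Compile with, e.g.: gcc -fopenmp -O2 mergesort.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <omp.h>

#define CUTOFF 10000  /* below this size, sort sequentially (assumed value) */

/* Merge sorted halves a[lo..mid) and a[mid..hi) via scratch buffer tmp. */
static void merge(int *a, int *tmp, int lo, int mid, int hi) {
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++]; /* take left half on ties */
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

static int cmp_int(const void *x, const void *y) {
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* Sort a[lo..hi); the two recursive halves run as independent OpenMP tasks. */
static void merge_sort(int *a, int *tmp, int lo, int hi) {
    if (hi - lo <= CUTOFF) {  /* small subarray: sort sequentially */
        qsort(a + lo, (size_t)(hi - lo), sizeof(int), cmp_int);
        return;
    }
    int mid = lo + (hi - lo) / 2;
    #pragma omp task shared(a, tmp)
    merge_sort(a, tmp, lo, mid);
    #pragma omp task shared(a, tmp)
    merge_sort(a, tmp, mid, hi);
    #pragma omp taskwait  /* both halves must finish before merging */
    merge(a, tmp, lo, mid, hi);
}

int main(void) {
    int n = 1 << 20;
    int *a = malloc((size_t)n * sizeof(int));
    int *tmp = malloc((size_t)n * sizeof(int));
    for (int i = 0; i < n; i++) a[i] = rand();

    #pragma omp parallel   /* start the thread team */
    #pragma omp single     /* one thread spawns the root task */
    merge_sort(a, tmp, 0, n);

    for (int i = 1; i < n; i++)
        if (a[i - 1] > a[i]) { puts("not sorted"); return 1; }
    puts("sorted");
    free(a); free(tmp);
    return 0;
}

The message-passing and hybrid variants described in the abstract would, in broad strokes, scatter subarrays across MPI ranks, sort each locally (possibly with OpenMP inside each rank in the hybrid case), and merge the sorted pieces back on a root rank; the details are in the paper itself.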
Copyright CSREA Press, 2011. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version was published in Proc. PDPTA'11, CSREA Press (H. Arabnia, Ed.), 2011, pp. 367-373. ISBN: 1-60132-193-7.
Last updated: May 2015.