This package contains the artifact demonstrating the effectiveness and benefits of our on-the-fly workload partitioning technique for irregular data-parallel workloads on integrated CPU/GPU architectures, presented at PACT 2018. The technique requires no offline analysis or training on the application or its input data. For an OpenCL kernel, a source-to-source compiler generates profiling code that allows the runtime system to collect information about the computational load of the individual work-items. Based on this profile, the workload is reshaped in such a way that work-items with loop iteration counts above a dynamically determined threshold are executed on the CPU cores, while the GPU executes only work-items with a similar (low) computational load.
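To illustrate the partitioning step, the following is a minimal sketch (not the artifact's actual code): given the per-work-item loop iteration counts collected by the generated profiling code, work-items whose count exceeds a threshold are assigned to the CPU, and the remaining work-items stay on the GPU. The threshold policy shown here (a multiple of the median count) is purely illustrative; the artifact determines the threshold dynamically at runtime.

```python
def partition(iteration_counts, threshold):
    """Split work-item indices into a CPU set (heavy, above the
    threshold) and a GPU set (light, similar computational load)."""
    cpu_items = [i for i, n in enumerate(iteration_counts) if n > threshold]
    gpu_items = [i for i, n in enumerate(iteration_counts) if n <= threshold]
    return cpu_items, gpu_items

# Example: a skewed iteration-count profile for eight work-items.
# The threshold is derived from the profile itself -- here, four
# times the median count (an illustrative policy, not the paper's).
counts = [2, 3, 100, 2, 4, 250, 3, 2]
median = sorted(counts)[len(counts) // 2]
cpu, gpu = partition(counts, threshold=4 * median)
# The two outlier work-items (indices 2 and 5) go to the CPU cores;
# the six uniformly light work-items remain on the GPU.
```

In the real system this decision is made on the fly from the profiling data, without any offline training, and the reshaped index space is then dispatched to the two devices.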
This artifact consists of three components:
- compiler (prep): the preprocessor that performs the code generation for the on-the-fly workload partitioning technique.
- runtime (lib): the dynamically loadable library that performs the on-the-fly workload partitioning by leveraging the generated profiling code.
- benchmark (bench): four benchmark applications and scripts that help run the experiments.
Younghyun Cho, Florian Negele, Seohong Park, Bernhard Egger, and Thomas R. Gross. "On-The-Fly Workload Partitioning for Integrated CPU/GPU Architectures." To appear in Proceedings of the 2018 International Conference on Parallel Architectures and Compilation (PACT'18), Limassol, Cyprus, November 2018.
Downloading the Artifact
Download the artifact here.
Target Hardware Platforms
The on-the-fly workload partitioning targets APU architectures that support a unified memory system, where the CPU and the GPU share the same off-chip memory.
The framework assumes that a fully functional OpenCL runtime system (version 1.2 or above) is already installed. To generate the CSV data files and plots, Python and gnuplot are required.
Access to Target Platforms
If access to APU machines is difficult, we may be able to provide access to the target platforms used in the evaluation in our paper. Email firstname.lastname@example.org to inquire.
Installation, Reproducing the Results, and Customization
Follow the instructions in the README.md files in the artifact package.
Last update: July 2018