Fall HIP Hackathon

Monday, September 13, 2021, 9:30 AM - Friday, September 17, 2021, 4:00 PM (UTC-07:00)
Created by: OS Hackathon
Fiscal Host: WATERCHaNGE

About

Who?

Teams or individuals with existing CUDA or OpenACC GPU-accelerated code who would like to transition out of vendor lock-in and into a portable GPU acceleration framework.

Before the hackathon begins, we open applications to build profiles of prospective attendees based on the readiness of their code and team and the scope of their near-term goals. Teams and individuals whose goals align with the scope of the hackathon (porting from CUDA/OpenACC to HIP/OpenMP/OpenCL) and that meet the minimum requirements (e.g. the code compiles, runs, and produces reproducible results) are accepted and paired with mentors.

High-performance computing teams that aim to use heterogeneous architectures and need community-supported software for their hardware are encouraged to apply. Anyone with the knowledge and willingness to help teams achieve their goals at this hackathon is encouraged to apply to be a mentor.

What?

A virtual hackathon aimed at helping developers move their GPU-accelerated applications out of vendor lock-in.

We are building community experience in programming models that promote portable GPU acceleration of scientific applications, such as the following (a short offload sketch appears after the list):
  • OpenMP 5.0 in C/C++ and Fortran
  • HIP in C/C++ and Fortran (hipfort)
  • OpenCL/Focal in C/C++ and Fortran
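
To give a flavor of the first of these, the sketch below shows a simple loop offloaded to a GPU with OpenMP 5.0 target directives. It is a minimal, illustrative example rather than code from any participating team; on the ROCm stack it would typically be built with the AOMP compiler using the OpenMP offload flags for the target GPU architecture (e.g. gfx906 for the MI50).

```cpp
// Minimal, illustrative sketch: SAXPY offloaded to a GPU with OpenMP target directives.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 3.0f;
    float *xp = x.data(), *yp = y.data();

    // Map the arrays to the device and distribute the loop across GPU threads.
    #pragma omp target teams distribute parallel for map(to: xp[0:n]) map(tofrom: yp[0:n])
    for (int i = 0; i < n; ++i)
        yp[i] = a * xp[i] + yp[i];

    std::printf("y[0] = %f\n", yp[0]);  // expect 5.0
    return 0;
}
```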

The 2021 HIP Virtual Hackathon is open to teams aiming to migrate from CUDA/OpenACC to HIP/OpenMP/OpenCL in order to make their software portable. Attendees will use the Heterogeneous-compute Interface for Portability (HIP), OpenMP 5.0, or OpenCL to port their applications.
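
For teams starting from CUDA, the port is often mechanical because the HIP runtime API mirrors the CUDA runtime API call-for-call, and ROCm ships hipify tools that automate much of the translation. The minimal sketch below is illustrative only: a CUDA-style SAXPY rewritten against the HIP runtime, compiled with hipcc.

```cpp
// Minimal, illustrative sketch: a CUDA-style SAXPY ported to HIP.
// CUDA runtime calls map directly onto HIP (cudaMalloc -> hipMalloc,
// cudaMemcpy -> hipMemcpy); the kernel body and launch syntax are unchanged.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    hipMalloc(&dx, n * sizeof(float));                                   // was cudaMalloc
    hipMalloc(&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);  // was cudaMemcpy
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);                    // launch syntax unchanged

    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("y[0] = %f\n", hy[0]);  // expect 5.0

    hipFree(dx);
    hipFree(dy);
    return 0;
}
```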

Compute Resources

Hackathon attendees will be given access to compute resources on Google Cloud Platform and the AMD Accelerator Cloud.

AMD Accelerator Cloud
  • Compute Nodes
    • (20 nodes) 2 x AMD EPYC Rome 7742 64-Core Processor + 8 x AMD Radeon Instinct MI50 (32GB) GPU
  • Software
    • Operating System : Ubuntu 18.04 LTS
    • ROCm™ (4.0.0)
      • HIP/HIPFort
      • AOMP Compiler (OpenMP 5.0 GPU Offloading)
      • OpenCL

Autoscaling OS HPC Cluster (Google Cloud Platform)
  • Compute Partitions
    • (8 nodes) a2-highgpu-1g ( 12 vCPU + 85 GB RAM ) + 1 Nvidia® A100 (Ampere) GPU
    • (20 nodes) n1-8-solo-v100 - standard-8 ( 8 vCPU + 30 GB RAM; Intel® Broadwell/Haswell ) + 1 Nvidia® Tesla® V100 GPU
    • (1 node) AMD Ryzen 5 ( 12 vCPU + 32 GB RAM ) + 1 AMD Radeon MI25 Frontier Edition GPU
    • (5 nodes) n2d-standard-224 ( 224 vCPU + 896 GB RAM; AMD® EPYC Rome )
    • (10 nodes) c2-standard-60 ( 60 vCPU + 240 GB RAM; Intel® Cascade Lake )
    • (30 nodes) n1-standard-4 ( 4 vCPU + 15 GB RAM; Intel® Broadwell/Haswell )
    • Additional compute partitions available on request.
  • Software
    • Operating System : CentOS 7
    • Slurm 20.02 (Scheduler & Workload manager)
    • GCC 10.2.0 + OpenMPI 4.0.5
    • ROCm™ (4.0.0)
      • HIP/HIPFort
      • AOMP Compiler (OpenMP 5.0 GPU Offloading)
      • OpenCL
    • Focal (OpenCL for Fortran)
    • Additional software available on request.

Our team