In order to successfully complete this assignment, you need to participate both individually and in groups during class. If you attend class in person, have one of the instructors check your notebook and sign you out before leaving class on Monday, March 15. If you are attending asynchronously, turn in your assignment using D2L no later than _11:59pm on Monday, March 15_.
0314--CUDA_Alternatives_pre-class-assignment
As a class, we will discuss the various alternatives to CUDA and their pros and cons.
%%writefile cuda_submit.sb
#!/bin/bash
# Request 1 hour of walltime, 1 core on 1 node, 1 GPU, and 4gb of memory
#SBATCH --time=01:00:00
#SBATCH -c 1
#SBATCH -N 1
#SBATCH --gres=gpu:1
#SBATCH --mem=4gb
time srun ./mycudaprogram

# Print out job statistics
js ${SLURM_JOB_ID}
Overwriting cuda_submit.sb
!sbatch cuda_submit.sb
sbatch: Command not found.
Our next big topic in class will be "Shared Network Parallelization" using MPI (Message Passing Interface). MPI and its libraries are loaded by default on the HPCC.
✅ DO THIS: Get either the Pandemic or Galaxsee example working using MPI on the HPCC. Here are the basic steps:
%%writefile mpi_submit.sb
#!/bin/bash
# Request 1 hour of walltime, 10 nodes with 1 core each, and 40gb of memory
#SBATCH --time=01:00:00
#SBATCH -c 1
#SBATCH -N 10
#SBATCH --mem=40gb
time srun ./mympiprogram

# Print out job statistics
js ${SLURM_JOB_ID}
Writing mpi_submit.sb
✅ QUESTION: What is different about the above submission script as compared to a shared memory job (OpenMP) or a GPU job (CUDA)?
Put your answer to the above question here.
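As a point of comparison while you think about this, a shared-memory (OpenMP) submission script usually requests many cores on a *single* node. The following is only a sketch; the program name `myopenmpprogram` and the specific resource numbers are assumptions, not part of this assignment:

```bash
#!/bin/bash
#SBATCH --time=01:00:00
# Shared memory: one node, many cores on that node
#SBATCH -N 1
#SBATCH -c 16
#SBATCH --mem=40gb

# OpenMP reads its thread count from this environment variable
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
time ./myopenmpprogram
```

Notice which resource lines change (and which disappear) when you move between the OpenMP, CUDA, and MPI versions of the script.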
✅ DO THIS: What would a scaling study look like for this type of job? Can you think of a way to automatically vary the number of nodes (N) from 1, 2, 4, 8, etc.?
Put your answer to the above question here.
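One possible approach (a sketch, not the required answer) is a small driver loop that submits the same job once per node count, using the fact that command-line options to `sbatch` override the `#SBATCH` directives inside the script. The filename `mpi_submit.sb` is an assumption; substitute whatever you named your submission script. As written, the loop only prints the commands so you can check them first:

```bash
#!/bin/bash
# Sketch of a scaling-study driver: one submission per node count.
# This only prints the sbatch commands; remove the echo to actually submit.
for nodes in 1 2 4 8 16; do
    echo "sbatch -N ${nodes} mpi_submit.sb"
done
```

Comparing the `time` output across the resulting jobs gives you the timing data for the scaling study.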
If you attend class in person, have one of the instructors check your notebook and sign you out before leaving class. If you are attending asynchronously, turn in your assignment using D2L.
Written by Dr. Dirk Colbry, Michigan State University
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.