Guest blog entry: text by Mark Vousden of the Institute for Complex System Simulation DTC.
It's rare to set up a free WordPress site and complete Internet of Things and machine learning stacks in just two days, but that's exactly what students achieved this year on the Microsoft Azure cloud computing course, delivered by Kenji Takeda as part of the second NGCM Summer Academy.
Cloud computing is a broad term that encompasses any computation done on a remote server hosted on the Internet. Search engine operation, remote continuous integration, IPython notebook servers, and high-performance computation on remote virtual machines are all examples of cloud computing at different scales. Having attended the course, it is clear to me that cloud computing has changed the world of high-performance computing, and that it is a worthy contender to traditional high-throughput compute solutions.
Students were provided with a $500 usage pass for the Microsoft Azure cloud computing service to create servers for their research, and were shown how to spin up their own SLURM high-throughput compute clusters (like Iridis or ARCHER) for running their simulations. The clusters were truly their own: root access meant they could install any software they needed, which is not possible on their current high-performance compute resources, and there was no queue to wait in.
Servers in the cloud can be spun up in minutes from template virtual machines with a few lines of code, which greatly improves the reproducibility and accessibility of simulations. However, this accessibility comes at a price: the pay-per-use model adopted by Microsoft and other cloud computing companies means that these servers must be managed carefully to make the most of the money spent.
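To give a flavour of what "a few lines of code" means here, the commands below sketch how a single Linux virtual machine might be created and later shut down using the modern Azure CLI. This is an illustrative sketch, not the exact workflow used on the course: the resource group, machine name, image alias, and size are all hypothetical placeholders, and the course itself may have used Azure's web portal or an earlier tooling generation.

```shell
# Create a resource group to hold the machine (names and region are illustrative).
az group create --name my-research-rg --location westeurope

# Spin up a Linux virtual machine from a template image.
az vm create \
  --resource-group my-research-rg \
  --name sim-node-01 \
  --image Ubuntu2204 \
  --size Standard_D2s_v3 \
  --admin-username azureuser \
  --generate-ssh-keys

# Deallocate the machine when idle so it stops accruing compute charges;
# under pay-per-use billing, forgetting this step is what wastes money.
az vm deallocate --resource-group my-research-rg --name sim-node-01
```

Deallocating (rather than merely rebooting or stopping inside the guest OS) releases the underlying compute resources, which is why it is the key habit for keeping a pay-per-use bill under control.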