Here at Ars, when we write about assembling computing clusters in the cloud, it tends to be on a grand scale. Think endeavors in high-performance computing (HPC) on Amazon’s Elastic Compute Cloud (EC2), like in 2013 when a chemistry professor and software company Cycle Computing assembled 156,314 cores for an 18-hour run that reached a theoretical speed of 1.21 petaflops. Or when simulation software firm Schrödinger rented 50,000 cores on EC2 in 2012 for $4,829 per hour.
But sometimes just several dozen cores from a computing fairy-godmother will do. That was the case for an undergraduate Hyperloop team from the University of California, Irvine (UCI). The team assembled in 2015 to compete in a series of contests sponsored by SpaceX after the company's CEO, Elon Musk, drew up a whitepaper envisioning a super-fast form of transportation that ran on magnetic skis in a low-pressure tube. After UCI's team, dubbed HyperXite, won a technical excellence award in an early 2016 design competition, the team had to actually build the thing in time for the January 2017 pod races at SpaceX's headquarters in LA. This meant a lot of computer modeling would have to be done.
Around that time, Nima Mohseni, a mechanical and aerospace engineering undergraduate at UCI, joined HyperXite as the team's simulation lead. He told Ars in a phone call that HyperXite was originally looking at doing its modeling on a 24-core system owned by the university—a system that limited students to 72 hours of time per project. But HyperXite was sponsored by Microsoft (among others), and the software giant intervened to introduce the undergraduates to the people at Cycle Computing.