Computing Cluster

Nicholas Nothom

Introduction

In late 2012, I was working on analyzing the harmonic properties of various contoured cylinders. I wanted to use computational fluid dynamics software to solve these problems, since it would produce a far more accurate model than anything I could do by hand, but simulations of that kind demand more computing power than a single desktop offers. After moving off campus, I finally had enough space to build the machine and keep it safe behind an encrypted VPN and firewall.

Concept and Uses

By this time, the interest of the project had shifted entirely. When I started, I wanted to run CFD programs on a supercomputer, but I had since put that project on hold (it remains on hold now). I had become more interested in 3D modeling and rendering. I love the idea of being able to prototype something on my computer and render it out, creating a convincing, photorealistic representation of what could one day be real. So I started on the computer again. I purchased all of the peripherals needed to make it work: an additional network interface card, a secondary gigabit switch, power supplies, and a terabyte hard drive. Each of the computers would receive a job from a main machine, carry it out, and send the results back. I chose many weak computers over one powerful machine in the interest of cost (a decision that would prove to be a major Achilles' heel for rendering later on).
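
VRay handles all of this dispatching internally, but the pattern itself is simple. The sketch below only illustrates it in Python; the port number, node addresses, and job format are hypothetical stand-ins, not anything VRay actually uses.

    # Minimal master/worker sketch: each worker waits for a job,
    # does the work, and sends the result back to the master.
    # Port, addresses, and job format are illustrative only.
    import json
    import socket
    import socketserver

    PORT = 5005  # hypothetical port

    class JobHandler(socketserver.StreamRequestHandler):
        """Runs on each worker node."""
        def handle(self):
            job = json.loads(self.rfile.readline())
            # Stand-in for real work (e.g. rendering one image tile).
            result = {"tile": job["tile"], "status": "done"}
            self.wfile.write((json.dumps(result) + "\n").encode())

    def run_worker():
        with socketserver.TCPServer(("0.0.0.0", PORT), JobHandler) as server:
            server.serve_forever()

    def dispatch(workers, jobs):
        """Runs on the master: hand one job to each worker, collect results."""
        results = []
        for address, job in zip(workers, jobs):
            with socket.create_connection((address, PORT)) as conn:
                conn.sendall((json.dumps(job) + "\n").encode())
                results.append(json.loads(conn.makefile().readline()))
        return results

    # On each render node: run_worker()
    # On the master:
    # dispatch(["192.168.1.11", "192.168.1.12"], [{"tile": 1}, {"tile": 2}])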

The Build

The machines all connected to the network switch without trouble and, after I disabled every firewall feature, could ping each other by name. Getting to that point was painstaking; if I did the project again, I would avoid Windows. Once all the computers were talking, installing VRay and Autodesk 3ds Max on each unit was simple. The machines were all running 64-bit Windows 7. After getting VRay on each node to connect to my main host computer, I added them in the VRay distributed rendering settings panel. I set their IP addresses to static so the nodes could be identified consistently, rather than having to be looked up after every restart. This also kept them off the Internet, a welcome side effect since the computers had no security software at all.

The cluster rendered images about 7 to 10 times faster than my single machine alone. I was happy with the result, but couldn't ignore the largest downside of running a cluster. My iMac, the machine I actually worked on, sat on one side of my bedroom, and for reasons I can only describe as obsessive organizational habits, it was going to stay there. The cluster was on the other side of the room, neatly hidden in my closet. The two communicated by way of my Linksys WRT-54G router, which was quite outdated even then. It was so slow that roughly 50 percent of the total rendering time was spent sending irradiance cache data from the iMac to the cluster, meaning the render nodes couldn't even begin their share of the work until the transfer finished. I solved the problem by running Ethernet cable from the iMac to a new gigabit-capable router, which cut the transfer time by about 80 to 90 percent, roughly what you would expect given that gigabit offers ten times the bandwidth of the WRT-54G's 100 Mbps wired ports.
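
The improvement is about what the raw bandwidth numbers predict. In the back-of-the-envelope check below, the link speeds are the real ratings of the two routers, but the cache size is a made-up figure for illustration.

    # Rough transfer-time check on the router upgrade.
    cache_mb = 500             # hypothetical irradiance cache size
    old_mbs = 100 / 8          # WRT-54G wired port: 100 Mbps, about 12.5 MB/s
    new_mbs = 1000 / 8         # gigabit: about 125 MB/s
    print(cache_mb / old_mbs)  # ~40 s per transfer over the old router
    print(cache_mb / new_mbs)  # ~4 s over gigabit: a 90 percent cut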


Conclusion

The largest downside to using the cluster was its power consumption. I could only run it when my roommates were out, to avoid tripping the circuit breaker; many weak, inefficient computers do not add up to good performance per watt. However, the goal of this project was not to create the fastest cluster possible. It was to build a cluster from free or low-cost components and actually complete jobs with it, and that it did accomplish. If the machine were fully optimized in both hardware and software, the cluster could become an excellent example of using old technology to create a powerful tool.
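
The breaker problem is easy to see on paper. The per-node draw and node count below are assumptions, not measurements, but the circuit rating is the standard one for a US household branch circuit.

    # Rough load estimate for the circuit the cluster shared.
    circuit_w = 15 * 120   # standard 15 A, 120 V breaker: 1800 W
    node_w = 250           # assumed draw per old desktop under full render load
    nodes = 6              # assumed node count
    print(nodes * node_w)  # 1500 W: most of the circuit before anything else runs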

Key Takeaways