What is the CRUG Cluster?
One of the goals of Carleton’s Computational Research Users Group is to create a shared, powerful, and expandable computation cluster that is usable by as many of our users as possible and funded by grants and faculty startup funds. Our users include faculty and students from all departments, and their needs are diverse.
...
The system is designed around the SLURM workload manager. We expect that the majority of jobs will use multiple cores through some type of parallel processing; roughly 95% of our needs appear to be embarrassingly parallel. Users ssh into command.dmz.carleton.edu and submit SLURM jobs through the Linux command line. An example of using SLURM can be found at https://wiki.carleton.edu/pages/viewpage.action?pageId=57837534, and some useful SLURM commands are listed at https://wiki.carleton.edu/display/carl/Useful+slurm+commands. If you are new to this, please contact our technical staff for help.
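For orientation, here is a minimal sketch of a batch script and submission, not the official site template; the job name, resource requests, and the my_analysis program are placeholders, so consult the wiki pages above (or our technical staff) for the cluster's actual partitions and limits.

    #!/bin/bash
    #SBATCH --job-name=example          # name shown by squeue
    #SBATCH --ntasks=1                  # a single task
    #SBATCH --cpus-per-task=4           # cores for a multithreaded step
    #SBATCH --mem=8G                    # memory for the whole job
    #SBATCH --time=01:00:00             # wall-clock limit (hh:mm:ss)
    #SBATCH --output=example_%j.out     # %j expands to the SLURM job ID
    ./my_analysis --threads 4           # placeholder for your own program

Save this as example.sbatch on command.dmz.carleton.edu, then:

    sbatch example.sbatch    # submit the job to the queue
    squeue -u $USER          # check its status
    scancel <jobid>          # cancel it if needed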
...
More information about X forwarding can be found at X Session Forwarding for Windows and X Session Forwarding for OSX and Linux.
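As a rough sketch (assuming an X server such as XQuartz or Xming is already running on your local machine, and using a placeholder username), forwarding is usually just a matter of adding -Y (or -X) to ssh:

    ssh -Y username@command.dmz.carleton.edu
    xeyes &    # quick test, if xeyes is installed: a small X window should appear on your local screen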
Who gets access to the system?
...
Note as of June 27, 2019: the command and compute nodes are still drastically smaller than their eventual configuration, and only one of the HHMI nodes has been converted to a SLURM compute node. There is also a very large VM running, summer18.dmz.carleton.edu; this node was initially set up as a place for people to work while SLURM was configured. The current plan is to dramatically reduce the size (RAM and core count) of summer18 on July 29, 2019 and to transition users to command.dmz.carleton.edu. summer18 will be removed by the end of 2019.
How do I reserve a system that I funded and have priority access on?
...
With help from ITS, you would purchase 5 x 10 TB drives for dtn.carleton.edu; these drives would then be exported to all the cluster nodes for your use. The data transfer node (dtn) is set up so that drives have to be installed in groups of 5; as a result, the minimum purchase is 50 TB. If you don't need that much space, please share the remainder with the rest of the cluster. As of July 27, 2019, that 50 TB costs $1700. Note that the system has some redundancy built into it, so only 40 TB will be visible to the user.
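Once the drives are installed and exported, you can confirm what is visible from any cluster node with df; the mount point below is purely hypothetical, and ITS will tell you the actual path:

    df -h /mnt/research-storage    # hypothetical mount point; should show roughly 40 TB usable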
As stated earlier, we are not currently backing up the system, but a secondary dtn (dtn2.dmz.carleton.edu) is available as a potential backup solution. Just like dtn.dmz.carleton.edu, drives need to be purchased in blocks of 5; for $1700, we could add drives to dtn2 and back up data from the cluster to it. Ideally, you would keep your data on the cluster and backups on a separate server.
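As an illustration only (assuming ssh access to dtn2, and using purely hypothetical paths and username), a simple backup could be a periodic rsync from the cluster storage to dtn2:

    rsync -av --delete /mnt/research-storage/myproject/ username@dtn2.dmz.carleton.edu:/backup/myproject/
    # -a preserves permissions and timestamps, -v is verbose, --delete mirrors removals to the backup copy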
GPUs?
As of July 27, 2019, none of the compute nodes are equipped with GPUs. Please help us find funding.
...