Sysnet Supported Cluster Requirements¶
Due to the growing number of clusters here at the Institute, it has become necessary to lay down some ground rules for clusters that will be supported directly by the Systems and Network group (Sysnet).
Sysnet will fully support small to medium-sized clusters (up to 256 cores) that conform to the following standards.
Hardware¶
All Sysnet supported clusters will be of the x86 / x86_64 architecture. This includes both AMD and Intel processors.
No proprietary hardware will be supported.
Vendors¶
While we will entertain the idea of going with other vendors, we strongly encourage that all cluster purchases be Dell hardware. Dell offers excellent HPC systems built on standard hardware and gives substantial discounts to UT.
Interconnect¶
High-speed interconnects must be an InfiniBand- or Omni-Path-based product. We currently support both Cisco and QLogic products, and while we will consider other InfiniBand offerings, we have had very solid support and performance from these two vendors. The product line with the best price/performance point is currently the SilverStorm line from QLogic.
OS¶
All Sysnet supported clusters must run the OpenHPC software stack (http://openhpc.community/).
Queue System¶
All Sysnet supported clusters must run Slurm for their queuing system.
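As an illustration, a minimal Slurm batch script can be written in Python, since Slurm reads the #SBATCH comment lines regardless of the interpreter. The job name and resource requests below are placeholder values, not site defaults:

    #!/usr/bin/env python3
    #SBATCH --job-name=hello      # hypothetical job name
    #SBATCH --nodes=1             # request one node
    #SBATCH --ntasks=1            # run a single task
    #SBATCH --time=00:05:00       # five-minute wall-clock limit

    # Slurm parses the #SBATCH lines above, then executes this script
    # on the allocated node. Submit it with: sbatch hello.py
    import os

    print("Hello from Slurm job", os.environ.get("SLURM_JOB_ID"))
    print("Running on", os.uname().nodename)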
Storage¶
Fibre Channel, SAS, and iSCSI storage options are available on Sysnet supported clusters. If a single storage server is purchased, all storage will be exported over NFS. Lustre is also supported for clusters that need a cluster file system.
Backups¶
Due to the growing number of clusters, Sysnet is no longer able to perform full backups of all clusters. Sysnet will recommend the additional hardware needed to back up small to medium-sized compute clusters, and will determine the required storage size along with data retention policies for the directories that need to be backed up. Directories that are important to back up include users' home directories and applications.
For convenience, Sysnet will back up important OS-specific directories to ease restoration of the cluster if it needs to be re-installed.
Sysnet will install, configure, monitor, and set data retention policies using the same backup software used to back up the Institute's main servers. Backup storage of at least 1.5 times the available storage on the cluster must be purchased.
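As a worked example of that sizing rule (the 20 TB figure is illustrative only, not a recommendation):

    def required_backup_tb(cluster_storage_tb, factor=1.5):
        # Backup capacity must be at least 1.5x the cluster's available storage.
        return factor * cluster_storage_tb

    # A cluster with 20 TB of available storage needs at least 30 TB of backup storage.
    print(required_backup_tb(20.0))  # -> 30.0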
MPI¶
MPI stacks provided by OpenHPC will be installed on request. By default, we will install at least OpenMPI with GNU and Intel compiler support. Other stacks can be requested if they are available from the OpenHPC repositories.
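For illustration, assuming the mpi4py Python bindings are installed against one of the OpenMPI stacks (an assumption; request them like any other package), a minimal MPI program looks like this:

    # hello_mpi.py -- run with, for example: mpirun -np 4 python3 hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD   # communicator spanning all ranks in the job
    rank = comm.Get_rank()  # this process's rank, 0..size-1
    size = comm.Get_size()  # total number of ranks

    # Trivial collective: sum the rank numbers across all processes.
    total = comm.allreduce(rank, op=MPI.SUM)

    if rank == 0:
        print("%d ranks, sum of ranks = %d" % (size, total))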
User Accounts¶
User accounts are local to each cluster. We no longer use the LDAP authentication service for clusters. This has a number of distinct advantages over centrally managed authentication; in particular, logins keep working even if the central authentication service or campus network is unavailable.
Home and /org access¶
Access to central home directories and /org from clusters is no longer supported, no exceptions.
Remote Access¶
All clusters will be placed on a static NAT subnet. Remote access from off campus will require the use of UT’s VPN service.