Department Core Values
- Excellence - We never settle for "good enough."
- Passion - We are driven by our personal interest in cutting-edge technology.
- Integrity - We enhance our reputation through trustworthy people and dependable solutions.
- Camaraderie - We develop rapport through fellowship and mutual support.
The Department of Computing Services consists of six teams covering a broad spectrum of technology disciplines. While each team provides distinct services, the department is designed to facilitate cross-team collaboration, solving business problems more effectively and efficiently through multidisciplinary work with cutting-edge technology.
Work in the computing services department is full of diverse technical challenges, such as developing sensor data collection systems, supporting high performance computing clusters, creating field-based spatial tools and building tools to assist with plant DNA analysis. Exciting opportunities abound for those with technical talent and the ability to work in a highly collaborative team environment.
Bioinformatics and Research Support
Consisting of both scientists and programmers, this team covers a broad range of disciplines and skill sets, providing computational support services to researchers in agriculture, forage improvement and basic plant science. Group members are experienced in bioinformatics, experimental design, computer programming, data analysis, scientific writing and high performance computing.
Business and Scientific Solutions
The business and scientific solutions team works closely with all of the Noble Foundation's divisions to architect, procure, implement and maintain technology solutions. Their primary focus is to implement innovative software solutions to enable the advancement of business and scientific activities at the Noble Foundation.
Desktop and Endpoint Support Services
The desktop and endpoint support services team works closely with all Noble Foundation staff and provides exceptional customer support. Their duties include the installation, security and maintenance of all desktops, laptops and tablets. The team offers new and seasoned support professionals alike a great way to learn new skills and grow in the world of IT, and it prides itself on excellent service, an innovative environment and investment in its team members.
Enterprise and Research Computing Infrastructure
The enterprise and research computing infrastructure team manages a modern, highly virtualized data center and is responsible for all server, storage, telecommunications and network systems, as well as the high performance computing clusters. They also provide desktop and mobile device support for all Noble Foundation staff.
Library and Information Services
The library and information services team provides Noble Foundation researchers with a host of information services. In addition to traditional print and digital library services, this team has strategically positioned itself to provide research data management, data curation, digital asset management and archival services.
Spatial Technology Services
The spatial technology services team provides the Noble Foundation with geographic information systems (GIS) support to enable efficient and effective research based on collaborator needs. In the ever-changing world of GIS, this team focuses on providing the Noble Foundation with unique tools and data to help visualize landscapes and support research outcomes.
High Performance Computing
The department maintains three high performance computing (HPC) clusters. The newest, added in early 2016, is "Taurus," which achieves an aggregate peak performance of 28.25 TFLOPS with 792 cores and 13 TB of RAM. Taurus consists of 31 nodes from Advanced Clustering Technologies (ACT), with per-node memory ranging from 256 GB to 2 TB. It also includes approximately 300 TB of globally accessible, high performance DDN storage provided by a 4U chassis with 60 drive bays. The cluster uses three interconnect networks: InfiniBand (IB) for message passing, Ethernet for I/O and a separate Ethernet management network. The IB fabric uses Mellanox ConnectX-3 dual-port 56 Gb/s FDR adapters, with point-to-point latency of approximately 1 microsecond. The Ethernet network includes 40 Gb/s Mellanox switches that connect to all compute nodes. ACT's online job scheduler and submission tools, SLURM/eQUEUE, not only allow efficient use of the cluster but also support interactive GUI applications and remote visualization, and they provide detailed analytics and reporting to better manage the cluster and its users. This high-core-count, high-memory supercomputer will help researchers accelerate their code, facilitate bioinformatics and support big data analysis research.
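To illustrate, work on a SLURM-managed cluster such as Taurus is typically submitted as a batch script. The sketch below is a generic SLURM example; the job name, resource requests and program name are hypothetical placeholders, not actual Taurus configuration.

```shell
#!/bin/bash
# Minimal SLURM batch script sketch. All values here are illustrative
# placeholders, not actual Taurus settings.
#SBATCH --job-name=demo-job
#SBATCH --nodes=1
#SBATCH --ntasks=16          # MPI ranks; message passing runs over the IB fabric
#SBATCH --mem=64G            # nodes range from 256 GB to 2 TB, so request what the job needs
#SBATCH --time=02:00:00      # wall-clock limit of two hours

# Launch the program across the requested cores.
srun ./my_mpi_program
```

Such a script would be submitted with `sbatch job.sh` and monitored with `squeue`, both standard SLURM commands.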
The second HPC cluster is an IBM 1350 BladeCenter with 27 compute nodes, providing 272 CPU cores, 648 GB of distributed memory and 16 TB of scratch storage. It is used primarily for distributed computing tasks such as reference sequence alignment and BLAST searches. The third cluster, ScaleMP vSMP, consists of 10 HP ProLiant DL385 G7 servers providing a total of 120 CPU cores and 2 TB of aggregate memory, offering an environment for large-memory computing applications such as de novo assembly and gene regulatory network prediction.
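The division of labor between these two clusters follows from their memory-per-core ratios, which can be computed directly from the figures above (taking 2 TB as 2048 GB):

```python
# Average memory per core on the two older clusters, using the figures
# quoted in the text.
ibm_mem_gb, ibm_cores = 648, 272           # IBM 1350 BladeCenter
scalemp_mem_gb, scalemp_cores = 2048, 120  # ScaleMP vSMP, 2 TB aggregate

print(f"IBM 1350: {ibm_mem_gb / ibm_cores:.1f} GB/core")          # ~2.4 GB/core
print(f"ScaleMP:  {scalemp_mem_gb / scalemp_cores:.1f} GB/core")  # ~17.1 GB/core
```

The roughly sevenfold difference in memory per core is why distributed tasks like BLAST searches run on the IBM cluster while memory-hungry de novo assembly runs on the ScaleMP system.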
Virtual Server Environment
The department manages two VMware virtual server farms, each designated for specific activities. The first farm comprises six Cisco UCS B-Series ESXi hosts for a total of 96 cores and 1.5 TB of memory. Its primary function is to host enterprise shared virtual servers, most of which run a Microsoft operating system. These virtual servers provide functions that include database, web, intranet and hosted application servers.
The department also manages a second VMware virtual server farm used specifically for scientific computing. The scientific farm comprises four Cisco UCS B-Series ESXi hosts for a total of 64 cores and 1.0 TB of memory. It hosts a variety of virtual servers running various Linux distributions and Microsoft operating systems; these are used for scientific software development, testing, web server and database hosting, public and internal services, internal user desktops, and application servers.
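As a quick sanity check, the per-host averages implied by the farm figures can be computed directly (taking 1 TB as 1024 GB; the Cisco UCS figures are used for the scientific farm):

```python
# Per-host averages for the two VMware farms, from the figures in the text:
# enterprise farm: 6 hosts, 96 cores, 1.5 TB; scientific farm (Cisco UCS):
# 4 hosts, 64 cores, 1.0 TB.
enterprise_hosts, enterprise_cores, enterprise_mem_gb = 6, 96, 1536
scientific_hosts, scientific_cores, scientific_mem_gb = 4, 64, 1024

print(enterprise_cores // enterprise_hosts,
      enterprise_mem_gb // enterprise_hosts)  # 16 cores, 256 GB per host
print(scientific_cores // scientific_hosts,
      scientific_mem_gb // scientific_hosts)  # 16 cores, 256 GB per host
```

Both farms work out to the same 16-core, 256 GB host profile, consistent with a uniform Cisco UCS blade configuration.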
Data Center and Enterprise Infrastructure
Built in 2006, the Administration Building houses the data center for computing services and activities. The data center contains 2,025 square feet of usable floor space and currently holds 20 data cabinets of 42U each, for a total of 840U of rack space. It also contains redundant cooling and power systems, and critical systems are connected to a 160 kVA UPS to ride through extended power outages.
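The quoted rack capacity follows directly from the cabinet count:

```python
# Total rack units: 20 cabinets at 42U each.
cabinets = 20
units_per_cabinet = 42
total_units = cabinets * units_per_cabinet
print(total_units)  # 840
```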
The department manages a total of 216 TB of highly available usable storage. The storage environment is built upon two NetApp 3240 storage controllers (a 7-Mode cluster) providing 143 TB of usable storage. Additionally, one NetApp 3240 maintained at a remote location for backup and disaster recovery provides 73 TB of usable storage. Core networking is built upon one Cisco Catalyst 6509-E platform providing 192 1 GbE and 32 10 GbE ports.
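The usable-storage total is the sum of the on-site pair and the remote disaster-recovery controller:

```python
# Total usable storage: primary NetApp controllers plus the remote DR system.
primary_tb = 143  # two NetApp 3240 controllers on-site
dr_tb = 73        # one NetApp 3240 at the remote DR location
total_tb = primary_tb + dr_tb
print(total_tb)  # 216
```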