AwardNumber | Program Area | Title | StartDate | PrincipalInvestigator | State | Organization

2019080 | Infrastructure | CC* Networking Infrastructure: Research on the Hill | 7/15/2020 | Damian Clarke | AL | Alabama A&M University
Alabama Agricultural & Mechanical University's Research on the Hill Network (RotH Net) is a dedicated and frictionless network fabric serving as a Science DMZ for multidisciplinary research programs. The RotH Net provides a secure digital workspace where researchers can compute, store, and share research data on a 10G connection to Internet2. This network fabric facilitates transfers of large datasets from projects serving multiple research labs. Data collected from Unmanned Aerial Systems (UAS) allow scientists and farmers to evaluate crops at scale while simultaneously creating crop maps and large, consistent datasets that were previously unimaginable. Normalized Difference Vegetation Index images will prescribe fertilizer applications, estimate yields, and identify weeds. Other facilities dedicated to data-intensive science are enabled by the project, including research into Ion Accelerator Technology and ion beam modification of materials.

The RotH Net includes a full set of Science DMZ components serving to re-architect the campus border to support high-performance science data flows across campus and external to the campus network. One additional component is the Open Storage Network (OSN) pod, a set of deployable units of distributed storage with minimal administrative overhead and petabyte-sized storage capable of high-throughput, high-speed, large-volume data transfers. The OSN provides a cyberinfrastructure (CI) service to address specific data storage, transfer, sharing, and access challenges. The project also supports a fiber extension connecting campus research facilities and paving the way for flexible growth in high-performance research and education network connectivity.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
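The NDVI mentioned above is a simple band ratio, NDVI = (NIR - Red) / (NIR + Red), computed pixel by pixel from near-infrared and red reflectance. A minimal Python sketch (illustrative only; the arrays and values are hypothetical, not project data) shows the computation for co-registered UAS bands:

```python
# Illustrative NDVI computation (hypothetical inputs, not project code).
# NDVI ranges from -1 to 1; higher values indicate denser, healthier vegetation.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Return the NDVI map for co-registered NIR and red reflectance bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero over water/shadow pixels where both bands ~ 0.
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom > 0)

# A healthy-canopy pixel (NIR=0.50, Red=0.08) yields NDVI ~ 0.72.
print(ndvi(np.array([0.50]), np.array([0.08])))
```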

2018846 | Compute | CC* Compute: Accelerating Advances in Science and Engineering at The University of Alabama Through HPC Infrastructure | 7/1/2020 | Jeffrey Carver | AL | University of Alabama Tuscaloosa
This project augments the University of Alabama’s (UA) computing infrastructure to support the increase in computational science and engineering needed to study diverse, interesting problems including: reliable computational chemistry predictions, properties of engineered materials, applied mathematics for image analysis and signal processing, bioinformatics of complex cellular systems, and hydrological simulations. The new high-performance computing (HPC) infrastructure removes bottlenecks in local UA resources caused by an increasing number of users and increasingly larger computational solutions to more realistic problems. This infrastructure enables UA researchers to make scientific and engineering advances not possible with previous UA HPC machines. To provide broader impacts, the infrastructure is also available to regional institutions of higher education, including HBCUs and private institutions that lack adequate HPC access, and to similar institutions across the nation through the Open Science Grid.

This project augments the UA HPC system by (1) doubling computing capacity, (2) adding two large-memory nodes for large-scale data analysis and mining, (3) adding ten GPUs for massively data-parallel computations, (4) significantly increasing storage node bandwidth, and (5) shifting from a “condo” model to a general-use, shared model. This infrastructure provides compelling new research and educational opportunities for students, staff, and faculty at UA and other regional and national institutions (including HBCUs and private institutions). In terms of broader impacts, the infrastructure allows undergraduate students to perform state-of-the-art computational research, thereby attracting more diverse STEM participants. The infrastructure provides a platform for educating the next generation of computational and computer scientists in cutting-edge HPC techniques.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2126108 | CIRA | CC* CIRA: Shared Arkansas Research Plan for Community Cyber Infrastructure (SHARP CCI) | 9/1/2021 | Donald DuRousseau | AR | University of Arkansas
This award is funded in whole or in part under the American Rescue Plan Act of 2021 (Public Law 117-2).

SHARP CCI creates a statewide research cyber infrastructure (RCI) Plan for Arkansas that is focused on the eight degree-granting institutions performing science and engineering research. Participating institutions include: Arkansas State University (ASU); Southern Arkansas University (SAU); University of Arkansas (UA) Fayetteville (UAF); UA Division of Agriculture (UADA); UA Little Rock (UALR); UA Medical Sciences (UAMS); UA Pine Bluff (UAPB); and University of Central Arkansas (UCA). Each school has a growing demand for federated access to high-speed resources, managed services and technical training; however, a coordinated plan for providing these capabilities does not currently exist. Most schools in Arkansas lack the connectivity, budget and staffing to fully utilize these statewide resources, and each school is unique in technical capability and expertise. SHARP CCI assembles the senior administrators and research leaders at each school to document the environments, capabilities, technology needs and resource gaps and create an equitable RCI Plan to inform decision makers in the state. This plan will also provide a necessary resource for each school to apply for additional NSF support through solicitations that expand school infrastructure and training capabilities.

A statewide RCI Plan is requisite for organizing and expanding research collaborations, and it fosters an economy of scale for building compliant facilities and systems and establishing a managed service environment for schools with limited resources. SHARP CCI includes establishing a comprehensive data science training program, Arkansas Cyber Team (ACT), to provide engineers, educators, researchers and students access to technical expertise in a broad range of scientific domains. The project team includes Arkansas Research and Education Optical Network, Great Plains Network, Open Science Grid and NSF's Engagement and Performance Operations Center to provide engineering and training expertise, managed service experience and help in implementing a statewide RCI Plan for Arkansas.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2126308 | CIRA | CC* CIRA: Southwest Higher Education Knowledge and Technology Exchange (SHEKATE) | 10/1/2021 | Lev Gonick | AZ | Arizona State University
The COVID-19 pandemic exposed the deficiencies of long-haul, middle-mile, and last-mile broadband connectivity in the West. Long distances, sparse populations, and isolated tribal communities characterized new challenges for the higher education community. An influx of new federal funding and state allocations from the CARES Act released new opportunities to plan, build, and deliver a new foundation for cyberinfrastructure. Whereas planning and collaboration were needed and required in a pre-pandemic environment, the demand for regional-scale thinking and planning by leadership organizations is now an essential element in rebuilding the mountain west. As a regional, multi-state, multi-institution collaboration, SHEKATE is prepared to engage, plan, and implement the new cyberinfrastructure that supports and improves science in a place where connectivity is expensive and isolated from researchers, and where regional CIOs and state research and networking collaborations work collectively to solve regional-scale research problems.

SHEKATE’s collaborative planning events connect researchers from the region’s research universities, helping them cross state and international boundaries and work with federal and state broadband initiatives to establish a new research core across the region. Led by Arizona State University, the Sun Corridor Network, and the Utah Education and Telehealth Network as planning leads, SHEKATE conference events focus on researcher-enabled urban biometrics, artificial intelligence, broadband in the West, cyberinfrastructure sustainability, and the underlying organizational structures that inform a new approach to building CI in one of the most complex urban and rural regions.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2126303 | Compute | CC* Compute: The Arizona Federated Open Research Computing Enclave (AFORCE), an Advanced Computing Platform for Science, Engineering, and Health | 10/1/2021 | Douglas Jennewein | AZ | Arizona State University
This award is funded in whole or in part under the American Rescue Plan Act of 2021 (Public Law 117-2).

Drawing upon its mission to enable access to discovery and scholarship in science, engineering, and health, Arizona State University (ASU) is deploying the Arizona Federated Open Research Computing Enclave (AFORCE). AFORCE provides cutting-edge technology to support research and education while advancing the knowledge and understanding of deploying 21st-century cyberinfrastructure in a large public research university. Specifically, this state-of-the-art system is supporting multidisciplinary research and education in science, technology, engineering, and mathematics domains including computational genomics, molecular dynamics, computational materials science, robotics, and imaging.

To increase computational capacity, AFORCE comprises a pool of multiple graphics processing unit (GPU) accelerated computing nodes accessible to extramural researchers through federated authentication provided via InCommon. Moreover, the AFORCE system itself is part of the global Open Science Grid computing pool. ASU also promotes and enables the use of Open Science Grid by incorporating its capabilities into regular training sessions and faculty engagement events. Finally, AFORCE is also configured to provide cloud-burst capabilities, allowing compute jobs to be scheduled on commercial clouds. Early-career faculty will be specifically targeted for workshops and tutorials, helping encourage their participation in the AFORCE system.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2126301 | Planning | CC* Planning: Precision Agriculture and the Community | 9/15/2021 | Derek Masseth | AZ | Arizona State University
Water shortages in California, Arizona, and Nevada will impact crop yield and agriculture in the region for years to come. By further advancing farming innovations in Yuma, Arizona, along the Colorado River, the prospects of sustainable growth, improved research potential, and new infrastructure collaborations benefit STEM programs, industry, and the community. Yuma is Arizona’s Salad Bowl. The continued introduction of specialized networking and collaborations with higher education and K-12 institutions in the area improves the opportunity to plan and implement broadband and networking aligned with the science of agriculture. The use of technology to improve crop yields, or precision agriculture, is a required step forward in the deployment of cyberinfrastructure in rural communities.

The planning effort to improve networking, connect science-oriented programs, and experiment with homework-gap services is led by Sun Corridor Network in collaboration with Yuma’s agricultural research community, community college service and agricultural programs, Yuma’s largest elementary and high school programs, the University of Arizona’s experimental farms and regional academic organization, and the desert agricultural research organization. The Yuma collaboration planning activities will improve community-based cyberinfrastructure models, inform the next generation of agricultural infrastructure, and bridge science, engineering, and technology programs by addressing the complex needs of the business and science of agriculture. The Yuma Collaboration introduces broadband and infrastructure planning in support of precision agriculture during a period of extreme drought and the impacts of water shortages.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2126291 | Integration | CC* Integration-Large: (BLUE) Software-Defined CyberInfrastructure to enable data-driven smart campus applications | 9/1/2021 | Dijiang Huang | AZ | Arizona State University
This award is funded in whole or in part under the American Rescue Plan Act of 2021 (Public Law 117-2).

A new campus infrastructure called BLUE is developed to enable efficient and secure data-driven research and application development based on distributed IoT devices. BLUE addresses three main challenges for supporting innovative smart campus applications based on distributed IoT devices: (a) establishing a programmable campus infrastructure to support distributed and ad hoc IoT services, (b) providing strong security and privacy protection of IoT data, and (c) constructing an edge-cloud infrastructure to provide computing, networking, and storage resources to support smart-campus applications.

BLUE is a new software-defined infrastructure to support IoT-based data processing, analysis, and distribution over distributed IoT data sources. BLUE also supports a set of tangible metrics, such as network QoS metrics, location, and resource consumption, to effectively enable researchers to validate their research models. Moreover, BLUE takes privacy and security protection as a fundamental enabling technique by pushing computation towards the edge computing and networking infrastructure. Research applications built on this project share a common requirement for low-latency transfer of ever-larger data sets with collaborators across multiple geographic sites. This project will contribute to a national paradigm of campus-level dynamic network services that enables leading-edge network and domain-specific research.

BLUE can benefit the full range of campus scholarly activities, including research activities funded by NSF and other federal agencies. The outcomes of this project will be shared with the public under an open-source license agreement. In addition, undergraduate and graduate student researchers will receive diverse STEM skills training, including networking research, big data analysis, and domain-specific research.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2018886 | Infrastructure | CC* Networking Infrastructure: Science DMZ for Data-enabled Science, Engineering, and Health | 7/15/2020 | Douglas Jennewein | AZ | Arizona State University
Drawing upon its mission to enable access to discovery and scholarship in science, engineering, and health, Arizona State University is deploying an advanced research network employing the Science DMZ architecture. While advancing knowledge of deploying 21st-century cyberinfrastructure in a large public research university, this project also advances how network cyberinfrastructure supports research and education in science, engineering, and health. This effort is being carried out by a partnership of campus cyberinfrastructure experts to: 1) improve campus network connectivity to enable high-speed data movement for STEM research and education activities, by enabling friction-free access to wide area networks; 2) ensure secure and performant data movement for STEM research and education activities; and 3) increase STEM research and education productivity.

The project incorporates national best practices in network architecture, security, and federated authentication and identity management. Replacing existing edge network equipment and installing an optimized, tuned Data Transfer Node provides a friction-free wide area network path and streamlined research data movement. A strict router access control list and an Intrusion Detection System provide security within the Science DMZ, and end-to-end network performance measurement via perfSONAR guards against issues such as packet loss. Science data flows are supported by a process incorporating user engagement, iterative technical improvements, training, documentation, and follow-up. Network design and implementation are guided by an external advisory board consisting of experts from the Energy Sciences Network, Internet2, the Engagement and Performance Operations Center (EPOC), The Quilt, and Arizona's “Sun Corridor” research network.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
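perfSONAR automates exactly this kind of loss and throughput monitoring between measurement hosts. As a minimal stand-in for one such check (plain ping rather than perfSONAR's own measurement tools; the hostname is hypothetical), a Python sketch that flags packet loss toward a data transfer node might look like this:

```python
# Minimal stand-in for a perfSONAR-style loss check: ping a data transfer
# node and parse the loss percentage. "dtn.example.edu" is hypothetical.
import re
import subprocess

def packet_loss_percent(host: str, count: int = 20) -> float:
    """Ping `host` and parse the summary line for the packet-loss percentage."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True,
    ).stdout
    match = re.search(r"([\d.]+)% packet loss", out)
    if match is None:
        raise RuntimeError("could not parse ping output")
    return float(match.group(1))

loss = packet_loss_percent("dtn.example.edu")
print(f"measured loss: {loss:.1f}%  (Science DMZ flows want this at 0)")
```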

1925632 | Regional | CC* Regional: Sun Corridor Network - Arizona Community College Research Expansion | 8/1/2019 | Steven Burrell | AZ | Northern Arizona University
Northern Arizona University (NAU) is a founding member of the Sun Corridor Network (SCN) along with Arizona State University (ASU) and the University of Arizona (UA). SCN delivers research and education networking to Arizona's research universities through high-speed connections and Internet2. Research and specialized network connectivity at Arizona community colleges are limited and generally unavailable to academic or specialized technical programs. This project connects one of the nation's largest community college systems and the northern NAU collaborator to SCN. Network expansion to Arizona's community colleges enables research and education connectivity and support for the students and faculty of the Maricopa County Community College District's Estrella Mountain Community College (Goodyear, AZ), Chandler-Gilbert Community College (Chandler, AZ), and Phoenix College (Phoenix, AZ), and of Coconino Community College (Flagstaff, AZ). The network expansion advances the use of science-oriented workflows, high performance computing in undergraduate research, instrument sharing, STEM education, and homework-gap connectivity, and introduces wide area and campus networking capacities that align with cybersecurity academic requirements.

NAU and SCN will improve campus network performance, increasing external connectivity to each campus by connecting them to SCN as a regional aggregator. This leverages a strong existing regional relationship with NAU, a leader in rural and online programs and services. The proposal emphasizes outreach to determine needs and requirements, followed by design, workshops, and science network implementation. By building new collaborations and by expanding connectivity, NAU and SCN improve undergraduate science and technical instruction for over 200,000 students in the Phoenix metropolitan area and northern Arizona.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2127548 | CoE | CI CoE: CI Compass: An NSF Cyberinfrastructure (CI) Center of Excellence for Navigating the Major Facilities Data Lifecycle | 7/15/2021 | Ewa Deelman | CA | University of Southern California
Innovative and robust Cyberinfrastructure (CI) is critical to the science missions of the NSF Major Facilities (MFs), which are at the forefront of science and engineering innovations, enabling pathbreaking discoveries across a broad spectrum of scientific areas. The MFs serve scientists, researchers and the public at large by capturing, curating, and serving data from a variety of scientific instruments (from telescopes to sensors). The amount of data collected and disseminated by the MFs is continuously growing in complexity and size and new software solutions are being developed at an increasing pace. MFs do not always have all the expertise, human resources, or budget to take advantage of the new capabilities or to solve every technological issue themselves. The proposed NSF Cyberinfrastructure Center of Excellence, CI Compass, brings together experts from multiple disciplines, with a common passion for scientific CI, into a problem-solving team that curates the best of what the community knows; shares expertise and experiences; connects communities in response to emerging challenges; and builds on and innovates within the emerging technology landscape. By supporting MFs to enhance and evolve the underlying CI, the proposed CI Compass will amplify the largest of NSF’s science investments, and have a transformative, broad societal impact on a multitude of MF science and engineering areas and the community of scientists, engineers, and educators MFs serve. CI Compass will also impact the broader NSF CI ecosystem through dissemination of CI Compass outcomes, which can be adapted and adopted by other large-scale CI projects and thus empower them to more efficiently serve their user communities.

The goal of the proposed CI Compass is to enhance the CI underlying the MF data lifecycle (DLC) that represents the transformation of raw data captured by state-of-the-art scientific MF instruments into interoperable and integration-ready data products that can be visualized, disseminated, and converted into insights and knowledge. CI Compass will engage with MFs and contribute knowledge and expertise to the MF DLC CI by offering a collection of services that includes evaluating CI plans, helping design new architectures and solutions, developing proofs of concept, and assessing applicability and performance of existing CI solutions. CI Compass will also enable knowledge-sharing across MFs and the CI community, by brokering connections between MF CI professionals, facilitating topical working groups, and organizing community meetings. CI Compass will also disseminate the best practices and lessons learned via online channels, publications, and community events.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2126319 | Regional | CC* Regional: A Purpose-built SoCal Science DMZ for Catalyzing Scientific Research Collaborations | 10/1/2021 | Carl Kesselman | CA | University of Southern California
The Los Nettos Regional Network is a long-standing regional research and education (R&E) network with a history of supporting science and engineering research for its more than 30 members and associates in the greater Los Angeles area. This project builds a friction-free regional Science DMZ network across multiple Southern California college campuses, catalyzing collaborative research capabilities at the institutions. The project establishes the network infrastructure and software necessary to facilitate high speed transfers of large-scale research data for regional and national scientific collaborations. The campuses included in the network are Loyola Marymount University, Occidental College, and The Claremont Colleges consortium, which consists of Claremont Graduate University, Claremont McKenna College, Harvey Mudd College, Keck Graduate Institute, Pitzer College, Pomona College, and Scripps College.

This purpose-built science network is specially customized for each institution’s unique needs and follows the well-known Science DMZ guidelines established by ESnet. The new network interconnects with state, national, and international networks, such as CENIC’s California Research and Education Network (CalREN), Internet2, and Pacific Wave. Many projects in various science domains benefit from the significant network capacity increase that this project supports. Coordinated activities at the regional level, including technical training for administrators and researchers at each campus, ensure uniform standards are maintained. This scalable R&E network can be expanded in the future for researchers and students at other smaller regional institutions (e.g., Charles R. Drew University of Medicine and Science, ArtCenter College of Design) as their need for collaboration widens.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2019220 | Compute | CC* Compute: A Customizable, Reproducible, and Secure Cloud Infrastructure as a Service for Scientific Research in Southern California | 9/1/2020 | Carl Kesselman | CA | University of Southern California
This project creates a hybrid cloud infrastructure as a scientific computing gateway that promotes and supports inter-disciplinary, multi-institutional research in science, engineering, biomedicine, and the social sciences. The hybrid cloud platform also promotes regional and national research collaboration, as a portion of the resources is integrated into the Open Science Grid (OSG). Many institutes with multi-institutional research projects headquartered at the University of Southern California (USC), along with their regional and national collaborators, benefit from use of the OSG. It also extends the impact of their research outcomes and the projects themselves, as the system offers various ways to share research outputs and knowledge with external collaborators. The planned support for regional universities and integration with OSG increases opportunities to serve a broader community.

A broad research community is supported by this system through access to public and private cloud services as well as local high performance computing (HPC) and data resources. The design of the hybrid cloud system facilitates the creation of customizable, virtualized platforms and reproducible, container-based application services that enable multi-dimensional computing and data solutions. Researchers are able to pick and choose from a standard service catalogue to build pre-defined virtual machines and containerized applications or, if necessary, create their own specialized environments. Along with built-in security, reproducible service modules, and the capability of creating and sharing customized environments, the hybrid cloud system bridges multi-disciplinary research domains and enhances the usability of advanced cyberinfrastructure for improved research productivity.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
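The "service catalogue" pattern described above can be sketched in a few lines: a researcher picks a named, pre-approved environment and the gateway launches it with a uniform, reproducible command. The Python sketch below is an assumption about the pattern, not the USC system's actual catalogue; the image names and mount paths are hypothetical:

```python
# Hypothetical service-catalogue sketch: map approved service names to
# container images and launch them reproducibly. Not the project's code.
import subprocess

CATALOG = {
    "jupyter-datasci": "docker.io/jupyter/datascience-notebook:latest",
    "rstudio":         "docker.io/rocker/rstudio:latest",
}

def launch(service: str, data_dir: str) -> None:
    """Run a catalogued container with the researcher's data mounted at /work."""
    image = CATALOG[service]  # KeyError means the service is not in the catalogue
    subprocess.run(
        ["docker", "run", "--rm", "-v", f"{data_dir}:/work", image],
        check=True,
    )

launch("jupyter-datasci", "/scratch/myproject")  # hypothetical data path
```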

2019194 | Compute | CC* Compute: Central Computing with Advanced Implementation at San Diego State University | 9/1/2020 | Jose Castillo | CA | San Diego State University Foundation
This project establishes a computing system that assists researchers at San Diego State University to accelerate their work through ongoing developments in computer processing hardware. Faculty and students will develop code, most of it currently running on traditional processing units, to take advantage of the enhanced computing power of graphics processing units and field-programmable gate arrays, which effectively allow the circuitry of the computer chip to be optimized for specific computing tasks. Applications hosted on this system include simulations of subterranean carbon dioxide sequestration, the development of new computer programs to identify disease-causing and other organisms in biologically diverse environments, and studies of nuclear structure, brain imaging, the motion of viruses, and engine design. One of the PIs directs the training of new users, and the infrastructure is incorporated into several courses available to undergraduates and graduate students. External users can access the resource through the Pacific Research Platform.

A new high-performance computing cluster offers advanced processor hardware to researchers at San Diego State University and throughout the region. The cluster is optimized to support transfer of existing software developed in-house to processors that promote distributed computing and/or electronic design automation. While accelerating projects and modernizing training in computational methods, this work also informs the University’s long-range cyberinfrastructure acquisition plans by quantifying the enhancement of research possible with advanced hardware. Twenty percent of the cluster capacity is reserved for extramural use through the Pacific Research Platform, and 5% for training and coursework to advance workforce development.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

1925717 | Compute | CC* Compute: A high-performance GPU cluster for accelerated research | 10/1/2019 | Kris Delaney | CA | University of California-Santa Barbara
The exponential growth of computing power and the emergence of high-performance computing paradigms have revolutionized all fields of science and engineering. Graphics processing unit (GPU) hardware, a type of highly parallel co-processor originally designed for generating 3D scenes in video games, has been increasingly leveraged over the last decade to dramatically accelerate scientific computing workloads. This project is for the acquisition of a GPU compute cluster consisting of 24 state-of-the-art NVIDIA Tesla V100 32 GB GPUs with fast inter-GPU communication. The resource is housed at the University of California, Santa Barbara (UCSB), and is accessible to researchers across campus and externally through a connection to the Pacific Research Platform/Nautilus federated systems network.

Initial research activities on the facility span the computational realm, including: a new type of multi-scale molecular simulation for predicting structural and thermodynamic properties of complex polymeric solution formulations; a materials characterization thrust involving crystal orientation indexing with real-time instrument feedback control; and the development of a scalable Neural Architecture Search framework for automatic generation of Deep Neural Network models for scientific applications of machine learning. The cluster provides a significant resource for educating the next generation of computational scientists in the latest GPU-computing techniques. Undergraduates, high-school students, and K-12 teachers will also have access via existing campus-sponsored programs: Research Experience for Teachers (RET), California Alliance for Minority Participation (CAMP), and the Center for Science and Engineering Partnerships (CSEP). These programs serve to provide training and increase the number of under-represented students in STEM fields.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

1925558 | Compute | CC* Compute: Triton Stratus | 7/15/2019 | Ronald Hawkins | CA | University of California-San Diego
The University of California, San Diego, deploys Triton Stratus, an addition to its existing high-performance computing system that allows campus researchers in many schools and departments to apply computational methods to their scientific research. Triton Stratus helps researchers advance their scientific aims by providing improved facilities for accessing emerging computing tools and scaling them to commercial cloud computing resources. Researchers, especially data scientists, are increasingly using tools such as Jupyter notebooks and RStudio to implement computational and data analysis functions and workflows. Jupyter notebooks provide a web-browser-based environment that permits the assembly of text, graphics, and computing functions into a living laboratory notebook, promoting research documentation and reproducibility. RStudio, another increasingly popular tool, provides a graphical environment for the R statistical language, widely used for data analysis. These tools are part of a general trend in research computing towards web-based and graphical interfaces, especially for attracting newer generations of researchers and data scientists. Triton Stratus runs JupyterHub and RStudio Server, addressing the interactive computing needs of scientists from various domain sciences.

The project delivers a productive research computing capability serving investigators in different scientific domains. Triton Stratus permits exploration of the emerging hybrid model of on-premise cluster computing resources coupled with commercial cloud computing services, and will help answer questions such as the right balance of on-premise and cloud resources, the best modes for scaling or bursting to cloud resources, and new models of interactive research computing.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
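The notebook workflow the abstract describes mixes narrative, computation, and inline graphics in one document. A minimal notebook-style cell (hypothetical simulated data; this illustrates any Jupyter deployment, not Triton Stratus specifically) looks like this:

```python
# A notebook-style cell: compute summary statistics and render a plot inline.
# The data are simulated; in practice a researcher would load their own.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=42)
samples = rng.normal(loc=0.0, scale=1.0, size=10_000)

print(f"mean={samples.mean():.3f}  std={samples.std():.3f}")

plt.hist(samples, bins=50)
plt.title("Sampling distribution (simulated)")
plt.xlabel("value")
plt.ylabel("count")
plt.show()  # in a notebook, the figure renders beneath the cell
```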

1826967 | NPEO | CC* NPEO: Toward the National Research Platform | 10/1/2018 | Larry Smarr | CA | University of California-San Diego
Academic researchers need a simple data sharing architecture with end-to-end 10-to-100 Gbps performance to enable virtual co-location of large amounts of data with computing. End-to-end is a difficult problem to solve in general because the networks between ends (campuses, data repositories, etc.) typically traverse multiple network management domains: campus, regional, and national. No one organization owns the responsibility for providing scientists with high-bandwidth disk-to-disk performance. Toward the National Research Platform (TNRP) addresses issues critical to scaling end-to-end data sharing. TNRP will instrument a large federation of heterogeneous "national-regional-state" networks (NRSNs) to greatly improve end-to-end network performance across the nation.

The goal of improving end-to-end network performance across the nation requires active participation of these distributed intermediate-level entities to reach out to their campuses. They are trusted conveners of their member institutions, contributing effectively to the "people networking" that is as necessary to the development of a full National Research Platform as are the stability, deployment, and performance of technology. TNRP's structure of collaborating NRSNs leads to engagement of a large set of science applications, identified by the participating NRSNs and the Open Science Grid.

TNRP is highly instrumented to directly measure performance. Visualizations of disk-to-disk performance with passive and active network monitoring show intra- and inter-NRSN end-to-end performance. Internet2, critical for interconnecting regional networks, will provide an instrumented dedicated virtual network instance for the interconnection of TNRP's NRSNs. Cybersecurity is a continuing concern; evaluations of advanced containerized orchestration, hardware crypto engines, and novel IPv6 strategies are part of the TNRP plan.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
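Active end-to-end measurement of the kind TNRP instruments can be illustrated with iperf3, a widely used throughput tester. This is a generic sketch, not TNRP's tooling; the server hostname is hypothetical and assumes `iperf3 -s` is running on the far end:

```python
# Generic active-measurement sketch using iperf3's JSON output (-J flag).
import json
import subprocess

def measure_gbps(server: str, seconds: int = 10) -> float:
    """Run an iperf3 client test and return achieved TCP throughput in Gbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"{measure_gbps('dtn.example.net'):.2f} Gbit/s")  # hypothetical host
```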

2129723 | WINS | Travel: WINS Travel Funds in Support of SCinet at SC21 | 10/1/2021 | Marla Meehl | CO | University Corporation For Atmospheric Res
This travel grant supports five volunteers to attend the 2021 Supercomputing Conference (SC21), scheduled November 15-21, 2021 in St. Louis, Missouri, as part of the Women in IT Networking at SC (WINS) program; they will participate in building and operating SCinet, the advanced computer network that supports the annual conference. The WINS program, started in 2015, enables talented early- to mid-career women from diverse backgrounds and regions of the U.S. research and education IT community to volunteer to participate in SCinet and experience the hands-on development and construction of the conference network.

The intellectual merit associated with the project is to continue the expansion and broadening of knowledge and training for women network engineers and information technology professionals who desire to build expertise and continue their education and careers in network and computer systems. The broader impact associated with the program is through the diverse representation of organizations and applicant backgrounds representing Minority Serving Institutions, Established Program to Stimulate Competitive Research (EPSCoR) states, and historically underrepresented minority groups. Through participation in SC and SCinet, the travel funds provide training, professional development, and career opportunities for women in a workforce that has historically been male- and non-minority-dominated.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2019163 | Integration | CC* Integration-Small: Error Free File Transfer for Big Science | 7/1/2020 | Craig Partridge | CO | Colorado State University
Scientific data transfers have gotten so large that previously rare transmission errors in the Internet are causing some scientific data transfers to be corrupted. The Internet's error checking mechanisms were designed at a time when a megabyte was a large file. Now files can contain terabytes. The old error checking mechanisms are in danger of being overwhelmed. This project seeks to find new error checking mechanisms for the Internet to safely move tomorrow's scientific data efficiently and without errors.

This project addresses two fundamental issues. First, the Internet's checksums and message digests are too small (32 bits) and probably are poorly tuned to today's error patterns. It is a little-known fact that checksums can (and typically should) be designed to reliably catch specific errors. A good checksum is designed to protect against the errors it will actually encounter. So the first step in this project is to collect information about the kinds of transmission errors currently happening in the Internet for a comprehensive study. Second, today's file transfer protocols, if they find a file has been corrupted in transit, simply discard the file and transfer it again. In a world in which the file is huge (tens of terabytes or even petabytes long), that is a tremendous waste. Rather, the file transfer protocol should seek to repair the corrupted parts of the file. As the project collects data about errors, it will also design a new file transfer protocol that can incrementally verify and repair files.

This project will improve the Internet's ability to support big data transfers, both for science and commerce, for decades to come. Users will be able to transfer big files with confidence that the data will be accurately and efficiently copied over the network. This work will further NSF's Blueprint for a National Cyberinfrastructure Ecosystem by ensuring a world in which networks work efficiently to deliver trustworthy copies of big data to anyone who needs it. Additional information on the project is available at: www.hipft.net

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
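The incremental verify-and-repair idea can be sketched as follows. This is an illustration of the general approach (fixed-size chunks, per-chunk digests, re-fetch only the mismatches), not the project's actual protocol design:

```python
# Sketch of incremental file verification: hash fixed-size chunks on both
# ends, then re-transfer only the chunks whose digests disagree, instead of
# discarding a multi-terabyte file and starting over.
import hashlib

CHUNK = 4 * 1024 * 1024  # 4 MiB per chunk (an arbitrary illustrative size)

def chunk_digests(path: str) -> list[bytes]:
    """Return one SHA-256 digest per fixed-size chunk of `path`."""
    digests = []
    with open(path, "rb") as f:
        while block := f.read(CHUNK):
            digests.append(hashlib.sha256(block).digest())
    return digests

def corrupted_chunks(local_path: str, remote_digests: list[bytes]) -> list[int]:
    """Indices of chunks that must be re-transferred.

    (A real protocol would also reconcile file lengths, since zip() stops
    at the shorter digest list.)
    """
    local = chunk_digests(local_path)
    return [i for i, (a, b) in enumerate(zip(local, remote_digests)) if a != b]

# A transfer tool would then re-request only these byte ranges:
#   for i in corrupted_chunks(...): refetch(offset=i * CHUNK, length=CHUNK)
```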

2019089 | Compute | CC* Compute: Accelerating Science and Education by Campus and Grid Computing | 7/1/2020 | Jan Mandel | CO | University of Colorado at Denver-Downtown Campus
High-performance computing resources are a fundamental need of modern research that unites almost all disciplines. Experience with these resources is also an important tool for preparing today's students for a wide range of careers. A group of researchers and educators at the University of Colorado Denver, in partnership with its Office of Information Technology, are building a state-of-the-art computing resource on its Downtown Campus. The new facility provides the first campus-wide high-performance computer system to support both research and teaching efforts. The computing cluster is integrated with the Open Science Grid (OSG), enabling access to additional computing resources from partner institutions, and sharing unused time with the wider community. A high-priority educational queue is dedicated to teaching and course-based research.

The resource will include 2048 AMD EPYC compute cores and 16 TB of memory distributed across 32 compute nodes; 2 high-memory nodes, each with 2 TB of memory and 64 cores; one NVIDIA Tesla V100 32 GB GPU; 1 PB (raw) of storage; and an InfiniBand interconnect. For data-intensive research and access by OSG jobs to distributed data, the cluster is configured with full end-to-end 10 Gb/s connectivity from each node to Internet2. A graphical Jupyter notebook interface increases accessibility.

The configuration addresses computing requirements based on a survey and an analysis of the needs of science and educational drivers in fields including earth and environmental sciences, biotechnology and genomics, computer science and engineering, applied mathematics, physics, and business. The resource will broaden participation in computational science and have a significant impact on the supported research.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

2018982 | Planning | CC* Planning: UCCS Computational Cluster for Science | 12/1/2020 | Yanyan Zhuang | CO | University of Colorado at Colorado Springs
Classified as a “high research activity” university in 2019, the University of Colorado Colorado Springs (UCCS) is a school in transition. While UCCS has good cyberinfrastructure for educational purposes, there is an urgent need and opportunity to develop a plan for computational support for scientific research. Maximizing a computational infrastructure’s impact depends on the scale and scope of usage. A major part of this project is the analysis of the current and planned scientific computational needs at UCCS and then mapping them to a design, which requires careful research to optimize. The investigators are using a multi-pronged approach to raise awareness, gather qualitative data, and collect custom data grounded within established motivational theory. This approach yields baseline data and assesses change over time with future infrastructural changes.

One aim of this planning project is the design and testing of a Cyber Aspirations and Readiness Assessment (CARA) instrument. The instrument serves as a theoretically driven tool for longitudinal assessment of initial aspirations, perceptions of barriers, and opportunities for success. This planning process helps STEM faculty across campus better understand how computing could accelerate their research, and how they could utilize different computational resources. Such a model, processes, and experiences not only benefit UCCS, but could be replicated at other institutions for their risk assessment, benchmarking, and other activities during their computation infrastructure planning. This cyberinfrastructure plan provides a much-needed capability required by a broad range of science, engineering, and education teams at UCCS and beyond.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

1925766 | Compute | CC* Compute: A Hybrid Cloud Environment for the Rocky Mountain Advanced Computing Consortium | 10/1/2019 | Shelley Knuth | CO | University of Colorado at Boulder
The Research Computing group at the University of Colorado Boulder provides a hybrid cloud infrastructure to support computing, data, and science gateway needs that are currently not met by the existing high-performance and high-throughput computing infrastructure. The system is also integrated into the Open Science Grid (OSG) to enable full utilization of the deployed on-premise hardware using otherwise idle compute capacity. The hybrid cloud system provides one integrated system view of the on-premise and public cloud to researchers so they can select the right resource to support their work. This new capability eases the burden of finding appropriate computational tools for their work, allowing researchers to focus on new discoveries and the advancement of their fields. Major emphasis areas of the research supported are in the geosciences, hydrological modeling, natural language processing, machine learning, and earth analytics. Members of the Rocky Mountain Advanced Computing Consortium (RMACC) have access to 20% of the provided cloud resources with a focus on smaller institutions.

This hybrid cloud provides: virtual machines either from a library or customized by the researcher; execution and orchestration of containers; serverless computing; and hosting for persistent science gateways and other science services. This project heavily leverages the NSF award #1659425 "CC* Cyber Team: Creating a Community of Regional Data and Workflow Cyberinfrastructure Facilitators" by reusing training materials on containerization and capitalizing on relationships established during focus groups and one-on-one consults.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

1925752 | Regional | CC* Regional: Integrating the Colorado Western Slope Research and Education (R&E) Community into the National R&E Infrastructure | 8/1/2019 | Marla Meehl | CO | University Corporation For Atmospheric Res
High-speed, reliable networking infrastructure is vital to an organization's ability to thrive in today's rapidly evolving scientific and technical environments. Such network connectivity is essential for success in business, science, communication, global collaboration, education, and outreach. Network connectivity in the Western Slope Rocky Mountain region of Colorado, however, remains limited. This project will bring together research and education (R&E) stakeholders to identify specific areas of need, work with public and private partners to identify infrastructure and other network resources, define the scope of the network to be constructed, and design a network. Stakeholders benefit from gaining knowledge of available resources for better network connectivity, building relationships with the regional and national network providers, technical training, and access to high-performance network connectivity. Project goals include ensuring underserved populations from rural areas of Colorado and Northern New Mexico have a diverse, well-educated workforce and that students are offered the highest-caliber educational opportunities and resources. Improved network access supports economic development and growth, increased diversity, and development and expansion of employment opportunities.

The project team identifies and documents network resources and provides a network design that addresses feasibility, costs, and deployment of the network. The team performs site visits to identify, document, and assess network paths, and one-time and recurring costs. Regular meetings with stakeholders and vendors are conducted. A workshop is planned to teach stakeholders about advanced network tools and technology.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

1925747 | Infrastructure | CC* Networking Infrastructure: Enabling Research with Mines' Underground Research Facility | 8/1/2019 | Matthew Ketterling | CO | Colorado School of Mines
This award provides funds to install extensive cyberinfrastructure and expand Wide-Area Network (WAN) speeds to the Colorado School of Mines' Edgar Underground Educational and Research Facility. Such cyberinfrastructure upgrades further establish the Edgar Underground Educational and Research Facility as the premier facility for innovative research in underground environments. Furthermore, this facility provides an environment for students, educators, researchers, and external partners to collaborate across a wide variety of scientific and engineering disciplines. As a result of the project, researchers can better leverage campus, public, and private resources efficiently and expand research opportunities in areas such as underground communications and networking; sub-surface studies of microbial genetics and environmental DNA; remote monitoring of geologic features and seismic, electric, or magnetic anomalies; and remote and autonomous control of terrestrial and aerial robots.

The Colorado School of Mines' Edgar Underground Educational and Research Facility is located 25 miles west of the main campus in Idaho Springs, CO. Due to the challenges of the surrounding terrain and the remote location of the Edgar Facility, the commodity ISP provides only one bonded pair of DSL lines, for a maximum download speed of 20 Mb/sec. One goal of this project is to establish a 10 Gb network connection to the Edgar Facility using a combination of leased fiber from the FrontRange GigaPOP and 10 Gb P2P wireless communication to connect the final half-mile that spans rugged terrain. A second goal is to create robust cyberinfrastructure within the Edgar Facility, allowing connections to instrumentation and providing redundant network pathways within the mine.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

1659425 | Team | CC* Team: Creating a Community of Regional Data and Workflow Cyberinfrastructure Facilitators | 4/1/2017 | Shelley Knuth | CO | University of Colorado at Boulder
A distributed team of data and workflow facilitators ("cyberteam") from the University of Colorado Boulder, the University of Utah, and Colorado State University support experimental and observational science (EOS) research as part of the Rocky Mountain Advanced Computing Consortium (RMACC). Advances in the number/diversity of data sets require enhanced capabilities to access, reuse, process, analyze, understand, curate, share, and preserve data. A critical aspect of these efforts is to provide expert support for efficient and effective workflows involving data generation, data analysis, visualization, and preservation. Typically, these activities have been the responsibilities of individual researchers, and as a result, data can be difficult to reuse by others. Likewise, computational and data generation workflows are often cobbled together, hard-coded, and not readily amenable for sharing.

These problems are addressed by the distributed RMACC cyberteam who will provide support for researchers in the region by assisting them with data and workflow reuse and management. The facilitators have complementary skills and expertise, and are fully integrated into campus and regional efforts. Their focus is on data curation and metadata, and data and compute workflows, including protected information. These facilitators, in collaboration with others in the region, provide and develop regional, shared resources to support data management for small research groups and under-resourced communities, including working with regional RMACC partners.

2018873 | Team | CC* Team: CAREERS: Cyberteam to Advance Research and Education in Eastern Regional Schools | 7/1/2020 | Andrew Sherman | CT | Yale University
Given the pivotal role of data and cyberinfrastructure (CI) in scientific discovery and teaching, it is essential that all small and mid-sized institutions be empowered to fully exploit them. Access to physical infrastructure is certainly required, but researchers also need access to “Research Computing Facilitators” (RCFs) possessing the mix of technical knowledge and interpersonal skills required to help them use CI resources effectively. This poses challenges for smaller institutions, since RCFs are in short supply and are difficult to recruit and retain. Moreover, institutions can often afford only one or two, making it challenging to support diverse scientific disciplines.

This project is developing a sustainable distributed approach to address these challenges in six Eastern states, facilitated by the Eastern Regional Network, a nascent, but growing collaboration among this project’s seven anchor institutions, other institutions in the Eastern US, and the area’s regional network providers. The project strategy has two principal legs: (1) expanding the RCF talent pipeline by engaging students at smaller institutions in nearly 70 project-based mentored experiential learning opportunities; and (2) developing a regional RCF pool providing CI facilitation across institutional and geographic boundaries.

Success of this project directly enhances scientific research at smaller institutions. The regional RCF pool enables researchers to access appropriate expertise without the costs and delays of building an institutional RCF team. The experiential training approach exposes a relatively large, diverse group of students to the RCF profession, yielding opportunities to encourage trainees, especially in underrepresented groups, to pursue RCF careers.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

1925716 | Compute | CC* Compute: Shared Computing Infrastructure for Large-scale Science Problems | 8/1/2019 | Richard Jones | CT | University of Connecticut
Advances in scientific instrumentation over the past decade have led to phenomenal growth in both the size and complexity of data across a wide variety of scientific domains. Researchers at the University of Connecticut are expanding the capabilities of the existing shared high-performance computing facility to meet this challenge by adding a 28-node cluster, each node with 40 Intel cores and 192 GB of memory, and 1000 TB of shared storage.

This balance of resources has been chosen to accommodate applications in the areas of particle physics, astrophysics, geophysics, and statistics in use by UConn researchers, with a view to meeting a rising need for data-intensive computing across all of science and engineering at the University. Furthermore, the cluster will be configured as a shared scientific computing resource on the Open Science Grid (OSG). OSG is a consortium of universities and national research facilities that use grid technology to aggregate their individual computer clusters into a single unified national compute infrastructure for science. In joining this cluster to the OSG, UConn researchers will see a much higher throughput than they would see running on local resources alone, granting them faster turn-around and access to bigger data than could be stored and processed locally. The cluster will enable the introduction of a new big data component within an existing course on scientific computing at the graduate level. It will also be used to produce visualizations of astronomical observations that will be used in K-12 outreach.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
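Sharing a cluster through OSG typically means accepting batch jobs via HTCondor. As a hedged illustration of what a user-side submission looks like (the executable and file names are hypothetical placeholders; the submit-description keywords are standard HTCondor):

```python
# Write a minimal HTCondor submit description and queue it with condor_submit.
# analyze.sh and its input file are hypothetical placeholders.
import pathlib
import subprocess

submit = """\
executable = analyze.sh
arguments  = event_data.root
output     = job.$(Process).out
error      = job.$(Process).err
log        = job.log
request_cpus   = 1
request_memory = 2GB
queue 100
"""

pathlib.Path("analysis.sub").write_text(submit)
# Queues 100 identical jobs; OSG matches them to idle cores across the grid.
subprocess.run(["condor_submit", "analysis.sub"], check=True)
```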

2126253 | Compute | CC* Compute: RAPTOR - Reconfigurable Advanced Platform for Transdisciplinary Open Research | 10/1/2021 | Jason Liu | FL | Florida International University
Building resilient, sustainable, livable, and environmentally safe dynamic systems (natural or human-built) requires on-demand computing resources, facilitating machine learning, data processing, and data analytics. These systems rely on computation to support simulation, modeling, and analyses to enable discovery, facilitate understanding, and make decisions. This project implements RAPTOR (Reconfigurable Advanced Platform for Transdisciplinary Open Research), a reconfigurable compute environment to address the dynamic and diverse computing needs of its science drivers: coastal resilience, sustainable environmental research, and systems biology.

The goal of RAPTOR is to increase research production by enhancing computing capabilities at Florida International University (FIU), both at the campus level and through participation in a resource-sharing federated distributed computing community. For that, RAPTOR integrates with the Chameleon Cloud Infrastructure for on-demand resource allocation and the Open Science Grid (OSG) for opportunistic preemptible resource-sharing allocation. The result is a platform capable of connecting with a rapidly deployable sampling system that can assimilate and transmit data in real time to facilitate actionable intelligence, drive adaptive environmental monitoring decisions, and respond as conditions change. The integration of knowledge, tools, and modes of computation proposed by RAPTOR builds upon the expertise of domain researchers, computer scientists, and IT practitioners towards a “smart cyberinfrastructure” to foster and develop transdisciplinary approaches. RAPTOR not only introduces new computing modalities at FIU to support its researchers, but also benefits a broader federated community of researchers and scholars sharing resources through OSG and Chameleon Cloud.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
29
2114911EAGER
Collaborative Research: EAGER: SaTC-EDU: Just-in-Time Artificial Intelligence-Driven Cyber Abuse Education in Social Networks
5/1/2021Bogdan CarbunarFLFlorida International University
Social networks encourage casual interactions and expose users to a variety of forms of cyber abuse, which are known to have negative socio-psychological effects. Previous work has shown that only a fraction of cyber abuse victims adopt self-protective behaviors. This may occur because some victims lack the background knowledge required to identify cyber abuse and to assert appropriate protective behaviors. While education can be effective in this regard, classroom delivery may fail to reproduce the diverse and dynamic context of cyber abuse, making it difficult for students to effectively translate knowledge into practice. This project seeks to increase the adoption of self-protective behaviors by integrating educational content into social networking interactions. Artificial intelligence (AI) techniques will be used to optimize the placement and timing of educational content. The project has the potential to improve the security and privacy of vulnerable social network users.<br/><br/>The project team will leverage their expertise in cybersecurity, AI, and education to investigate, develop and evaluate a new educational framework that provides just-in-time awareness training to identify and respond appropriately to cyber abuse when using social networks. First, the team will develop AI-based solutions to detect and classify cyber abuse based on abuse traces in the accounts of the users involved. The team will also leverage data and feedback collected from study participants to build a ground-truth dataset of instances and timelines of cyber abuse. Second, the team will design and implement targeted learning content and user interface nudges to deliver the knowledge required to make safer decisions in social network interactions. Third, the team will develop AI-based techniques to determine the ideal placement of learning content that improves user adoption of self-protective behaviors. Finally, the unique features of Facebook will be exploited to design evaluation experiments and educational outcomes-based techniques that capture user behaviors in the context of their regular Facebook interactions.<br/><br/>This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored, collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
30
2018754Integration
CC* Integration-Large: Q-Factor: A Framework to Enable Ultra High-Speed Data Transfer Optimization based on Real-Time Network State Information provided by Programmable Data Planes
10/1/2020Jeronimo BezerraFLFlorida International University
Communication networks are critical components of today's scientific workflows. Researchers require long-distance, ultra high-speed networks to transfer huge datasets from acquisition sites (such as the Vera C. Rubin Observatory, also known as the Large Synoptic Survey Telescope, in Chile) to processing sites, and to share measurements with scientists worldwide. However, while network bandwidth is continuously increasing, the majority of data transfers are unable to efficiently utilize the added capacity due to inherent limitations in the parameter settings of the network transport protocols and the lack of network state information at the end hosts. To address these challenges, Q-Factor plans to use temporal network state data to dynamically configure transport protocol parameters to reach higher network utilization and, as a result, to improve scientific workflows.<br/><br/>Q-Factor leverages programmable network devices with the In-band Network Telemetry (INT) application and delivers a software solution to process in-band measurements at the end hosts. Using Q-Factor on Data Transfer Nodes (DTNs), TCP/IP parameters will be configured according to temporal network characteristics such as round-trip time, network utilization, and network congestion. This tuning is expected to result in increased network utilization, shorter flow completion times, and significantly fewer packet drops caused by network buffer overflows. Additionally, Q-Factor is geared to save host memory by tailoring kernel parameters and buffers to optimal sizes.<br/><br/>Q-Factor targets a timely issue in communication networks: underutilization of ultra high-speed networks for science workflows. In order to keep scientific progress unconstrained, future science workflows need to support emerging data-intensive science experiments (e.g., the Vera Rubin Observatory, High Luminosity Large Hadron Collider) whose data generation grows significantly, reaching exabytes of traffic each year. Results of this project will also allow a better understanding of optimal buffer sizes of network devices for huge flows and of the interaction of various congestion control algorithms.<br/><br/>Experimental measurement data, network state information, network topology, software code, TCP tuning guidelines, and results will be available on the Q-Factor website https://q-factor.io, which will be maintained and indexed for at least three years after the completion of the project.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
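To make the tuning concrete: the classic rule of thumb is that a TCP socket buffer must hold at least one bandwidth-delay product (BDP) of in-flight data to keep a long, fast path full. The sketch below is illustrative only and is not Q-Factor code; the link speed and round-trip time are hypothetical values of the kind in-band telemetry would supply:

```python
# Illustrative bandwidth-delay product (BDP) calculation of the kind that
# motivates dynamic TCP tuning: the socket buffer must hold at least one
# BDP of unacknowledged data to keep an ultra high-speed path full.

def bdp_bytes(bandwidth_gbps: float, rtt_ms: float) -> int:
    """Bandwidth-delay product in bytes for a given path."""
    return int(bandwidth_gbps * 1e9 / 8 * rtt_ms / 1e3)

# Hypothetical path: 100 Gbps from Chile to a U.S. processing site, ~140 ms RTT.
bdp = bdp_bytes(bandwidth_gbps=100, rtt_ms=140)
print(f"BDP: {bdp / 2**30:.2f} GiB")  # ~1.63 GiB of in-flight data

# A telemetry-driven tuner could then emit kernel settings sized to the
# measured RTT instead of a static worst case, e.g.:
print(f"net.ipv4.tcp_rmem = 4096 87380 {bdp}")
print(f"net.ipv4.tcp_wmem = 4096 65536 {bdp}")
```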
31
2018811Regional
CC* Regional: Promoting Research and Education at Small Colleges in Alabama through Network Architecture Enhancements
7/15/2020Samuel D'AngeloGAGeorgia Tech Research Corporation
Advancements in data-intensive scientific instrumentation have greatly surpassed the capability of some campus networking infrastructures to effectively connect data-producing facilities to powerful computing and storage systems. Georgia Tech (GT), working in partnership with Southern Light Rail (SLR) and its high-speed research network, Southern Crossroads (SoX), has established this project to increase connectivity to smaller institutions and HBCUs in Alabama. As a result of this project, both Alabama Agricultural and Mechanical University (AAMU) and the University of South Alabama (USA) are able to transition their connectivity from a low-bandwidth ISP to a true high-speed R&E network to help increase their research efforts. Because these institutions have smaller IT networking staffs and budgets, and lack the expertise or funding to procure and manage multiple internet providers, this award allows GT to install pre-configured hardware appliances for connectivity, performance management, and large data transfers at SoX.<br/> <br/>The enhanced network at AAMU supports research on UAVs to manage agricultural data collection and analysis, astrophysics visualization, and virtual mentoring. Similarly, at the University of South Alabama, this project is leading to an increase in data transfer between the university and its industry partners, eliminating the need to exchange physical hard drives. This is expanding research in the field of multi-spectral imaging for medical and other life science applications, as well as the analysis of sensor data from airplanes.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
32
1925541Compute
CC* Compute: Integrating Georgia Tech into the Open Science Grid for Multi-Messenger Astrophysics
7/1/2019Mehmet BelginGAGeorgia Tech Research Corporation
Studying the Universe with all possible sources of information is the objective of multi-messenger astrophysics. Groundbreaking gravitational wave observations with LIGO, high-energy neutrino observations with IceCube, and very high-energy gamma-ray observations with VERITAS enable multi-messenger astrophysics. These and future instruments like CTA enable a complete view of the most violent and energetic phenomena in the Universe, such as the merger of black holes and neutron stars or the processes near the supermassive black holes at the center of large galaxies. All these efforts are computationally intensive, both in data analysis and in simulations. The Open Science Grid (OSG) infrastructure and services are an ideal platform to meet the computational requirements of multi-messenger astrophysics. This project acquires cyberinfrastructure resources to connect Georgia Tech to the OSG and integrate these resources into the computational efforts of the previously mentioned NSF-funded facilities.<br/><br/> The High Throughput Computing Cluster acquired under this award includes 12 compute nodes, each with 40-core Intel Skylake processors and 192 GB of memory, to support LIGO project requirements and others. IceCube and LIGO are supported with 4 GPU nodes, each equipped with 16-core Intel Skylake processors, 4 NVIDIA Tesla V100 GPUs, and 96 GB of memory. A dedicated OSG StashCache provides 432 TB of storage in direct support of projects such as CTA. The system is connected at 10 Gbps within the Georgia Tech data center, which in turn has 100+ Gbps connectivity to the Southern Crossroads R&E exchange point. This project's resources serve as a catalyst for Georgia Tech's long-term integration into the OSG, as a standard service offered to all researchers on campus. An important component of this proposal is a significant number of Graphics Processing Units, used to accelerate simulations. This project makes Atlanta the first StashCache provider in the Southeast, a service that enables fast access to distributed OSG datasets by regional institutions that participate in this national grid.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
33
1827211Integration
CC* Integration: End-to-End Software-Defined Cyberinfrastructure for Smart Agriculture and Transportation
10/1/2018Hongwei ZhangIAIowa State University
Imaging and other sensor-based understanding of plant behavior is becoming key to new discoveries in plant genotypes, leading to more productive and environment-friendly farming. Similarly, distributed sensing is seen as a key component of safe, efficient, and sustainable autonomous transportation systems. Existing research and education in agriculture and transportation systems are constrained by the lack of connectivity between field-deployed testbed equipment and computing infrastructure. To realize that connectivity, this project proposes to deploy CyNet wireless networks to connect experimental science testbeds to high-performance cloud computing infrastructures.<br/><br/>The CyNet project will:<br/>1) deploy Predictable, Reliable, Real-time, and high-Throughput (PRRT) wireless networking solutions using the standards-compliant, open-source Open Air Interface software framework and commodity Universal Software Radio Peripheral (USRP) hardware;<br/>2) integrate these wireless networks with software-defined networks to seamlessly connect outdoor cameras, sensors, and autonomous vehicles to high-performance cloud computing systems;<br/>3) implement an infrastructure virtualization system that partitions CyNet into programmable, isolated experiments; and<br/>4) create an infrastructure management system that performs admission and access control and establishes specified resource allocation policies.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
34
2019073Integration
CC* Integration-Large: SciStream: Architecture and Toolkit for Data Streaming between Federated Science Instruments
10/1/2020
Rajkumar Kettimuthu
ILUniversity of Chicago
Scientific instruments are capable of generating data at very high speeds. However, with traditional file-based data movement and analysis methods, data are often processed at a much lower speed, leading either to operating the instruments at a lower speed or to discarding a (significant) portion of the data without processing it. To address this issue, the SciStream project will develop software tools to stream data at very high speeds from scientific instruments to supercomputers at a distant location. SciStream hides the complexity of network connections from the end user and provides a high level of security for all the network connections.<br/><br/>The data producers (e.g., data acquisition applications on scientific instruments, simulations on supercomputers) and consumers (e.g., data analysis applications on high performance computing systems) may be in different security domains (and thus require bridging of those domains) and may, further, lack external network connectivity (and thus require traffic forwarding proxies). SciStream establishes the necessary bridging and end-to-end authentication between source and destination, while providing efficient memory-to-memory data streaming. Through the exploration of architectural and design choices and by addressing issues of control protocols and security, SciStream will advance the understanding of the challenges in supporting high-speed memory-to-memory data streaming between remote instruments in federated science environments.<br/><br/>SciStream will benefit all scientific applications that require memory-to-memory data streaming between distributed instruments. Recent trends suggest that this is an important and growing requirement for many scientific applications. SciStream will help significantly reduce the time to solution for these applications, resulting in improved scientific productivity and thus far-reaching benefits for society. Key design choices such as application-agnostic streaming and support for best-effort streaming will make SciStream appealing to a broader science community. SciStream will engage with domain scientists, campus computing centers, and a scientific user facility to reach a wider audience. Through on-campus programs at the University of Chicago, SciStream will train under-represented students in networking. Additional details on SciStream can be found here: https://scistream.github.io/<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
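The traffic-forwarding idea mentioned above can be illustrated with a toy relay: a proxy at the facility border accepts a byte stream from a producer that lacks external connectivity and forwards it to a remote consumer. This is a minimal sketch under that assumption, not the SciStream implementation; the addresses are hypothetical:

```python
# Toy TCP relay illustrating the traffic-forwarding proxy idea: a process at
# the facility border accepts a connection from an internal data producer and
# forwards the byte stream to an external consumer. Minimal sketch only;
# host names and ports are hypothetical.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 9000)            # where the internal producer connects
UPSTREAM_ADDR = ("hpc.example.edu", 9001)  # hypothetical external consumer

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until EOF."""
    while chunk := src.recv(1 << 20):  # 1 MiB reads for high-throughput streaming
        dst.sendall(chunk)
    dst.shutdown(socket.SHUT_WR)

def main() -> None:
    with socket.create_server(LISTEN_ADDR) as server:
        conn, peer = server.accept()
        print(f"producer connected from {peer}")
        upstream = socket.create_connection(UPSTREAM_ADDR)
        # Relay both directions so control messages can flow back.
        threading.Thread(target=pump, args=(upstream, conn), daemon=True).start()
        pump(conn, upstream)

if __name__ == "__main__":
    main()
```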
35
2018919Infrastructure
CC* Networking Infrastructure: Enhancing SIUC Campus Cyberinfrastructure to Accelerate Data-Driven Research and Education
7/1/2020Ning YangIL
Southern Illinois University at Carbondale
This project implements a high-performance research and education network architecture over the existing campus cyberinfrastructure (CI) at Southern Illinois University Carbondale. The project solves the performance bottleneck of the current campus core network and establishes new infrastructure enabling high-throughput data transfer with peer institutions and on-campus private wireless broadband service. The updated CI enables the efficient, high-throughput data movement needed by fast-growing data-intensive research and education applications across broad academic domains on campus, including Chemistry and Biochemistry, Physics, Engineering, Plant Biology, Statistics, and Computer Science. The project builds upon a strong partnership among campus information technology experts, networking faculty, and domain researchers. It also provides an exceptional opportunity to train the CI workforce. <br/><br/>The project includes three tasks: 1) enhance the campus core network by adding redundancy across 9 core locations with new routers and 10 Gbps links; the updates add the capacity and redundancy needed to offer reliable and high-performance network service to the whole campus; 2) establish a Science DMZ and the associated network performance monitoring framework, which enables researchers to obtain and share data with external hosts at high speed; and 3) establish a private LTE network dedicated to research and education over the 3.65 GHz CBRS spectrum. Five data-intensive science and education applications are prototyped to demonstrate the new CI capabilities and workflows enabled by this project.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
36
2018551Compute
CC* Compute: SIUE Campus Cluster
7/1/2020Daniel ChaceIL
Southern Illinois University at Edwardsville
The goal of this project is to enhance high performance computing resources at Southern Illinois University Edwardsville to support a variety of research and education activities. Specifically, current campus facilities do not provide sufficient GPU or storage resources to enable work with large data sets. Additionally, participation by SIUE campus IT staff builds experience in supporting cyberinfrastructure resources, enabling better support for faculty, students, and collaborators.<br/><br/>This hardware directly supports ongoing projects and teaching activities that include: the use of machine learning models to predict complex phenotypic traits, the development of high-order accurate numerical methods for problems governed by hyperbolic partial differential equations, understanding the mechanisms behind quantum phenomena in chemical reactions, drug interactions, and cybersecurity. Additional activities include faculty outreach on incorporating high-performance computing into classroom exercises and curriculum.<br/><br/>Commitment to sharing this computing environment through the Open Science Grid (OSG) and other collaborative efforts increases awareness and adoption of other shared resources by the SIUE community.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
37
1827126Integration
CC* Integration: SENSELET: Sensory Network Infrastructure for Scientific Laboratory Environments
10/1/2018Klara NahrstedtIL
University of Illinois at Urbana-Champaign
Scientific instruments (e.g., scanning electron microscopes) are extensively used to discover new materials, develop novel semiconductor device fabrication recipes, and perform new biological processes. One way to speed up scientific discoveries is to provide scientists with advanced cyber-infrastructures to capture, transmit, store, share, analyze, and correlate as much environmental metadata (e.g., humidity, temperature) from scientific lab environments as possible. Current network infrastructure does not capture any external wireless sensory data around the instruments. The recent advent of low-cost, cloud-based sensors and the introduction of diverse wireless network technologies, low-cost mobile and personal devices, and Internet of Things (IoT) solutions provide a novel and viable path for automating sensory data collection in diverse science laboratory environments.<br/><br/>SENSELET, a SEnsory Network infrastructure for SciEntific Lab EnvironmenTs, has the goals of (a) deploying a diverse wireless and scalable sensory infrastructure close to scientific instruments, and (b) correlating and synchronizing sensory data with cloud-based instrument data and metadata in real-time and on-demand. The SENSELET infrastructure will provide additional measurements that will increase the accuracy of scientific results, and enable better environmental monitoring and control of labs for lab managers. The SENSELET infrastructure will include (a) wireless sensors such as humidity, temperature, and vibration sensors; (b) an edge computing device with multiple wireless communication interfaces residing in the lab; and (c) a private cloud computing service to store and correlate sensory data with instrument data in real-time or on-demand. SENSELET will provide trusted and real-time instrument data uploading, curation, search, and coordination services.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
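A minimal sketch of the sensory-data path described above, where an edge process timestamps environmental readings and ships them to a lab gateway as JSON for later correlation with instrument metadata. The gateway URL and the read_sensor() stub are hypothetical stand-ins, not actual SENSELET components:

```python
# Illustrative edge-side publisher: timestamp environmental readings and
# POST them to a lab gateway as JSON. The endpoint and the sensor stub are
# hypothetical; a real deployment would use actual sensor drivers.
import json
import random
import time
import urllib.request

GATEWAY_URL = "http://edge-gateway.lab.example:8080/readings"  # hypothetical

def read_sensor() -> dict:
    """Stand-in for a real humidity/temperature sensor driver."""
    return {"temperature_c": 21.0 + random.uniform(-0.5, 0.5),
            "humidity_pct": 45.0 + random.uniform(-2.0, 2.0)}

def publish(reading: dict) -> None:
    # Attach the timestamp later used to correlate with instrument metadata.
    record = {"sensor_id": "lab42-env-01", "ts": time.time(), **reading}
    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

while True:
    publish(read_sensor())
    time.sleep(10)  # sampling interval
```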
38
2018926Compute
CC* Compute: Private Campus Cloud for Data Analytics and Machine Learning
7/1/2020Preston SmithINPurdue University
New usage patterns of computing for research have emerged that rely on the availability of flexible, elastic, and highly specialized services. Uniform batch computing pools traditionally provided by high performance or high throughput computing environments have difficulty adapting to meet these requirements. A new approach that updates and evolves the research computing ecosystem is needed to respond to these needs. This new model, a “Community Cloud”, provides cost-effective, highly responsive, sustainable, and customizable cloud and container computing solutions for specific applications and domain science communities.<br/><br/>This project, through the acquisition of a new compute cluster, knits together central and lab-scale data, instrument, and compute resources into a cloud ecosystem for researchers who need capabilities beyond batch computing, and extends the research computing ecosystem to include cloud capabilities at the campus level. The Community Cloud is designed to: 1) devise a new approach to establishing a community cloud service using virtualization, containers, and infrastructure-as-code (IaC) techniques to create running infrastructure as an artifact; 2) support diverse science domains via an effective infrastructure that enables new kinds of discovery that cannot be well met through the use of traditional batch computing systems; 3) develop a reference business model for the evolution of campus “condo” cluster programs to sustainably operate a production community cloud; and 4) enable scalable and sustainable instructional use of the proposed community cloud for courses, real-world training, and workforce development for the campus and national research computing communities. The new compute cluster includes 8 application nodes (1024 cores), 2 GPU nodes (8 GPUs), 6 storage nodes (288 TB), and one bastion node, all interconnected through a 100Gb network and managed using Kubernetes.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
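The "running infrastructure as an artifact" idea can be sketched briefly: the desired environment is declared as data and rendered to a version-controlled manifest. The example below is a hypothetical illustration of the infrastructure-as-code pattern, not the project's actual tooling; it assumes PyYAML is installed:

```python
# Minimal infrastructure-as-code sketch: declare a researcher's environment
# as plain data, then render it as a reviewable, version-controlled artifact
# (here, a Kubernetes-style manifest). Names and sizes are hypothetical.
import yaml

def research_env(name: str, cpus: int, memory_gi: int, image: str) -> dict:
    """Declare one researcher's containerized environment as plain data."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "labels": {"community": "campus-cloud"}},
        "spec": {
            "containers": [{
                "name": "workspace",
                "image": image,
                "resources": {"requests": {"cpu": str(cpus),
                                           "memory": f"{memory_gi}Gi"}},
            }],
        },
    }

# The artifact can be committed to git and applied with `kubectl apply -f`.
manifest = research_env("genomics-lab", cpus=8, memory_gi=32,
                        image="registry.example/genomics:latest")
with open("genomics-lab.yaml", "w") as f:
    yaml.safe_dump(manifest, f, sort_keys=False)
```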
39
1925645Compute
CC* Compute: CAML - Accelerating Machine Learning via Campus and Grid
7/1/2019Kevin LannonINUniversity of Notre Dame
Machine learning, a subset of the field of artificial intelligence, has enabled impressive progress on a range of problems from identifying objects in images to translating French into English. At the University of Notre Dame, the Cyberinfrastructure to Accelerate Machine Learning (CAML) resource allows faculty across the university to leverage machine learning to address problems within their disciplines. CAML uses graphics processing units (GPUs) to provide a significant boost in the speed of training machine learning algorithms, enabling researchers to solve problems faster and to train more complex models capable of addressing more difficult problems. CAML benefits a wide range of research activities, from searching for new particles at the Large Hadron Collider to exploring new chemicals leading to medical breakthroughs. CAML also benefits the broader community as part of the Open Science Grid, serving researchers from universities and labs across the US. Furthermore, CAML is used for education and outreach involving students ranging from high school to graduate school, helping to train the next generation to tackle data science challenges in the public and private sectors.<br/><br/>CAML provides GPU resources for accelerating machine learning to the research community both locally at the University of Notre Dame and nationally through the Open Science Grid (OSG). CAML physically hosts GPU resources suitable for accelerating the training of models from standard deep learning libraries, and also enables on-demand cloud access to more experimental architectures such as FPGA resources. Configured for both interactive and batch access, CAML supports everything from small-scale exploration to large-scale discovery science.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
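As a generic illustration of where the GPU speedup comes from (not CAML-specific code), a standard deep learning training step runs unchanged on a GPU when one is available; this sketch assumes PyTorch is installed:

```python
# One training step in PyTorch: the same code runs on CPU or GPU, and the
# GPU's parallel architecture is what accelerates the tensor operations.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for real research data.
x = torch.randn(512, 128, device=device)
y = torch.randint(0, 10, (512,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()          # gradients computed on the GPU when present
optimizer.step()
print(f"device={device}, loss={loss.item():.4f}")
```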
40
1827184Infrastructure
CC* Networking Infrastructure: Integrating Big Data Instrumentation into Campus Cyberinfrastructure
7/1/2018Xiaohui Carol SongINPurdue University
Rapid advancements in data-intensive scientific instrumentation have greatly outpaced the capability of campus networking infrastructures to effectively connect big data-producing facilities to powerful computing and storage systems. This delays the analysis, use and dissemination of a huge amount of valuable scientific data for discovery and innovation in multiple science and engineering domains. Purdue's project bridges this gap by improving the campus high-speed network infrastructure between five big data instrument facilities and the centrally supported cyberinfrastructure (CI) consisting of supercomputers, large storage systems and network connections to outside the campus. The facilities support accelerated research in new materials, understanding brain functions and viruses, monitoring lower atmospheric weather, employing geospatial data in teaching and public engagement, and secure computing. The project builds upon a strong partnership between campus information technology experts and domain scientists. It also helps prepare the next generation of CI professionals by pairing student workers with the campus CI experts.<br/><br/>The project adds high-speed Science DMZ connections to the five selected big data facilities enabling high-volume, high-velocity, or interactive science data flows to both the campus research cyberinfrastructure and off campus facilities. Existing Cisco Nexus 7710 network switches at the distribution layer are augmented with new 40-Gigabit networking modules, increasing bandwidth to 20-80 Gigabits per second. This should yield a significant increase in research productivity, and more timely publication and dissemination of research data and results, through faster data transfer, easier access to the central computing infrastructure, and more effectively utilizing Purdue's 100-Gigabit WAN connections and research network peerings.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
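Back-of-the-envelope arithmetic shows why a distribution-layer bandwidth increase of this kind matters for big-data facilities; the dataset size and link efficiency below are hypothetical:

```python
# Wall-clock transfer time for a large instrument dataset at various link
# speeds; the 50 TB dataset and 80% efficiency factor are hypothetical.
def hours_to_transfer(dataset_tb: float, link_gbps: float,
                      efficiency: float = 0.8) -> float:
    """Hours to move a dataset at a given link speed and protocol efficiency."""
    seconds = dataset_tb * 8e12 / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

for gbps in (10, 20, 40, 80):
    print(f"{gbps:>3} Gbps: {hours_to_transfer(50, gbps):.1f} h for 50 TB")
# 10 Gbps: ~13.9 h; 80 Gbps: ~1.7 h
```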
41
1826994NPEO
CC* NPEO: The Research and Science Engagement Center: A Production Platform for Operations, Applied Training, Monitoring, and R&E Support
7/1/2018Jennifer SchopfINIndiana University
The scientific community has experienced an unprecedented shift in the way research is performed and how discoveries are made. Highly sophisticated experimental instruments are creating massive datasets for diverse scientific communities and hold the potential for new insights that will have long-lasting impacts on society. However, scientists cannot make effective use of this data if they are unable to move, store, and analyze it. This project establishes the Research and Science Engagement Center (ReSEC) as a collaborative focal point for operational expertise and analysis. This project will assist scientists in routinely, reliably, and robustly transferring their data. ReSEC will deliver end-to-end user support and network engineering solutions, and become a central community hub ready to provide personalized expertise and assistance on an ongoing basis.<br/><br/>ReSEC proposes four primary execution thrusts: 1) a Roadside Assistance center to reactively respond to immediate problems with science data transfers; 2) proactive network observation using tools such as perfSONAR and NetSage; 3) assistance with design & deployment of campus networking assets such as Science DMZs; and 4) training for campus network administrators. ReSEC will scale operations broadly by relying on Regional Network, Infrastructure and Science Community partners. ReSEC will deliver expertise and assistance on a sustainable, ongoing basis, with a particular emphasis on serving educational institutions with relatively limited local network administrative resources.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
42
2018766Compute
CC* Compute: GP-ARGO: The Great Plains Augmented Regional Gateway to the Open Science Grid
7/1/2020Daniel AndresenKSKansas State University
This project creates a regional distributed Open Science Grid (OSG) Gateway led by the Great Plains Network (GPN) to support computational and data-intensive research across the region through the development of specialized CI resources, workforce training, and cross-support methodologies and agreements. The primary goal of the GPN Augmented Regional Gateway to OSG (GP-ARGO) is to accelerate the adoption of, and experience with, advanced high-throughput computing and data resources by developing a model for enhanced distributed computational systems, including design, implementation, and training. This project multiplies the number of OSG sites in the GPN region by 8, adding at least 2,048 cores dedicated to OSG use and giving OSG potential access to over 42,000 additional existing cores at participating institutions. This project accomplishes the following key objectives: 1) improving campus awareness and adoption of advanced HTC-oriented computing and data resources for STEM research and education activities; 2) increasing the number and capabilities of campus research computing and data professionals; 3) increasing the capabilities of campus high-throughput computing cyberinfrastructure resources such as advanced computing systems, data caches, and networks; and 4) enabling the deployment and operation of research and education cyberinfrastructure to make science more efficient, trusted, and reproducible.<br/><br/>This project advances both the regional infrastructure and regional research efforts by increasing the number of local CI resources across the region. GP-ARGO provides a distinctive model for distributed support teams, particularly for institutions that lack a critical mass of personnel to support the key areas of OSG awareness, HTC resources, researcher support, and workforce development.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
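As a hedged illustration of the high-throughput computing model GP-ARGO promotes, a researcher typically describes many independent jobs in an HTCondor submit file and hands them to the pool; the executable and resource requests below are hypothetical:

```python
# Write an HTCondor submit description for 100 independent jobs, the basic
# unit of work in an OSG-style high-throughput pool. The analyze.sh script
# and resource numbers are hypothetical placeholders.
from pathlib import Path

submit = """\
executable     = analyze.sh
arguments      = $(Process)
request_cpus   = 1
request_memory = 2GB
output         = out/job.$(Process).out
error          = out/job.$(Process).err
log            = analysis.log
queue 100
"""

Path("analysis.sub").write_text(submit)
# Submitted to the pool with: condor_submit analysis.sub
print("wrote analysis.sub (100 independent jobs)")
```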
43
1925687Team
CC* Team: KyRC - A Kentucky Research Computing Team
7/15/2019Brian NicholsKY
University of Kentucky Research Foundation
High performance computing centers now serve a broad set of user interests, with researchers from all disciplines leveraging computation and/or big data in interesting and novel ways. This change is not confined to top-tier universities, but rather impacts researchers at all institutions of higher learning. In short, the need to support researchers who span all academic areas and require a diverse set of cyberinfrastructure (CI) has become a key challenge for research centers and IT organizations nationally. Today, scientific discovery is enabled through compute-intensive and data-intensive research that would not be possible without advanced CI. This project forms a collaborative, state-wide support team, the Kentucky Research Computing team (KyRC), that provides researchers with direct access to expert CI engineers who assist in applying, consulting on, and supporting a wide range of computational systems and research disciplines. <br/><br/>The goal of KyRC is to serve a broad range of institutions across the state of Kentucky in higher education, smart cities, and community education and training efforts. KyRC provides support to a variety of emerging research areas in need of CI support that heretofore were not part of the computational community. In addition to state-wide support, KyRC collaborates with leading regional and national CI entities to facilitate a community of expertise. By providing access, education, and exposure to advanced CI, KyRC enables research programs to recruit more students (e.g., groups underrepresented in STEM) to computational research while enhancing the training of undergraduates, graduate students, and postdocs in Kentucky higher education.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
44
2020446Compute
CC* Compute: Deep Bayou: Accelerating Scientific Discoveries with A GPU Cluster
7/1/2020Le YanLALouisiana State University
Modern science increasingly relies on high performance computing (HPC) and data analytics to make discoveries about our world. In recent years, graphics processing units (GPUs), specialty computer hardware originally developed for graphics applications with a unique, highly parallel architecture, have become a key enabling technology for both types of workloads. Through this project, Louisiana State University (LSU) expands its existing computing facilities with the addition of Deep Bayou, a GPU cluster consisting of 12 compute nodes and 26 NVIDIA GPU devices.<br/><br/>The initial research projects enabled by Deep Bayou include particle physics, gravitational wave source characterization with LIGO, ocean and ocean-atmosphere modeling, bioinformatics, infrastructure modeling for disaster management, material sciences, computational biology, and fundamental GPU architecture design. The Deep Bayou infrastructure and organization are also open to additional initiatives and projects through an application process. In addition to serving researchers across the LSU campus, the project partners with the Open Science Grid (OSG) to share the GPU resources with the research community across the nation. Deep Bayou will make a significant contribution to building the future HPC workforce, as many training and education activities plan to leverage its availability. Through existing state- and federally-sponsored programs such as the Louisiana Optical Network Infrastructure (LONI), Baton Rouge: Bringing Youth Technology, Education and Success (BRBYTES), and Research Experiences for Undergraduates (REU), K-12 students and undergraduates, many of whom are from underrepresented minority communities, can have access to courses and workshops facilitated on the cluster.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
45
2018936Compute
CC* Compute: Compute Cluster for Computational Sciences at Louisiana State University Health Sciences Center - New Orleans (LSUHSC-NO)
7/15/2020Christopher TaylorLA
Louisiana State University Health Sciences Center
This project at Louisiana State University Health Sciences Center New Orleans (LSUHSC-NO) constructs a scalable 26-node high-performance computing (HPC) cluster named Tigerfish. Tigerfish provides LSUHSC-NO researchers with uninterrupted, local access to HPC resources to enable and accelerate their biomedical research. The project provides training to new and existing HPC users to ease their transition to utilizing both Tigerfish and other nationally available HPC resources to meet the increasing computational needs of their research. Tigerfish supports research and education within multiple disciplines at LSUHSC-NO, including Bioinformatics, Biostatistics, Computational Biology, Genetics, Computational Genomics, Microbiology, Neurology, and Proteomics. <br/><br/>This project has broader impacts for the community, including training the next generation of researchers as part of the Tiger Scholars program, which exposes high-school students to the capabilities of HPC in research. Tigerfish is available as a computing resource supporting a high school, undergraduate, and postbaccalaureate summer research program that serves students coming to study at LSUHSC-NO from various parts of the country. Tigerfish provides an essential platform for outreach to regional historically black colleges and universities that lack their own HPC resources for intensive computing. The recently launched Bioinformatics and Genomics program accesses Tigerfish as an essential resource for its highly intensive computational needs, and the cluster also serves the newly established MS Biomedical Sciences - Bioinformatics Track program. Finally, Tigerfish resources are shared with the Open Science Grid for LSUHSC-NO to play its part in making HPC more accessible nationally.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
46
2019012Integration
CC* Integration-Large: N-DISE: NDN for Data Intensive Science Experiments
10/1/2020Edmund YehMANortheastern University
The project, N-DISE (Named Data Networking for Data Intensive Science Experiments), aims to accelerate the pace of breakthroughs and innovations in data-intensive science fields such as the Large Hadron Collider (LHC) high energy physics program and the BioGenome and human genome projects. Based on Named Data Networking (NDN), a data-centric architecture, N-DISE will deploy and commission a highly efficient and field-tested petascale data distribution, caching, access and analysis system serving major science programs.<br/><br/>The N-DISE project will design and develop high-throughput caching and forwarding methods, containerization techniques, hierarchical memory management subsystems, congestion control mechanisms, integrated with Field Programmable Gate Arrays (FPGA) acceleration subsystems, to produce a system capable of delivering LHC and genomic data over a wide area network at throughputs approaching 100 Gbits per second, while significantly decreasing download time. In addition, N-DISE will utilize NDN's built-in data security support to ensure data integrity and provenance tracing. N-DISE will leverage existing infrastructure and build an enhanced testbed with four additional high performance NDN data cache servers at participating institutions.<br/><br/>N-DISE will provide a field-tested working prototype of a multi-domain data distribution and access system offering fast access and low cost, as well as data integrity and provenance, to many data-intensive science and engineering fields. The project plans to hold annual workshops and hackathons to train students, postdocs, and other researchers on NDN architectural design, algorithms, as well as implementation methodologies for specific data-intensive science environments. The project will undertake initiatives for actively involving under-represented groups, and for educational outreach to K-12 students.<br/><br/>N-DISE will maintain a GitHub repository at https://github.com/neu-yehlab/n-dise. The repository will host up-to-date publications, code, data, results, and simulators. The repository will be maintained by the team for at least three years beyond the duration of the project.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
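The caching idea at the heart of NDN-style delivery can be sketched with a toy content store: data is requested by hierarchical name, and any node holding a copy can answer without an upstream fetch. A minimal LRU sketch, not N-DISE code; the content names are hypothetical:

```python
# Toy content store illustrating name-based caching: a consumer asks for a
# named data object, and a cache hit anywhere along the path avoids a fetch
# from the original producer. Minimal LRU sketch; names are hypothetical.
from collections import OrderedDict

class ContentStore:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict = OrderedDict()

    def insert(self, name: str, data: bytes) -> None:
        self.store[name] = data
        self.store.move_to_end(name)          # mark most recently used
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used

    def lookup(self, name: str):
        if name in self.store:
            self.store.move_to_end(name)
            return self.store[name]           # cache hit: no upstream fetch
        return None                           # miss: forward interest upstream

cs = ContentStore(capacity=2)
cs.insert("/lhc/run3/dataset-A/segment0", b"...")
cs.insert("/genome/hg38/chr1/segment0", b"...")
assert cs.lookup("/lhc/run3/dataset-A/segment0") is not None
```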
47
2018149Compute
CC* Compute: GPU Infrastructure to Explore New Algorithmic & AI Methods in Data-Driven Science and Engineering at Tufts University
7/1/2020Christopher SedoreMATufts University
Advanced scientific and engineering research at Tufts University is employing increasingly complex models, algorithms, simulations, and machine learning approaches applied to large datasets. The larger the dataset or the more complex the model, the longer it takes to compute, slowing down researchers' progress and limiting their ability to innovate. Tufts' addition of six Graphics Processing Unit (GPU) enhanced compute nodes to its high-performance computing cluster accelerates scientific and engineering research in the areas of Algorithm Parallelization & Acceleration and Machine Learning & Deep Learning. Researchers in biology, chemistry, computer science, mathematics, physics, and urban planning leverage the GPU-enhanced infrastructure to develop new algorithms and models and accelerate scientific discoveries. Through collaboration with the NSF-funded T-TRIPODS (Transdisciplinary Research in Principles of Data Science) project and the Center for STEM Diversity at Tufts, the infrastructure provides new opportunities for underrepresented students to acquire and extend data science and high-performance computing skills.<br/><br/>The six GPU-enhanced compute nodes are each configured with dual 20-core Intel Xeon Gold 6248 CPUs, 768GB of RAM, and 8 NVIDIA Tesla V100 (32GB) GPUs interconnected with NVLink to improve the scaling of multi-GPU computation. The nodes are linked with a 100-gigabit network and are accessible to researchers at Tufts and externally through the Open Science Grid (OSG). The large core count, large RAM, and interlinked GPU architecture provide great flexibility for researchers to mix traditional and GPU-enhanced approaches in complex computational analysis.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
48
2018074Integration
CC* Integration-Large: An 'On-the-fly' Deeply Programmable End-to-end Network-Centric Platform for Edge-to-Core Workflows
10/1/2020Michael ZinkMAUniversity of Massachusetts Amherst
Unmanned Aerial Vehicles (also known as drones) are becoming an increasingly common sight in the sky. Their applications range from hobby drones for leisure activities, to life-critical drones for organ transport, to commercial applications such as air taxis. The safe, efficient, and economic operation of such drones poses a variety of challenges that have to be addressed by the science community. For example, drones need very detailed, close-to-the-ground weather information for safe operations, and the data processing and energy consumption of drones need to be intelligently handled. This project will provide tools that will allow researchers and drone application developers to address operational drone challenges by using advanced computer and network technologies.<br/><br/>This project will provide an architecture and tools that will enable scientists to include edge computing devices in their computational workflows. This capability is critical for low-latency and ultra-low-latency applications like drone video analytics and route planning for drones. The proposed work includes four major tasks. First, cutting-edge network and compute infrastructure will be integrated into the overall architecture to make them available as part of scientific workflows. Second, in-network processing at the network edge and core will be made available through new programming abstractions. Third, enhanced end-to-end monitoring capabilities will be offered. Finally, the architecture will leverage the Pegasus Workflow Management System to integrate in-network and edge processing capabilities.<br/><br/>Providing best practices and tools that enable the use of advanced cyberinfrastructure for scientific workflows will have a broad impact on society in the long term. The science drivers that will be supported by this project have the potential to increase the safety and efficiency of drone applications, an area that will grow in significance in the foreseeable future. The project team will enable access to a rich set of resources for researchers and educators from a diverse set of institutions (historically black colleges and universities (HBCUs), community colleges, women’s colleges) to further democratize research. In addition, collaboration with the NSF REU (Research Experience for Undergraduates) Site in Consumer Networking will promote participation of under-served/under-represented students in project activities.<br/><br/>Information about the project will be available at http://www.flynet-ci.org to provide information on overall project activities, outreach activities, publications, tools and software, and the project’s team members. The project website will be preserved for at least three years after the project ends.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
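The workflow idea above can be illustrated generically: each step runs only after its dependencies complete, whether it executes at the edge or in the core. The toy DAG below uses Python's standard library and hypothetical step names; it is not the Pegasus Workflow Management System itself:

```python
# Toy directed-acyclic-graph (DAG) workflow: steps run only after their
# dependencies finish, mirroring how edge steps (video capture) feed core
# steps (route planning). Generic sketch with hypothetical step names.
from graphlib import TopologicalSorter

dag = {
    "capture_video": set(),                             # runs on the edge device
    "detect_objects": {"capture_video"},                # in-network processing
    "fetch_weather": set(),
    "plan_route": {"detect_objects", "fetch_weather"},  # core HPC step
}

def run(step: str) -> None:
    print(f"running {step}")

for step in TopologicalSorter(dag).static_order():
    run(step)  # a real system would dispatch each step to edge or core
```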
49
1659403Integration
CC* Integration: SANDIE: SDN-Assisted NDN for Data Intensive Experiments
7/1/2017Edmund YehMANortheastern University
Advancing discovery in many scientific fields depends crucially on our ability to extract the wealth of knowledge buried in massive datasets whose scale and complexity continue to grow exponentially with time. In order to address this fundamental challenge, this project will develop and deploy SANDIE, a Named Data Networking (NDN) architecture supported by advanced Software Defined Network services for data-intensive science, with the Large Hadron Collider (LHC) high energy physics program as the leading use case.<br/><br/>The implementation of SANDIE will leverage two state-of-the-art testbeds: the NDN testbed hosted at Colorado State and serving the climate science community, and the SDN testbed hosted at Caltech. Building on these facilities, and on support for SDN services from multiple advanced Research & Education network partners, the project will deploy a set of ten high-performance, relatively low-cost NDN edge caches with SSDs and 40G or 100G network interfaces at six participating sites: Caltech, Northeastern, UCSD, University of Florida, MIT and CERN, together with an existing cache at CSU.
50
1659377Team
CC* Team: Improving Access to Regional and National Cyberinfrastructure for Small and Mid-Sized Institutions
5/1/2017John GoodhueMA
The Massachusetts Green High Performance Computing Center, Inc.
Cyberinfrastructure is as important for research in the 21st century as test tubes and microscopes were in the 20th century. Familiarity with and effective use of cyberinfrastructure at small and mid-sized institutions is essential if their faculty and students are to remain competitive. This regional initiative enables effective use of cyberinfrastructure by researchers and educators at small and mid-sized institutions in Northern New England (Maine, Massachusetts, New Hampshire, Vermont) by making it easier to obtain support from Research Computing Facilitators. Research Computing Facilitators combine technical knowledge and strong interpersonal skills with a service mindset, and use their connections with cyberinfrastructure providers to ensure that researchers and educators have access to the best available resources. It is widely recognized that Research Computing Facilitators are critical to successful utilization of cyberinfrastructure, but in very short supply.<br/><br/>Since most small and mid-sized institutions cannot individually support more than one or two Research Computing Facilitators, the project is developing a sustainable pool of experts who can work across institutions in the region. The project is investing further upstream, pairing mentors with students to both accelerate research and education projects at small institutions and develop a pipeline of new talent to meet growing academic and industry demand. In addition to supporting science, technology and education goals that are vital to the economic future of the Northeast, the project is testing and refining a new model for delivering support from Research Computing Facilitators that can benefit small and mid-sized institutions in other regions.
51
2018823Regional
CC* Regional: Advancing Maryland Research and Education Network for Under-Resourced Institutions Through a Science DMZ and 10Gbps Upgrade
7/1/2020Hamid BarghiMDUniversity System of Maryland
MDREN, the Maryland Research and Education Network, an organization under the University System of Maryland, provides advanced network services to 40+ education, research, and public service institutions throughout the State of Maryland, and connections to regional and national resources.<br/><br/>High-speed networking infrastructure is vital to researchers and STEM educators in our community. Smaller, under-resourced, and rural institutions like Frostburg State University (FSU), Salisbury University (SU), the four research labs that are part of the University of Maryland Center for Environmental Sciences (UMCES), the Maryland Psychiatric Research Center (MPRC) as part of the University of Maryland School of Medicine (UMSOM), and HBCUs like the University of Maryland Eastern Shore (UMES) and Morgan State University have serious limitations in accessing and transferring huge datasets amongst themselves and collaborators at NOAA, NASA, NIH, USGS, and others. <br/><br/>This proposal increases network bandwidth tenfold for 6 institutions, including research centers and labs, and adds a Science DMZ for data transfers from other sites to enable the full potential of these institutions. These upgrades allow researchers at each institution to reliably connect to supercomputers and big data repositories, increase computational capabilities, and increase inter-institution collaboration.<br/> <br/>MDREN will provide a special path from the Science DMZ machine to each lab to facilitate high-speed transfers. This will be significantly faster than performing transfers through campus firewalls.<br/><br/>This proposal helps rural or under-resourced institutions and HBCUs better collaborate with peer institutions in the greater Maryland region, collaborators around the world, and key science organizations such as NASA, NIH, and NOAA.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
52
1659356Integration
CC* Integration: Regional Embedded Cloud for As-a-Service Transformation (RECAST)
4/15/2017Xi YangMDUniversity of Maryland, College Park
Science DMZs (SDMZs) have been widely deployed in the Research and Education networking community and are recognized as a global best practice for supporting cyberinfrastructure and big data applications. Despite innovations in performance, security, and management, SDMZs remain focused on the support of big data projects and cyberinfrastructure. It remains difficult to create physical "disk to disk" or virtual "scientist to scientist" connections. Researchers are increasingly interested in an emerging class of hybrid services focused on turnkey, on-demand integration of private/public clouds, cyberinfrastructure, scientific resources, and high performance networks.<br/><br/>The RECAST project will develop and deploy a regional Software Defined Science DMZ (SD-SDMZ) as a mechanism to provide flexible edge services that include traditional data transfer functions. RECAST leverages state-of-the-art cloud and software-defined networking technologies to transform an SDMZ -- historically focused on high-speed data transfer -- into a Software Defined Infrastructure that offers compute and network services in a fully virtualized, programmable, and orchestrated fashion. Built upon technologies similar to those used in modern data centers, this "cloudified SDMZ" will provide a scalable, multi-tenant environment where each user or user group can allocate dedicated resources and use them in an isolated, independent, and elastic fashion. Embedding this resource in the University of Maryland Mid-Atlantic Crossroads regional network will allow the SD-SDMZ to leverage rich connectivity to advanced cyberinfrastructure to provide these new and advanced services.
53
2018851Compute
CC* Compute: High-Memory Compute Resources for Maine
7/1/2020Bruce SegeeMEUniversity of Maine
Maine researchers are advancing the state of the art in areas including landslide prediction, hydrodynamic modelling, fluid-structure interaction, and modelling the electro-chemical properties of organic molecules. Strong, scientifically compelling investigations have previously been hampered or stalled by the lack of adequate computational resources.<br/> <br/>This project advances research at the University of Maine in two ways: through the addition of approximately 1000 processing cores in high-RAM nodes, and through a growth in CEPH disk storage. It enables research to move forward in areas such as landslide prediction, coastal modelling, DNA sequencing from single strands of DNA, and high-resolution modelling of the cardiovascular system. It facilitates an increase in collaboration with the Eastern Regional Network, the Open Science Grid, the Open Storage Network, and other institutions, particularly other EPSCoR sites in the Northeast. Data and code from this grant are disseminated to the public through tools such as GitHub and EarthCube. The increase in computational resources as a result of this project allows opportunities for undergraduate and graduate students to engage in state-of-the-art numerical modelling. With these new resources meeting the needs of researchers, previously existing resources can be used to offer courses for which there was not previously the capacity. Thus the instrumentation advances research and also enables the project team to better train the next generation of scientists and engineers. The research projects facilitated by this cluster all include plans to distribute and visualize model output for relevant stakeholders.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
54
2018432Infrastructure
CC* Networking Infrastructure: A Science DMZ For Quantitative Biology and Precision Agriculture
7/1/2020Brian O'SheaMIMichigan State University
Moore's Law has ushered in a scientific data revolution. This is particularly acute in the life sciences, where the devices used to collect data and the theoretical tools used to generate models have benefited tremendously from the advent of inexpensive digital sensors and general-purpose graphics processing units, which have led to an explosive increase in the volume of high-quality data. Sharing large amounts of this data for analysis by other researchers will result in tremendous benefits to the scientific community.<br/><br/>This project creates a Science DMZ at Michigan State University, which will facilitate MSU researchers' ability to share huge volumes of data with the external research community at very high bandwidth. The project supports the networking hardware and software necessary to implement up to 100Gbps network connections used for sharing data already stored at MSU's High Performance Computing System and on the NSF-funded MI-OSIRIS file system. The project uses data provided by four research groups on campus as a testbed, making cryo-electron microscopy images, hyperspectral imaging of crops using drones, three-dimensional volumetric images of plants generated via X-ray computed tomography, and a databank of biomolecular simulation data available to other researchers and the public. This project enhances the impact of MSU scientists and leverages prior NSF scientific and cyberinfrastructure investments. In addition, it involves the participation of students in the deployment and usage of the science DMZ.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
55
1925476Integration
CC* Integration: NetBASILISK: NETwork Border At Scale Integrating and Leveraging Individual Security Components
10/1/2019Eric BoydMI
Regents of the University of Michigan - Ann Arbor
The NetBASILISK (NETwork Border At Scale Integrating and Leveraging Individual Security Components) project enables researchers and network engineers at the University of Michigan to introduce the next level of security and privacy protection scaled to the vast volume of generated research data. As attackers develop more sophisticated tools to acquire student and faculty private data, institutional financial information, and proprietary, often classified, research information, it is imperative for information technology staff to detect and stop these attacks. By observing patterns of network traffic, NetBASILISK will accomplish these goals with a minimal impact on the speed or volume of network traffic.<br/><br/>Current threat prevention systems do not scale to the growing volume of research data at an affordable cost. The University of Michigan solution, a novel framework combining open source and proprietary components, significantly improves system performance and accuracy in detecting and preventing threats to institutional data. NetBASILISK comprises powerful load balancing threat prevention mechanisms, data filtering tools, and threat detection technology. NetBASILISK will create a secure network perimeter while facilitating science such as cryo-electron microscopy, Large Hadron Collider particle research, and non-distorted Internet measurement, as well as enabling innovative network enhancements such as technologies to circumvent web censorship.<br/><br/>NetBASILISK will be used to inform the design of advanced network security devices for universities that scale to accommodate the network traffic requirements of data intensive science. Lessons learned and technology enhancements discovered by the project will be shared with the university networking community, as well as commercial partners. The science community will be informed of lessons learned from new design patterns employed for border security. The science drivers of the project, including advances in cryo-electron microscopy, networking, and physics will provide broad, impactful benefits. Project funding will also be used to support faculty, graduate students, and a postdoctoral researcher.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
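Border-scale threat inspection of the kind described above is commonly load-balanced with symmetric flow hashing, so both directions of a connection reach the same inspection node. The sketch below shows that general technique in Python; it is an illustration of the idea, not NetBASILISK's design, and the node names are invented.

```python
# Illustrative only: symmetric hashing of a flow's endpoints so that
# (A -> B) and (B -> A) traffic lands on the same threat-inspection node.
import hashlib

INSPECTION_NODES = ["ids-0", "ids-1", "ids-2", "ids-3"]  # hypothetical pool

def pick_node(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    # Sort the endpoints so both directions produce the same hash key.
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}|{b}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return INSPECTION_NODES[digest % len(INSPECTION_NODES)]

# Both directions of the same flow map to the same node:
assert pick_node("10.0.0.5", "192.0.2.7", 40000, 443) == \
       pick_node("192.0.2.7", "10.0.0.5", 443, 40000)
```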
56
1827151Design
CC* Network Design and Implementation for Small Institutions: The Tommie Science Network- A dedicated research network for the University of St. Thomas
7/1/2018Edmund ClarkMNUniversity of St. Thomas
The University of St. Thomas, in conjunction with the University of Minnesota's Gopher Science Network Team, is building the Tommie Science Network, a dedicated research network that transforms the campus research environment by providing a reliable and secure high-speed research network capable of achieving sustained transmission rates of up to 100 Gigabits per second (Gbps) between campus research locations and Internet2 (I2). This project creates efficiencies in end-to-end workflows between existing instrumentation facilities and research centers, centralized computing and data storage facilities, and partner institutions. It significantly increases bandwidth available to researchers, allowing the St. Thomas community to fully participate in collaborative, global research activity, and provides a world-class working environment for professors and students to practice distributed research and collaboration techniques and learn skills they will use in their future employment.<br/><br/>The Tommie Science Network connects Owens Science Hall, O'Shaughnessy Science Hall, and the St. Thomas E-Learning and Research Center (STELAR) via high-speed access layer switches linked to the network core at 80Gbps. Traffic on the Tommie Science Network then bypasses the firewall to connect directly to I2 at 100Gbps via the Northern Lights GigaPoP. Laboratories and research locations in the science halls and STELAR are connected at 10Gbps to the access layer switches. Globus file transfer services are used to allow secure, reliable high-speed transfers between the science buildings, the Science Demilitarized Zone (Science DMZ), and I2. perfSONAR is used to monitor performance between the science buildings and the Science DMZ, as well as from the DMZ to Internet2. The St. Thomas networking staff is responsible for ongoing operations.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
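For a sense of what a Globus-driven transfer over such a network looks like, here is a hedged sketch using the globus-sdk Python package. The endpoint UUIDs, paths, and token are placeholders, and a real run requires completing a Globus Auth flow first.

```python
# Hedged sketch of a Globus transfer between two collections; all IDs
# and the token below are placeholders, not real endpoints.
import globus_sdk

ACCESS_TOKEN = "REPLACE_WITH_TRANSFER_TOKEN"   # obtained via Globus Auth
SRC = "aaaaaaaa-0000-0000-0000-000000000000"   # e.g., a science-hall DTN
DST = "bbbbbbbb-0000-0000-0000-000000000000"   # e.g., central storage

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(ACCESS_TOKEN)
)
task = globus_sdk.TransferData(tc, SRC, DST, label="instrument-to-storage")
task.add_item("/data/run42/", "/archive/run42/", recursive=True)
result = tc.submit_transfer(task)
print("task id:", result["task_id"])
```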
57
1925681Team
CC* Team: Great Plains Regional CyberTeam
7/1/2019Grant ScottMOUniversity of Missouri-Columbia
Advances in science and technology fields are increasingly accomplished as part of multidisciplinary and multi-institutional collaborations that require complex cyberinfrastructure. A regional CyberTeam led by the Great Plains Network will support and advance the computational and data-intensive research across the region through the development of specific cyberinfrastructure resources, workforce training, and the development of unique, mutual, and cross-institutional support methodologies and agreements. The project advances the adoption and experience of advanced computing and data resources by developing a model built upon best and emerging practices for cross training and researcher outreach, pairing an experienced mentor at one institution with a mentee at another.<br/><br/>The project objectives are to: 1) Improve campus awareness and adoption of advanced cyberinfrastructure. 2) Increase the number of campus research computing and data professionals at mentored institutions, especially for institutions with small IT staffs with many job duties. 3) Increase the capabilities of campus cyberinfrastructure resources. 4) Enable development, deployment, and operation of cyberinfrastructure to make science efficient, trusted, and reproducible. The CyberTeam is a cross-institutional team consisting of technical leaders in the region paired with new members of the workforce, graduate and undergraduate students interested in joining the cyberinfrastructure workforce, and the institutional research computing leadership for regional research universities. It provides a model for distributed support teams to support cyberinfrastructure and aid in the development of a cyberinfrastructure engineering and facilitation workforce. Generalized best practices for a regional team of CI mentors including specific mentorship plans, retrospectives, and reference materials are disseminated.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
58
1827225Design
CC* Network Design: Network Upgrades to Improve Engagement in Science Discovery and Education
8/1/2018Donna LissMOTruman State University
Truman State University, with the support of the Missouri Research and Education Network (MOREnet) and the University of Missouri, is upgrading the campus network and infrastructure to better enable access to, and use of, scientific data through improved data transfer capabilities for large datasets. The enhanced infrastructure supports faculty and undergraduate research in understanding star spots, quantifying light pollution in geographical areas, understanding the inhibition mechanisms of drugs to treat global health issues, natural language processing to improve approaches in artificial intelligence, and building low-cost, real-time, soil sensors. An expanded curriculum that includes hands-on training in cybersecurity and IPv6 technologies is also enabled by utilizing these network resources. <br/><br/>The improvements include upgrades to the network switch and distribution technologies that result in a ten-fold increase in data transfer and access rates for faculty in STEM-related disciplines, as well as increases in the data transfer bandwidth to the campus observatory, deployment of IPv6 in support of the computer science curriculum, establishment of network performance metrics to inform continued growth, and robust and secure access (through federated identity management) to off-campus research tools and intra-institutional collaboration opportunities.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
59
1827177Integration
CC* Integration: End-to-End Performance and Security Driven Federated Data-Intensive Workflow Management
10/1/2018Prasad CalyamMOUniversity of Missouri-Columbia
Data-intensive science applications in research fields such as bioinformatics, chemistry, and material science are increasingly becoming multi-domain in nature. To augment local campus CyberInfrastructure (CI) resources, these applications rely on multi-institutional resources that are remotely accessible (e.g., scientific instruments, supercomputers, public clouds). Provisioning of such federated CI resources has traditionally been based on applications' performance and quality of service (QoS) requirements. This project aims to augment traditional resource provisioning schemes with novel schemes for formalizing end-to-end security requirements, in order to align security posture across multi-domain resources with heterogeneous policies.<br/><br/>This project addresses the end-to-end multi-domain security design for scientific applications by defining and formalizing security specifications along an application's workflow lifecycle stages. The research work will advance the current knowledge for a CI engineer in the following areas: (i) how to intelligently perform resource allocations among private and public cloud locations; (ii) how to streamline end-to-end security posture across domains that are constructed via dynamic network services; and (iii) how to support "bring your own compute" programs in large facilities to reduce turnaround times in a secure and policy-compliant manner. The resulting security formalization and alignment schemes will be implemented as a security middleware coupled within a unified resource broker framework that: (a) operationally integrates various software tools and systems such as perfSONAR, OpenStack, iRODS, and Shibboleth; and (b) supports prototypes of web portals and actual users (e.g., researchers and educators) within usability evaluation and validation experiments.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
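The core alignment idea — place each workflow stage only on resources whose advertised security controls cover that stage's requirements — reduces to a subset check. The toy sketch below illustrates that logic; the control names and resources are invented and this is not the project's middleware.

```python
# Each stage declares required controls; each resource advertises what it
# enforces; a stage may only be placed where requirements <= capabilities.
WORKFLOW = {
    "ingest":  {"tls", "federated-auth"},
    "compute": {"tls", "encrypted-at-rest"},
    "publish": {"tls"},
}
RESOURCES = {
    "campus-hpc":   {"tls", "federated-auth", "encrypted-at-rest"},
    "public-cloud": {"tls", "encrypted-at-rest"},
}

def compliant_placements(stage):
    need = WORKFLOW[stage]
    return [name for name, caps in RESOURCES.items() if need <= caps]

for stage in WORKFLOW:
    print(stage, "->", compliant_placements(stage) or "NO COMPLIANT RESOURCE")
```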
60
1827098Infrastructure
CC* Networking Infrastructure: Jackson State University (JSU)-Research Network
7/1/2018Deborah DentMSJackson State University
This project provides Jackson State University (JSU) researchers with a campus cyberinfrastructure (CI) capability allowing expansion of big data and other network-intensive collaborative research that depends on high-speed network access to local, regional, cloud, and national compute and storage resources. The project increases research network capacity to 100G, re-architects the campus research network, and strategically relocates equipment to utilize a Science DMZ (Demilitarized Zone, i.e., a zone free from firewalls and other friction devices). The research network includes centrally pooled High-Performance/Throughput Computing (HPC/HTC) resources designed to meet immediate and future research needs. An increased focus on domain scientist talent is possible due to management and monitoring of the CI assets by a centralized, well-trained enterprise information technology team. This project enhances the end-to-end network services for researchers and is an important catalyst for the growing campus-wide interdisciplinary data science program.<br/><br/>Application-specific network and computational needs for big data analytics, visualization, nanotoxicity, complex network analysis, science drivers, and education are addressed. Faculty, students, and the IT staff on campus and across campuses are engaged to leverage the new environment for their research, education, and operational needs. The project is disseminated through outreach workshop activities with Historically Black Colleges and Universities and other universities or community college systems within the state of Mississippi that may be planning similar campus network upgrades.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
61
2018112Infrastructure
CC* Networking Infrastructure: Deploying a Science DMZ to Advance Research at the University of Montana
7/1/2020Zachary RossmillerMTUniversity of Montana
A major component of the University of Montana's (UM) mission is to generate world-class research and creative scholarship. To that end, UM established a state-of-the-art Modular Data Center, ensured dedicated high-speed connections for key campus sites, and deployed a shared computing cluster. A team of researchers and IT professionals, reviewing the cyberinfrastructure plan alongside researcher needs, determined that a well-configured dedicated network for high-performance dataflow is needed to advance UM's mission. This project installs a Science DMZ (UM-DMZ), a separate friction-free network dedicated to scientific data transfer, into UM's research infrastructure. The UM-DMZ, along with a robust research infrastructure, serves and attracts quality students, educators, and researchers who require access to high-performance end-to-end data transfers to conduct their work.<br/><br/>Hardware includes an Arista network switch that provides low latency and deep packet buffering, supports sFlow, and meets the needs of IPv4 and IPv6 scientific applications. Data Transfer Nodes of various sizes are deployed, and a solid security foundation is being implemented using the Zeek IDS and network security policies. The UM-DMZ serves research efforts from diverse disciplines that impact the well-being of society. This includes research on the environment and climate change, genomic studies, the development of tools used to further scientific discovery, and innovative student-led research projects. Common themes uniting these diverse projects are enhanced collaboration, high-speed data transfer, and educational opportunities for students. Student participation through advanced network courses, internships, and student-led research is an important aspect of this project.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
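Zeek, mentioned above, writes tab-separated logs with a `#fields` header, so a Science DMZ operator can mine conn.log for the large transfers the network is meant to carry. The sketch below flags "elephant" flows above an arbitrary threshold; it is illustrative, not part of the UM deployment.

```python
# Sketch: scan a Zeek conn.log (TSV with a "#fields" header line) and
# report flows whose total byte count exceeds a threshold.
import sys

THRESHOLD = 10 * 1024**3  # 10 GB; arbitrary illustrative cutoff

def _num(value):
    # Zeek writes "-" for unset fields.
    return int(value) if value not in ("-", "") else 0

def elephants(path):
    fields = []
    with open(path) as log:
        for line in log:
            if line.startswith("#fields"):
                fields = line.rstrip("\n").split("\t")[1:]
                continue
            if line.startswith("#") or not fields:
                continue
            rec = dict(zip(fields, line.rstrip("\n").split("\t")))
            total = _num(rec.get("orig_bytes", "-")) + _num(rec.get("resp_bytes", "-"))
            if total >= THRESHOLD:
                yield rec["id.orig_h"], rec["id.resp_h"], total

if __name__ == "__main__":
    for src, dst, nbytes in elephants(sys.argv[1]):
        print(f"{src} -> {dst}: {nbytes / 1e9:.1f} GB")
```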
62
1925267Compute
CC* Compute: Improved Computing for Advanced Research and Education (ICARE)
7/1/2019Zachary RossmillerMTUniversity of Montana
University researchers and educators require access to high performance computing, high bandwidth connections, sharable storage, and high-throughput data transfers. Unfortunately, institutional cyberinfrastructures can be silos of dissimilar and unconnected computing resources, with limited functionality and inconsistent service delivery. The Improved Computing for Advanced Research and Education (ICARE) team addresses these issues and advances discovery and understanding by serving as a shared high-throughput computing site. Information Technology (IT) professionals and the scientific research community work together to coordinate campus-level cyberinfrastructure improvements based on project-driven needs. These include scientific research projects that improve natural disaster forecasting, produce innovative modeling tools used for maintaining biodiversity, impact the design of nuclear receptor drugs used to treat many diseases, develop deep learning approaches to DNA sequencing, and explore genomic responses to climate change.<br/><br/>The main feature of ICARE is the UM Shared Computing Cluster (UMSCC). UMSCC is configured with 8 computing nodes, 3 GPU nodes, and 2 storage nodes. Open source software solutions and participation in the Open Science Grid (OSG) facilitate data-intensive research and extend computing resources to internal and external research groups, graduate and undergraduate researchers, and the nation. This effort strengthens collaboration between diverse groups of researchers and provides a vehicle for ongoing discussion and planning between IT professionals, educators, and researchers. A special aspect of this project is the direct relationship between IT professionals and the campus scientific research community, which provides sound IT solutions critical for future competitiveness in grant funding and academic contribution.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
63
2126116Networking
CC* Networking Infrastructure: Advanced Network For Research at UNC Charlotte
9/1/2021Christopher MaherNC
University of North Carolina at Charlotte
A diverse team of faculty researchers and research computing and networking experts from across UNC Charlotte are collaborating to develop and deploy an advanced network for research across the campus. This network dramatically improves digital communication between researchers, scientific instrumentation, visualization workstations, high performance computing (HPC) infrastructure, and external collaborators. Access to the advanced network enables higher speeds for existing research workflows and allows for new research processes previously unavailable to the team. For example, massive datasets from multiple DNA sequencing instruments are streamed to powerful HPC clusters, removing data analysis bottlenecks. Research applying motion analysis and artificial intelligence to Future of Work problems, natural language processing, and cyber-physical systems is also supported. Applications include fluid dynamics simulation of the airflow that carries infectious microbes in public transportation. Research collaborations between UNC Charlotte and other universities are enhanced by speeding acquisition and sharing of research results. Students from varied backgrounds, including UNC Charlotte's large cohorts from underrepresented groups and first-generation college students, will work with this state-of-the-art cyberinfrastructure in their coursework and research projects.<br/><br/>The design is a campus-wide 100Gb fiber-based network with a Science DMZ, including a Data Transfer Node, enabling data flows separate from the day-to-day traffic of University business. The network is implemented as a spine-leaf architecture with two spines (64x100GbE) interconnected with 100Gb links by MLAG. One spine is placed in the HPC Data Center and the other in a separate campus Data Center. The network has six leaf switches (32x100GbE) supporting connectivity between buildings housing academic departments and HPC infrastructure. Performance is monitored and tuned with perfSONAR nodes. Security is monitored through Zeek (40Gb/s) nodes. The network and Science DMZ are tuned and optimized in collaboration between researchers and network architects.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
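perfSONAR measurements like the ones used to tune this network are typically driven by its pscheduler CLI. The sketch below wraps a point-to-point throughput test from Python; the hostnames are placeholders, and it assumes pscheduler is installed on the measurement hosts.

```python
# Hedged sketch: run a perfSONAR throughput test between two measurement
# hosts via the standard pscheduler CLI and print its output.
import subprocess

def throughput(source, dest):
    out = subprocess.run(
        ["pscheduler", "task", "throughput", "--source", source, "--dest", dest],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

print(throughput("ps-spine1.example.edu", "ps-leaf3.example.edu"))
```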
64
2018401Planning
CC* Planning: NC Regional Science DMZ
7/1/2020Tracy FutheyNCDuke University
Scientific approaches relying on big data and on large data transfers require unique cyberinfrastructure, called Science DMZs, to support fast and easy data transfers and access of data. Typically, the benefits of these complicated Science DMZs have accrued to large, well-resourced “Research-Intensive” institutions, which compounds an already growing disparity between large and small institutions, especially often lesser-resourced minority-serving institutions. This also makes it much more difficult for smaller institutions with a primarily instructional mission to expand existing research and expose students to scientific approaches or collaborations, including those that may rely on big data.<br/><br/>Through a partnership between Duke University, North Carolina Central University, Davidson College, and MCNC (NC’s regional research and education network provider), this project plans the creation of a shared Science DMZ (sS-DMZ) as a service of the North Carolina Research and Education Network, so that lesser-resourced institutions do not have to bear all setup and maintenance costs and can instead commit their resources to local infrastructure, their scientific researchers, and the pursuit of new research grants. This project works with regional higher-ed institutions to 1) establish the community trust and shared governance responsibility necessary to expand participation in an sS-DMZ program and 2) architect the approach to build the technical solution of operating an sS-DMZ. The resulting proposal will also inform the feasibility of an sS-DMZ in other Research and Education Networks in the United States as an approach to reduce overall costs while increasing access to Big Data science.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
65
1925623Infrastructure
CC* Networking Infrastructure: Gate City Research Network - A Multi-Institution Science DMZ
7/1/2019Jeff WhitworthNC
University of North Carolina Greensboro
The Gate City Research Network (GCRN) is a collaboration between the University of North Carolina at Greensboro (UNCG) and North Carolina A&T State University (NC A&T) to create a multi-institutional science DMZ supporting research activities through a dedicated, low-latency, high-speed research and education network connection. The GCRN enhances researcher access to high performance computing (HPC) resources supporting the competitive and innovative research environment state-wide and regionally by connecting the Southeastern Nanotechnology Infrastructure Corridor through the Joint School of Nanoscience and Nanoengineering (JSNN).<br/><br/>The GCRN, connected to the North Carolina Research and Education Network and Internet2, provides a dedicated, separate research network to overcome the latency challenges imposed on enterprise networks that must support administrative and growing entertainment service traffic. The addition of a high-speed data transfer node facilitates fast transfers of large data to federated HPC environments at partner institutions and provides the foundation to enhance the end-to-end data flow processes from research instrumentation to analysis, simulation, and modeling computational resources, significantly increasing the fundamental research capacity in disciplines such as chemistry, nano-engineering, nanoscience, computer science, and data science. The GCRN is committed to producing open-access design, sustainability, and governance documentation, use and performance data, and testing and operations protocols in support of developing a 21st-century data-capable workforce and serving as a model for a scalable and efficient multi-institutional science DMZ.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
66
1925550Integration
CC* Integration: Archipelago: Linking Researchers On-Campuses and in the Cloud through SDN-Enabled Microsegmentation
11/1/2019Tracy FutheyNCDuke University
Researchers often generate very large amounts of data in their work, so much so that their data stresses the underlying computer networks. Some academic institutions build a separate network for research data, but this is very expensive. Archipelago is a project that seeks to improve the performance and security of computer networks to make it easier and less costly for scientists to move, store, and analyze their data. It uses relatively small numbers of very fast, flexible network devices which use a technique known as Software Defined Networking (SDN) to steer research data around bottlenecks, giving the appearance of a separate science network without the cost of building a true second network.<br/><br/>Archipelago addresses shortcomings that have limited adoption of SDN within campus networks, beginning with evaluation of next-generation SDN forwarding elements, SmartNIC (Smart Network Interface Card) devices, and purpose-built SDN data planes to create its hybrid architecture. This architecture connects islands of SDN by enforcing traffic policies at nearby distribution points linked to commodity servers with SmartNICs or next-generation hardware-driven SDN forwarding elements (depending on scale) that provide control plane functionality. This shift from software to SmartNIC forwarding accelerates data plane performance, and efficient use of distributed CPU resources for the control plane frees server resources for other uses. This robust policy-driven architecture enables fine-grained traffic forwarding policies for microsegmentation, enhanced cybersecurity, efficient data transfer, and other applications.<br/><br/>Utilizing agile software development techniques that are informed by the deployment experience at Duke and within partner environments, this project will deliver a powerful, vendor-agnostic research network design that is easy to adopt for institutions that have been reluctant to venture into SDN architectures due to personnel or technical resource constraints. Since the Archipelago architecture augments existing networks incrementally (and therefore cost-effectively), and because its software components are deployed via open-source licensing, it will provide new and powerful capability to institutions that already have established SDN networks, as well as to those that have been reluctant to venture into SDN architectures due to financial constraints.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
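A minimal flavor of the microsegmentation idea: install a high-priority rule admitting one research subnet to the DTN-facing port and a default-deny rule for everything else. The sketch below uses the Ryu OpenFlow 1.3 controller as a stand-in (the award does not name a controller), and the subnet and port number are invented.

```python
# Illustrative microsegmentation policy as a Ryu OpenFlow 1.3 app; Ryu is
# an assumed stand-in controller, not the project's software.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

DTN_PORT = 2  # hypothetical switch port facing the data transfer node

class Microsegment(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch(self, ev):
        dp = ev.msg.datapath
        ofp, p = dp.ofproto, dp.ofproto_parser
        # Allow one research subnet (invented) to reach the DTN port.
        allow = p.OFPMatch(eth_type=0x0800,
                           ipv4_src=("10.30.0.0", "255.255.0.0"))
        out = [p.OFPActionOutput(DTN_PORT)]
        inst = [p.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, out)]
        dp.send_msg(p.OFPFlowMod(datapath=dp, priority=200,
                                 match=allow, instructions=inst))
        # Default-deny: a priority-0 rule with no actions drops traffic.
        dp.send_msg(p.OFPFlowMod(datapath=dp, priority=0,
                                 match=p.OFPMatch(), instructions=[]))
```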
67
1826997Integration
CC* Integration: Delivering a Dynamic Network-Centric Platform for Data-Driven Science (DyNamo)
8/1/2018Anirban MandalNC
University of North Carolina at Chapel Hill
Computational science today depends on many complex, data-intensive applications operating on datasets that originate from a variety of scientific instruments and data stores. A major challenge for data-driven science applications is the integration of data into the scientist's workflow. Recent advances in dynamic, networked cloud infrastructures provide the building blocks to construct integrated, reconfigurable, end-to-end infrastructure that has the potential to increase scientific productivity. However, applications and workflows have seldom taken advantage of these advanced capabilities.<br/><br/>DyNamo will allow atmospheric scientists and hydrologists to improve short- and long-term weather forecasts, and aid the oceanographic community in improving key scientific processes such as ocean-atmosphere exchange and turbulent mixing, both of which have direct impact on our society.<br/><br/>The DyNamo project will develop innovative network-centric algorithms, policies, and mechanisms to enable programmable, on-demand access to high-bandwidth, configurable network paths from scientific data repositories to national CyberInfrastructure facilities, and help satisfy data, computational, and storage requirements of science workflows. This will enable researchers to test new algorithms and models in real time with live streaming data, which is currently not possible in many scientific domains. Through enhanced interactions between Pegasus, the network-centric platform, and new network-aware workflow scheduling algorithms, science workflows will benefit from workflow automation and data management over dynamically provisioned infrastructure. The system will transparently map application-level network Quality of Service expectations to actions on programmable software-defined infrastructure.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
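Since DyNamo builds on Pegasus, a two-step workflow gives a feel for what the system automates. The sketch below assumes the Pegasus 5 Python API; the transformation names and files are invented placeholders for a radar-to-forecast pipeline, not the project's actual workflow.

```python
# Minimal sketch, assuming the Pegasus 5 Python API; all names invented.
from Pegasus.api import File, Job, Workflow

radar = File("nexrad_scan.nc")        # streamed instrument data
grid = File("wrf_grid.nc")
forecast = File("nowcast.nc")

preprocess = (Job("preprocess")       # hypothetical transformation
              .add_inputs(radar)
              .add_outputs(grid))
model = (Job("wrf_nowcast")           # hypothetical transformation
         .add_args("--lead-minutes", "90")
         .add_inputs(grid)
         .add_outputs(forecast))

wf = Workflow("dynamo-nowcast")
wf.add_jobs(preprocess, model)
wf.write("workflow.yml")  # hand off to Pegasus for planning/execution
```

Pegasus then plans the abstract workflow onto concrete resources, which is the layer DyNamo extends with network-aware provisioning.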
68
1826993Infrastructure
CC* Networking Infrastructure: North Dakota Internet Leadership Level Infrastructure
8/1/2018Dana SkowNDNorth Dakota State University Fargo
This project creates data networking and storage infrastructure dedicated to scientific research and education at North Dakota State University (NDSU) and the University of North Dakota (UND). These network upgrades prioritize science data movement, allowing both institutions to more effectively participate in collaborative research across the globe. The project also prepares both schools for much higher capacity connectivity through North Dakota's research and education (R&E) state network. The networking upgrades enhance efforts to provide a strong and stable workforce equipped with the skills and knowledge necessary to support contemporary advanced research. Examples include the current North Dakota state EPSCoR Track I project, collaborations between researchers at NDSU/UND and the other North Dakota institutions and Tribal College schools, and student internship programs using the research computing facilities on each campus.<br/><br/>100Gbps connectivity is established from campus HPC systems to the campus network border. Science drivers for this project include Precision Agriculture and Digital Agriculture initiatives studying crop and livestock data collected using drone and satellite technologies; multi-campus research collaboration on 100TB-class datasets used by UND's Center for Regional Climate Studies; and predictive design of materials based in NDSU's Materials and Nanotechnology program. These enable collaborations between researchers in chemistry, biochemistry, computer science, coatings and polymeric materials, and mechanical engineering across the globe. The networking upgrades follow the Science DMZ model and include perfSONAR-based end-to-end testing capabilities.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
69
1827116Integration
CC* Integration: Service Analysis and Network Diagnosis (SAND)
8/15/2018Brian BockelmanNEUniversity of Nebraska-Lincoln
Increasingly, science has become a "team sport," with projects less likely to be conducted by a single investigator or university group and more likely to involve multi-institution collaborations spanning many universities and national laboratories. As these collaborations have become data-intensive, highly performant network links connecting their distributed science platforms have become increasingly important. While true across all science, some collaborations, like those at the Large Hadron Collider (LHC) at CERN, transfer millions of gigabytes across global networks today and anticipate orders-of-magnitude increases within the next decade. Without performant networks and efficient data transfer and access services, the time to science will be greatly compromised. In this context, high-speed, long-distance networks see their performance degrade surprisingly quickly in the face of modest error rates. Accordingly, network engineers and researchers use sophisticated tools to monitor the network and transfer services; but without a means to aggregate and correlate network tests, performance measurements, and application response, these tools can at best reveal a small piece of the overall problem. This project focuses on techniques that better combine, visualize, and analyze disparate network monitoring and service logging data, providing a comprehensive picture critical to the engineers and scientists relying on the network. This will allow problems to be located and fixed more quickly, reducing the time to science.<br/><br/>The "CC* Integration: Service Analysis and Network Diagnosis (SAND)" project brings together an experienced team that has been working for more than a decade on wide area data transfers for large scale science. The project develops a network monitoring archive and analytics platform, SAND-NMA, which integrates widely used data analytics tools (such as ElasticSearch, Kibana and JupyterLab) with infrastructure components (perfSONAR, HTCondor) and application sensors. Data from disparate sources are published to a messaging bus providing low-latency metrics describing the performance of research platforms on a global scale. Exploratory work is performed to provide engineers with pragmatic tools to identify and locate problems and perform analytics to understand the long-term evolution of network and higher-level service performance. Programming interfaces allow external cyberinfrastructure (such as workflow management systems) to incorporate the network monitoring feeds into their decision-making engines.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
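SAND's ingest path — normalizing measurements onto a bus and into ElasticSearch for Kibana/JupyterLab analysis — can be pictured with a minimal indexing sketch using the elasticsearch Python client. The index name, fields, and hosts below are invented; this is not the project's schema.

```python
# Hedged sketch: index one fake perfSONAR-style throughput result into
# Elasticsearch so it can be explored in Kibana or a notebook.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder cluster address

doc = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "source": "perfsonar-unl.example.edu",        # invented hosts
    "destination": "perfsonar-cern.example.org",
    "test": "throughput",
    "gbps": 8.7,
}
es.index(index="sand-network-metrics", document=doc)
```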
70
2018927CRIA
CC* CRIA: The Eastern Regional Network
7/1/2020James von OehsenNJRutgers University New Brunswick
This Eastern Regional Network (ERN) CI-Research Alignment project organizes six information-gathering events: four workshops and two All Hands Meetings. These events are aimed at building and sustaining regional partnerships that simplify multi-campus collaborations to advance the frontiers of research, pedagogy, and innovation. With a deeper understanding gained through the four workshops, augmented by wider discussions at the two All Hands meetings, the ERN is better positioned to ensure impact across broad institutional types, research disciplines, and pedagogical approaches, and to be well versed in relevant technical and administrative challenges and opportunities.<br/><br/>Two workshops focus on learning from the Cryo-EM and Materials Discovery communities about their research workflows and their computational, storage, and network requirements. These workshops also cover data management plans, types of collaborations (national and international), and the pain points associated with sharing data and resources between institutions. A third workshop brings together university Vice Presidents of Research, Chief Information Officers, General Counsel, Institutional Review Board directors, and other ERN representatives to discuss the CI Sharing Policies needed to simplify sharing of data, infrastructure, and expertise between universities and across state lines. The final workshop, “Broadening the Reach,” includes researchers, educators, and senior administrators from under-represented colleges and universities, including MSIs, HSIs, and HBCUs. This workshop ensures that these institutions can develop strategic and attainable CI plans and successful, sustainable CC* funded initiatives that leverage the regional ERN collaboration.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
71
2018308Integration
CC* Integration-Small: Science Traffic as a Service (STAAS)
10/1/2020John BrassilNJPrinceton University
Future advances in scientific research will require computing on massive datasets and high-bandwidth streaming scientific instrument data. New experimental research infrastructures will be required to advance the understanding of the networks capable of supporting these increasingly demanding science data flows. Testing advances in networking technologies and protocols with actual high-speed science data traffic is vital to networking experimenters, scientific instrument users, and data scientists. To address this need, this project will develop a prototype of a decentralized computing and networking system to create, collect, and distribute a diverse collection of real and synthetic science traffic flows to the experimental research infrastructure user community. The proposed work will first develop and deploy the Science Traffic as a Service (STAAS) prototype on the Network Programming Initiative testbed connecting two US universities, and then prepare STAAS for later nationwide deployment on the FABRIC midscale networking research infrastructure now under development. The students exposed to research on networking testbeds with demanding science traffic workloads will learn skills to help strengthen a workforce prepared to advance the global-scale computing cloud application service platforms that are increasingly central to the US economy. All documents, software, presentations, and other artifacts created under this project will be made publicly available at http://www.cs.princeton.edu/~jbrassil/public/projects/staas/<br/><br/>The key project insight is that many science flows are already in transit at any moment on or between campuses. Using new campus cyberinfrastructure including passive optical Test Access Points, Network Packet Brokers, and data-plane programmable Ethernet switches, STAAS will safely tap and forward copies of these flows onto the experimental testbed, while preserving both the timing integrity of the flows and the data privacy of their payloads. Large-scale, high-bandwidth experiments will be achieved by enlisting participation of many or all STAAS edge nodes on multiple campuses. By introducing a service-based model, STAAS can help advance the networking research community's transport of emerging science data, and help the operators of scientific instruments increase the amount and quality of data collected by their instruments.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
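One way to picture the privacy-preserving tap: copy each packet but zero its application payload, so flow sizes and timing survive while user data does not. The sketch below uses scapy as a software stand-in for the hardware packet brokers and programmable switches the project actually employs; the interface names are invented and it requires root privileges.

```python
# Toy software analogue of a privacy-preserving tap: forward packets with
# payload bytes zeroed, preserving flow timing and sizes.
from scapy.all import IP, TCP, UDP, Raw, sendp, sniff  # needs root

TAP_IFACE, OUT_IFACE = "tap0", "exp0"  # hypothetical interface names

def scrub_and_forward(pkt):
    if pkt.haslayer(Raw):
        pkt[Raw].load = b"\x00" * len(pkt[Raw].load)  # keep size, drop content
        for layer in (IP, TCP, UDP):                  # force checksum recompute
            if pkt.haslayer(layer):
                del pkt[layer].chksum
    sendp(pkt, iface=OUT_IFACE, verbose=False)

sniff(iface=TAP_IFACE, prn=scrub_and_forward, store=False)
```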
72
1925482Integration
CC* Integration: Rutgers University Next-Generation Edge Testbed (RU-NET)
10/1/2019James von OehsenNJRutgers University New Brunswick
The technological landscape of computing is changing at a phenomenal rate with recent advances in big data, deep learning, data mining, cloud services, and cybersecurity, along with many novel consumer and scientific devices. Research universities, especially IT support, are challenged to be agile in this landscape, as they must keep pace with the needs of the research and user community. To bolster researchers working on these technologies, this project seeks to support two key network-centric requirements: incorporating user-owned devices at the edge of the network and moving bulk data between data collection nodes and data processing nodes over the network.<br/><br/>This project will design, implement, test, and utilize a novel research testbed, the Rutgers University Next-Generation Edge Testbed (RU-NET). RU-NET is a campus-wide platform designed to simplify the deployment of new and emerging edge technologies, while preserving the stability of the campus network. RU-NET will allow integrating user-owned devices in laboratories at the edge of the network with Rutgers's three campus networks into a unified 100 Gbit/s network managed using programmable switches and host devices. The RU-NET design is multi-layered, constituting researchers' applications, RU-NET-owned physical resources, and orchestrator software mapping applications to resources and isolating different research applications.<br/><br/>RU-NET will serve as a catalyst to position Rutgers University researchers at the forefront of scientific discovery and to improve compute and network infrastructure that benefits the University community as a whole. The project plans to develop course material to educate undergraduate and graduate students about novel network-centric technologies, training students from Rutgers's diverse student body. The project also plans to share best practices with academic institutions and enterprise communities that are planning similar distributed computing environments, with the possibility of this project serving as a template for similar future efforts.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
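The orchestrator's mapping-and-isolation role described above can be pictured as handing each application a dedicated VLAN and set of switch ports from a finite inventory. The toy sketch below is entirely illustrative and is not RU-NET's software; the VLAN range and port names are invented.

```python
# Toy resource orchestrator: each application gets its own VLAN and
# ports, so research slices stay isolated from one another.
from dataclasses import dataclass, field

@dataclass
class Testbed:
    free_vlans: list = field(default_factory=lambda: list(range(2000, 2010)))
    free_ports: list = field(default_factory=lambda: [f"leaf1:eth{i}" for i in range(1, 9)])
    placements: dict = field(default_factory=dict)

    def place(self, app, ports_needed):
        if len(self.free_ports) < ports_needed or not self.free_vlans:
            raise RuntimeError(f"insufficient resources for {app}")
        vlan = self.free_vlans.pop(0)
        ports = [self.free_ports.pop(0) for _ in range(ports_needed)]
        self.placements[app] = {"vlan": vlan, "ports": ports}
        return self.placements[app]

tb = Testbed()
print(tb.place("genomics-edge", 2))   # its own VLAN -> isolated slice
print(tb.place("iot-streaming", 3))
```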
73
2019000Compute
CC* Compute: From classroom to the lab: NMSU responds to the changing HPC landscape in New Mexico
7/1/2020Diana DugasNMNew Mexico State University
New Mexico State University (NMSU), a Minority-Serving Institution and Hispanic-Serving Institution, recognizes the vital need for universal access to a high performance computing (HPC) facility. Increasing the computational resources at NMSU, including storage, supports NMSU's high research need, instructors interested in incorporating HPC into their classroom activities, and state-wide collaborations with faculty who are excited to have a supportive HPC team to assist their classrooms. New Mexico as a state has a high need for personnel experienced with HPC use, but lacks resources dedicated to student learning. By utilizing existing relationships between NMSU and other NM-based universities, the new resources increase HPC-based classroom activities around the state through a dedicated queue.<br/><br/>Students trained in HPC use are in high demand both in industry and academia, providing our students with new opportunities. Much of the research performed on the HPC system is either unfunded or funded through smaller grants that cannot purchase dedicated HPC resources, and many users start their HPC journey without basic knowledge of Unix or of how to use an HPC system.<br/><br/>Time on regional and national supercomputing clusters is valuable, and those systems are not a good learning environment for those beginning their venture into computing-intensive science. With extensive collaborations across institutions, and having already streamlined NMSU-affiliated user on-boarding and account creation, NMSU is key to increasing HPC knowledge across the entire state.<br/><br/>This activity expands the existing HPC resources at NMSU by roughly 30% through the acquisition of 10 compute nodes, each with 36 CPU cores, at least 256 GB of RAM, and 960 GB of local storage, for a total of 360 cores; two GPU nodes similar to the compute nodes but with dual Tesla V100 GPUs; 600 TB of network-connected storage; and InfiniBand interconnect hardware. The cluster supports both research and educational activities in a range of fields from biology and environmental sciences to physics, chemistry, and material science.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
74
1925689Regional
CC* Regional: Tribal Consortium Research Network
7/1/2019Jason ArvisoNMNavajo Technical University
Navajo Technical University (NTU) is one of the nation's leading tribal colleges and a leader in delivering academic and research programs for the region and for the Navajo Nation. Tribal colleges regularly work in cooperation and in collaborative organizations. In 2017, NTU facilitated the development of a tribal consortium to build cooperative support and technology services. The tribal consortium includes Navajo Technical University, Crownpoint, NM; Dine College, Tsaile, AZ; Tohono O'odham Community College (TOCC), Sells, AZ; and A:shwi College and Career Readiness Center (ACCRC), Zuni, NM. Although Native American students have access to cultural, language, vocational, and Science, Technology, Engineering, and Math (STEM) programs, campus and student connectivity options are expensive and limited to cellular or broadband capacities below national standards. Students are not well connected to the internet and to the larger research and education community; there is a fundamental lack of internet connectivity with sufficient bandwidth to successfully participate in the ever-increasing opportunities of online courses and programs. This project improves campus network performance in support of education and research by increasing external connectivity to each campus, with NTU serving as a regional aggregator.<br/><br/>The project addresses distance education challenges by implementing an advanced wireless testbed to deliver homework-gap solutions for local students and faculty, and leverages a strong existing regional relationship with Northern Arizona University, a leader in tribal programs and services. The project emphasizes meetings to map needs to requirements, followed by design and training workshops, which benefit the regional Native American communities.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
75
1827199Infrastructure
Nilch Bee Naa Alkaa Go Ohooa Doo Eidii Tii (Using Air (Technology) to Learn and Understand New Things)
9/1/2018Jason ArvisoNMNavajo Technical University
Navajo Technical University (NTU) is one of the nation's largest tribal colleges and a leader in delivering academic and research programs for Native Americans. NTU students have access to a plethora of academic programs, including strong programs in Science, Technology, Engineering, and Math (STEM). However, NTU and the residents of the Navajo Nation are not well connected to the Internet and to the larger research and education community. Connectivity limitations, especially at Navajo community centers and at their homes, restrict NTU's ability to collaborate and contribute in the ever-growing integrated global research and education environment. There is a fundamental lack of Internet connectivity with sufficient bandwidth to successfully participate in the ever-increasing distance or online learning courses and programs.<br/><br/>This proposal will increase Wide Area Network connectivity by connecting NTU to the Front Range GigaPoP (FRGP) regional network at much higher network speeds, with dedicated bandwidth for NTU research and academic projects. The proposal addresses distance education challenges by implementing an advanced wireless test bed to deliver NTU distance education courses to Chapter Houses, tribal libraries, and other community anchor locations. This proposal engages the country's largest tribal university and is a collaboration with New Mexico and Arizona Tribal Colleges and Universities. It leverages a strong existing regional relationship with the FRGP, and it provides an organizational model for other tribal colleges to adopt a similar technology and associated collaborations. The proposal emphasizes needs and requirements-gathering meetings, followed by design and training workshops, which will benefit the regional Native American community.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
76
2019164Integration
CC* Integration-Large: Robust and Predictable Network Infrastructure for Wide-Area Hybrid Sensor Networks
10/1/2020Engin ArslanNV
Board of Regents, NSHE, obo University of Nevada, Reno
Increased use of Internet-of-Things (IoT) sensor devices is revolutionizing science and engineering applications, such as smart cities and environmental hazard monitoring. These sensors are often deployed in remote and distributed environments and rely on complex data networks that use both wired and wireless communication to stream large volumes of data back for analysis and distribution. Design and management of these complex hybrid networks is a daunting task due to network capacity fluctuations and dynamic data flow characteristics. This project develops a new software-driven network infrastructure to help automate network management of these emerging hybrid sensor networks for science and public service.<br/><br/>This project develops and deploys an operational Software-Defined Networking (SDN) network management and monitoring infrastructure for hybrid wide-area research networks spanning hundreds of kilometers in Nevada for distributed applications in wildfire, climate, and traffic safety. Current practices of inflexible network setup with limited monitoring capability struggle to satisfy ever-increasing science needs, such as on-demand data pipeline creation and quality-of-service satisfaction. Moreover, the project enhances network transparency through deployment of high-precision (i.e., port-, flow-, and packet-level) network monitoring and performance measurement (i.e., perfSONAR) nodes. The project implements a deep-learning-based anomaly detection mechanism to protect sensitive data from cyber attacks.<br/><br/>Integrating SDN with high-precision monitoring into wide-area sensor networks has the potential to accelerate adoption of IoT devices in many science areas by addressing core hybrid WAN (wide-area network) challenges such as routing, troubleshooting, and anomaly detection. Developing these integrations now is critical, because hybrid WAN infrastructures (particularly in non-urban regions) will remain bandwidth-limited relative to data generation devices into the foreseeable future. This project allows University of Nevada, Reno (UNR) to continue leadership in wide-area research IoT systems, expand institutional platforms for hybrid-cloud operations, and scale up key products for communities as part of UNR's land-grant mission.<br/><br/>Any data produced in the context of this project will be made available to the public and maintained throughout the duration of the project and beyond. Developed source code will initially be maintained in a private GitHub repository, which will be released at “https://github.com/UNR-HPN/SDNWideArea” periodically when the codebase becomes stable. The repository will be maintained as part of ongoing support operations by UNR cyberinfrastructure personnel assigned to the infrastructure at the close of the project. Performance monitoring in the project will be associated with regional dashboards as best practices dictate.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
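A common shape for the deep-learning anomaly detector mentioned above is an autoencoder trained on normal flow features, flagging inputs that reconstruct poorly. The PyTorch sketch below shows that pattern end to end; the framework choice, feature count, and threshold rule are assumptions for illustration, not the project's design.

```python
# Hedged sketch: autoencoder-based anomaly detection on synthetic
# "flow feature" vectors; real deployments would use measured features.
import torch
from torch import nn

torch.manual_seed(0)

# Tiny autoencoder: 8 flow features -> 3-dim bottleneck -> 8 features.
model = nn.Sequential(nn.Linear(8, 3), nn.ReLU(), nn.Linear(3, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

normal = torch.rand(512, 8)  # stand-in for normalized "normal" traffic
for _ in range(200):         # train to reconstruct normal traffic only
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = err.mean() + 3 * err.std()  # simple 3-sigma cutoff
    suspect = torch.rand(1, 8) * 5          # deliberately out-of-range input
    score = ((model(suspect) - suspect) ** 2).mean()
    print("anomalous" if score > threshold else "normal", float(score))
```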
77
1827186Infrastructure
CC* Networking Infrastructure: Building a high-performance, flexible DMZ backbone at the University of Nevada, Reno
7/1/2018Graham KentNV
Board of Regents, NSHE, obo University of Nevada, Reno
Advances in science and engineering and the use of science-driven technology are transforming the scale and complexity of research at the University of Nevada, Reno (UNR). Fundamental to supporting these shifts in data volume and research workflows are core networking and computing infrastructures maintained by the central campus Office of Information Technology. As part of a strategic plan to increase technology support of research activity at the university, a collaboration of faculty and IT professionals are building a dedicated, high-speed research network backbone across campus with connections to off-campus high-performance-compute facilities, remote sensor networks, and national peering points. The primary tasks of this project are to: 1) increase external connectivity to Internet2 and Pacific Wave from 10G to 100G; 2) extend a set of 40G dedicated research network paths with data transfer management and monitoring to geographically distributed locations on campus to serve critical research efforts; 3) implement a Science DMZ network architecture for scientific and research workflows across the UNR network and as a model for Nevada's higher education community; and 4) augment key wide-area network end-to-end research workflows impacting rural areas all the way to UNR's high-performance research computing colocation partner Switch, a new state-of-the-art world-class datacenter.<br/><br/>This project solves current and near-future connectivity problems for science-focused researchers at UNR by enabling larger-scale science, saving time and increasing the rate of discovery, and creating the capability for the institution to participate in the emerging Pacific Wave and National Research Platforms.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
78
2126280Planning
CC* Planning: Undertaking a Process that will Create a Comprehensive Blueprint for Improving Cyber-Infrastructure at John Jay College, CUNY
7/15/2021Anthony CarpiNY
CUNY John Jay College of Criminal Justice
This award is funded in whole or in part under the American Rescue Plan Act of 2021 (Public Law 117-2).<br/><br/>John Jay College of Criminal Justice, a senior college of the City University of New York that is both a Minority-Serving and a Hispanic-Serving Institution, is creating a comprehensive blueprint for improving cyber infrastructure to advance instruction, learning, and research in the critical areas of forensics, cyber security, and bioinformatics. An interdisciplinary team of faculty, research staff, and information technology professionals is convening to identify, enumerate, and prioritize the college’s cyberinfrastructure needs, and produce a blueprint for meeting them. The goal of John Jay’s Campus Cyberinfrastructure planning initiative is to facilitate the development of a John Jay College Research Campus Cyber Infrastructure Resource Cluster. <br/><br/>This team is undertaking a comprehensive needs assessment focusing on the college’s overall management of cyberinfrastructure resources; requirements for hardware and high-performance computing; requirements for software, database, and simulation applications; applications specific to the college’s unique criminal justice focus; and professional development training needs for faculty, staff and students. This rigorous set of planning activities will produce an investment plan for an improved set of research computing resources to facilitate sophisticated faculty research across the disciplines. It further ensures the continued competitiveness of John Jay students in computing-focused career fields, particularly within the justice and security realms. Project deliverables include a five-year cyberinfrastructure investment plan that forms the basis for future proposal submissions to the NSF MRI and CC* programs to seek partial support for implementation.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
79
2018892Planning
CC* Planning: Designing a Process to Improve Research Computing Infrastructure at City Tech
7/1/2020
Justin Vazquez-Poritz
NY
CUNY New York City College of Technology
New York City College of Technology, a comprehensive college within the City University of New York and a Hispanic-Serving Institution, is creating a five-year strategic plan for cyberinfrastructure (CI) to guide future investments in, and strategic management of, research computing resources. A team of faculty engaged in international high-performance computing projects is convening a broad-based stakeholder group to consider issues of CI governance and leadership, high-performance computing, data infrastructure, and faculty training and workforce development. The ecosystem of CI users includes researchers working on particle physics, galaxy formation, genomics and integrative systems biology, and protein dynamics and drug discovery, as well as faculty in a broad range of domains who need to learn the fundamentals of research computing, and students who are exposed to CI through their faculty mentors. These stakeholders are considering how to construct laterally and vertically integrated partnerships that draw upon the expertise of practitioners from the New York State Education and Research Network, The Carpentries, and the NSF Established Program to Stimulate Competitive Research.<br/><br/>The goal of the CC* planning initiative is to design a flexible, extensible, accessible research cyberinfrastructure that is capable of meeting the evolving computational needs of City Tech's most advanced CI users while cultivating a long tail of impact within the larger CI ecosystem. Identified institutional needs include a faculty-led governance structure, systematic rather than ad hoc resource acquisition policies to avoid the creation of silos, increased professional CI support for users, and the creation of a stable and sustainable starting point for systemic CI planning and governance. This planning grant provides that starting point. The work of the project will proceed through a combination of forums, Working Groups, and consultations from entities with expertise in CI development. Topics for investigation include a faculty-led leadership structure, data, HPC, and stakeholder engagement. Project deliverables are a five-year Cyberinfrastructure Master Plan and planning for future proposal submissions to the NSF MRI and CC* programs to seek partial support for implementation of the Master Plan.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
80
2018822Compute
CC* Compute: A High Performance GPU Cluster at Syracuse University
9/1/2020Samuel ScozzafavaNYSyracuse University
Syracuse University is constructing a new compute cluster using Graphics Processing Units. This production cluster will serve the needs of the Syracuse University research community and the broader scientific community through integration with the Open Science Grid. The use of Graphics Processing Units to solve problems in science and engineering has grown significantly in recent years, and campus cyberinfrastructure providers have seen increasing demand for access to Graphics Processing Units in lieu of more traditional compute architectures. Syracuse University has created strong partnerships among campus-level cyberinfrastructure experts and, at the national level, through the University's contributions to the Open Science Grid. This cluster provides a significant resource for enabling science and cyberinfrastructure development at Syracuse University. The application drivers come from a variety of domains including computational forensics, high-energy physics, development of smart vision systems, computational chemistry, biomedical engineering, soft-matter physics, and gravitational-wave physics.<br/><br/>Access to the Graphics Processing Units will be an important resource for the education of undergraduate and graduate students. Given the growth in Graphics Processing Unit computing, Syracuse University faculty are introducing Graphics Processing Unit programming to undergraduate students from a variety of backgrounds. The capabilities and availability of this cluster will allow for broader adoption within the classroom environment, as well as provide compute power to researchers. Undergraduates from across the College of Engineering and Computer Science and the College of Arts and Sciences can use these resources, giving them early preparation for new computing schemes. Access via the Open Science Grid will give researchers from across the U.S. access to the compute resources.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
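A first classroom example of the kind used to introduce GPU programming typically contrasts the same element-wise computation on CPU and GPU. The sketch below is not from the award; it assumes a CUDA-capable node with the CuPy library installed.

```python
# Illustrative only: identical element-wise math on CPU (NumPy) and GPU (CuPy).
import time

import numpy as np
import cupy as cp

n = 10_000_000
x_cpu = np.random.rand(n).astype(np.float32)

t0 = time.perf_counter()
y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0      # element-wise math on the CPU
cpu_s = time.perf_counter() - t0

x_gpu = cp.asarray(x_cpu)               # copy the array to GPU memory
t0 = time.perf_counter()
y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0      # same math, executed on the GPU
cp.cuda.Stream.null.synchronize()       # wait for the kernel to finish
gpu_s = time.perf_counter() - t0        # first call includes JIT-compile overhead

print(f"CPU: {cpu_s:.4f}s  GPU: {gpu_s:.4f}s")
assert np.allclose(y_cpu, cp.asnumpy(y_gpu), atol=1e-5)
```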
81
1925596Compute
CC* Compute: Accelerating Computational Research for Engineering and Science (ACRES) at Clarkson University, A Campus Cluster Proposal
7/1/2019Joshua FiskeNYClarkson University
Clarkson University is building a computational cluster (ACRES: Accelerating Computation Research for Engineering and Science) to support data- and computationally-intensive projects aligned with Clarkson's four interdisciplinary research themes: Data Analytics, Healthy World Solutions, Advanced Materials Development, and Next Generation Healthcare. ACRES facilitates high-impact, collaborative research that requires access to high-performance computing (HPC) resources, enables research not previously practical or feasible, and supports student-learning opportunities through credit-bearing courses, undergraduate research, and an existing NSF REU site focusing on HPC. As a campus resource, ACRES is made available to any faculty member or student at the University according to queueing policies implemented to ensure fair access. ACRES also supports Clarkson's increased focus on computational research and a cluster hire of computationally active faculty. <br/><br/>The ACRES compute cluster replaces an existing, five-year-old high-performance compute cluster whose computational capacity provided 1.05M core-h/yr. Research demand for computational capacity has grown to an identified total of 8.5M core-h/yr. ACRES is sized to meet current demands and modest near-term growth, with unused computational capacity being shared via the Open Science Grid (OSG) to benefit the broader scientific community. This new computational resource provides 9.8M core-h/year through 1120 cores, a high-speed InfiniBand interconnect, four NVIDIA Tesla V100 GPUs, and 40 TB of scratch storage.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
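The stated capacity can be sanity-checked: core-hours per year are simply cores times hours of availability. A quick check, assuming round-the-clock availability (an assumption, not a figure from the award):

```python
# 1120 cores running 24x7 for a year.
cores = 1120
hours_per_year = 24 * 365          # 8760
print(cores * hours_per_year)      # 9,811,200 ~ the stated 9.8M core-h/yr
```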
82
1925590Compute
CC* Compute: High Performance Campus Computing for Institutional Research at the American Museum of Natural History
7/1/2019Juan MontesNYAmerican Museum Natural History
Through the National Science Foundation CC* program, the American Museum of Natural History (AMNH) expands the High-Performance Computing (HPC) capabilities that directly support the Museum's research. AMNH conducts scientific research and educational activities across astrophysics, anthropology, biology, and geosciences. The increasingly data-intensive nature of this research requires greater access to computational resources and to ever more sophisticated tools, including local and remote HPC clusters.<br/><br/>In this project, AMNH is expanding its on-premises computing cluster capacity and consolidating all existing clusters into a unified open-source software framework. These clusters are connected to the Museum's Science DMZ, a high-performance network specifically designed for research data flows, which provides high-speed network access between Internet2 and the AMNH on-premises clusters. Additionally, AMNH researchers can execute complex workloads at scale using cloud resources at Amazon via the same local HPC management framework. Federation with InCommon provides both AMNH researchers and outside collaborators with secure access to these resources via a common authentication and authorization framework. Finally, the AMNH clusters are integrated with the Open Science Grid, allowing AMNH to offer idle computing cycles to the wider research community while providing AMNH researchers with the same access to remote computing resources. These improvements greatly expand the overall HPC capacity available to AMNH scientists, increasing the speed and effectiveness of their research and decreasing time to discovery. Additionally, the work of AMNH scientists informs the Museum's educational and curatorial programs, directly benefiting AMNH students and the public.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
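Open Science Grid sites typically accept jobs described through HTCondor, OSG's workload manager. A minimal sketch of a job submission using the HTCondor Python bindings follows; the executable, arguments, file names, and resource requests are hypothetical placeholders, not AMNH's actual configuration.

```python
# Minimal sketch: queue one batch job via the HTCondor Python bindings.
import htcondor

sub = htcondor.Submit({
    "executable": "/usr/bin/python3",
    "arguments": "analyze.py specimen_scan.dat",   # hypothetical workload
    "output": "job.out",
    "error": "job.err",
    "log": "job.log",
    "request_cpus": "1",
    "request_memory": "2GB",
})

schedd = htcondor.Schedd()                 # connect to the local scheduler
result = schedd.submit(sub, count=1)       # queue one instance of the job
print("submitted cluster", result.cluster())
```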
83
1827153Infrastructure
CC* Networking Infrastructure: High Performance Research Data Infrastructure at the American Museum of Natural History
7/1/2018Juan MontesNYAmerican Museum Natural History
Through the National Science Foundation's Campus Cyberinfrastructure (CC*) program, this project provides a major data network upgrade for the American Museum of Natural History (AMNH) that makes scientific data flows a priority. AMNH conducts scientific research and education activities spanning astrophysics, geosciences, and genomics. Scientific collaborations require increasing network capacity among scientific instruments, collaborators, and researchers. These data networking improvements to AMNH directly support these research activities. AMNH's ability to move large data sets quickly between its campus and other sites across the nation and throughout the world is critical to the success of the Museum's research program.<br/><br/>In this project, AMNH implements a high-speed Science DMZ, a network specifically tuned for large data transfers. Through the Science DMZ, AMNH connects to the NYSERNet Research and Education Network, which provides pathways to the higher education community in New York State, Internet2, and beyond. Using purpose-built Data Transfer Nodes (DTNs) based on the FIONA concept, AMNH scientists can move diverse, large-scale data sets between research facilities, supercomputing centers, and collaborators via a dedicated 10Gb/s connection. The efficiencies gained enable greater collaboration, more intelligent and timely coordination of research, and increased research throughput and quality. To further facilitate collaboration, the project leverages the Globus file transfer system to easily orchestrate data transfers and disseminate AMNH's research to the broader research community. Additionally, AMNH integrates with the InCommon Federation, utilizing a common framework to provide researchers secure access to online resources. Using perfSONAR, the Science DMZ is continually monitored to ensure that throughput, latency, and performance targets are met, and data can flow unimpeded. This allows Museum researchers to focus on science rather than the logistical details of moving data. Additionally, the work of AMNH scientists informs the Museum's educational and curatorial programs, directly benefiting AMNH students and the public.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
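Globus transfers like those described above can also be scripted against the Globus SDK. The sketch below assumes a transfer access token has already been obtained through Globus's OAuth2 login flow; the endpoint UUIDs, paths, and label are placeholders, not AMNH's actual endpoints.

```python
# Minimal sketch: submit a recursive directory transfer with globus-sdk.
import globus_sdk

authorizer = globus_sdk.AccessTokenAuthorizer("TRANSFER_ACCESS_TOKEN")  # placeholder
tc = globus_sdk.TransferClient(authorizer=authorizer)

SRC = "ddb59aef-6d04-11e5-ba46-22000b92c6ec"   # placeholder endpoint UUID
DST = "ddb59af0-6d04-11e5-ba46-22000b92c6ec"   # placeholder endpoint UUID

tdata = globus_sdk.TransferData(tc, SRC, DST, label="dataset sync")
tdata.add_item("/data/genomics/run42/", "/ingest/run42/", recursive=True)

task = tc.submit_transfer(tdata)               # Globus retries and verifies
print("task id:", task["task_id"])
```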
84
2126199Planning
CC* Planning: Virtual Research-Education Ohio (VROhio)
7/15/2021Pankaj ShahOHOhio State University
Through a unique opportunity, Governor Mike DeWine allocated $12.1M from the Governor's Emergency Education Relief (GEER) program to upgrade OARnet's last-mile internet connections to 40 smaller higher education institutions (HEIs). This funding is a result of growing reliance on cyberinfrastructure to support in-person and remote teaching and research. For smaller institutions, having access to additional advanced CI enriches existing programs while enabling new opportunities for research and training that rely on external, shared, and co-developed resources. To foster full use of the new connectivity resources, OARnet and a team from nine smaller HEIs (Northwest State Community College, Chatfield College, Lorain County Community College, Terra State Community College, Franciscan University of Steubenville, Sinclair Community College, Columbus State Community College, Xavier University, and Baldwin Wallace University) and service providers (OSC and the CWRU Electron Microscopy Facility), supported by national entities (EPOC, The Quilt, InCommon/Internet2, Trusted CI), are planning two CC* proposals: (1) a "Regional Connectivity for Small Institutions of Higher Education" track proposal and (2) a "CI-Research Alignment" track proposal. <br/><br/>The regional proposal will develop statewide research and teaching DMZ and VPN networks and a federated identity management system for secure access to shared network-accessible resources in classrooms and labs. The CI-Research Alignment proposal will provide the human infrastructure and processes needed to co-develop shared educational resources and curricula, such as interactive laboratory simulations and other virtual environments for STEM training, that will also enhance the high school to higher education STEM pipeline (College Credit Plus Program).<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
85
2018912Integration
CC*: Integration-Large: POWWOW: Software-Defined Infrastructure for Wireless, Edge Cybersecurity Testbeds
10/1/2020Anish AroraOHOhio State University
The project, POWWOW, is a testbed for large-scale, real-world experimentation in sensing, edge computing, networking, cyber-security, privacy, and ethics. POWWOW will extend the large campus network that is in regular use at The Ohio State University (OSU). By embedding novel software-defined sensing, computing, and networking capabilities at the edge and the core of this network, researchers will be able to study, in a carefully controlled manner, real-world communications and activities on campus, in order to develop new technology not only for sensing and networking but in particular technology and policies for cyber-security, privacy, and the ethical use of data.<br/><br/>The POWWOW research cyberinfrastructure is unique because it integrates with OSU's large-scale production networks to support "living lab" studies in smart sensing. Its focus on real-world sensing and device operations distinguishes POWWOW from other wireless networking testbeds that focus on communication and networking. Researchers and educators will add their own devices and systems, including diverse sensors, wireless sensor networks, software-defined radios, controllers, actuators, and vehicles, all of which can be dynamically instrumented. Researchers may then, privately and in real time, process data from these devices, share the data, and communicate securely with authenticated users and their devices, leading to the discovery of new methods for sensing the acoustic and radio-frequency environment to extract private information, as well as new methods for concealing information from being sensed. The project team is uniquely composed of researchers across several OSU institutes as well as cyberinfrastructure engineers from OSU's Office of the CIO who will translate POWWOW into sustained, production use for wide-ranging projects in research and education. Lessons learned from POWWOW will enable systems like it to be replicated in other research environments, thus greatly scaling research.<br/><br/>POWWOW will accelerate the ability of research communities across the nation to perform new types of data-driven research and education. Because it connects to state-level resources, such as the Ohio Cyber Range, it may be leveraged at the national level through initiatives such as Fabric-Net, to stimulate the development of an ethical basis for designing privacy-preserving services in a world of ubiquitous sensing where explicit consent for every service is infeasible. Project activities will engage students via academic programming and R&D opportunities, ranging from scrubbing and curating data, to hackathon-type projects that explore innovative uses of data, to developing, deploying, and supporting POWWOW capabilities. This project will contribute to broadening participation in computing by prioritizing research users and projects that engage undergraduates from under-represented groups, leveraging established pathways at OSU such as the GEM Consortium and the Ohio LSAMP Alliance.<br/><br/>POWWOW will be hosted on campus at http://powwow.osu.edu. This site will disseminate the artifacts resulting from the project (such as publications, software, system designs, and datasets), along with the policies and guidelines for accessing and using POWWOW. The site will be maintained for the duration of this project, and potentially well beyond it, as POWWOW is adopted by researchers and becomes widely used.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
86
1925678Infrastructure
CC* Networking Infrastructure: Network for Data Driven Science in Allied 21st Century Smart Multi-Campus System: A Use Case Design Through Kent State's Sharable Science DMZ
7/15/2019Javed KhanOHKent State University
This project from Kent State University (KSU) designs a Science DMZ sharable by the ten KSU campuses spread across northeastern Ohio, aligned with NSF's goal of innovating more scalable approaches to expanding advanced cyberinfrastructure for massive data-driven science. A Science DMZ at KSU's main campus connects to OARnet's optical exchange with 100 Gbps of unimpeded transfer capacity. KSU and OARnet teams will build a virtual DMZ perimeter over a highly responsive regional WAN, giving researchers at bandwidth-constrained regional campuses uniform access to the cyberfacility. IPv6 and shared network innovations such as perfSONAR and InCommon are employed. <br/><br/>The Science DMZ serves a broad set of compelling big-data projects. The project launches outdoor ultra-high-speed wireless access infrastructure on campuses connected to the Science DMZ to facilitate big-data-driven dense sensor and IoT research projects. This project is unique in that a shared regional Science DMZ is leveraged across ten campuses throughout northeastern Ohio. The region of Kent's allied campuses lies in America's Rust Belt. The project brings the world of data-driven STEM research closer to these students, a large percentage of whom are Pell-eligible and/or first-generation college students.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
87
2018453Regional
CC* Regional: Small Institution Multiple Organization Regional OneOklahoma Friction Free Network (SI-MORe-OFFN)
8/1/2020Vonley RoyalOK
OKLAHOMA STATE REGENTS FOR HIGHER EDUCATION
The Small Institution Multiple Organization Regional OneOklahoma Friction Free Network (SI-MORe-OFFN) project makes advanced cyberinfrastructure tools and services available to five smaller campuses in Oklahoma, covering scientific disciplines from chemistry and agriculture to health and nursing, by connecting these campuses to the OneOklahoma Friction Free Network (OFFN). The five targeted campuses are Redlands Community College, University of Science and Arts of Oklahoma, Oklahoma State University Institute of Technology, Oklahoma State University in Oklahoma City, and Oklahoma Christian University. The project provides new and diverse research collaboration opportunities for faculty across Oklahoma and enables the smaller, target institutions to expand their research and education activities. The project also has a significant educational impact for undergraduate and community college students at these metropolitan and rural campuses by providing them with additional STEM and cyberinfrastructure educational opportunities as well as exposure to leading researchers and cyberinfrastructure practitioners.<br/><br/>The SI-MORe-OFFN project extends the OneOklahoma Friction Free Network to five smaller institutions by implementing Science DMZs, Data Transfer Nodes, and network performance monitoring capabilities at identified locations and integrating these with other similar capabilities across Oklahoma. In addition, the project enhances the skills and abilities of technical personnel at each campus and facilitates cross-campus collaborations, including shared uses of computing tools, scientific equipment, and educational materials, by engaging faculty and students in the larger OneOklahoma CyberInfrastructure Initiative (OneOCII). OFFN is managed by OneNet, Oklahoma's research and education network, and scientific leadership is provided by the University of Oklahoma.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
88
1925744Regional
CC* Regional: Extended Vital Education Reach Multiple Organization Regional OneOklahoma Friction Free Network (EVER-MORe-OFFN)
7/15/2019Stephen WheatOKOral Roberts University
The CC* Networking Infrastructure project, Extended Vital Education Reach Multiple Organization Regional OneOklahoma Friction Free Network (EVER-MORe-OFFN), expands the network capabilities of Cameron University (CU), East Central Oklahoma University (ECU) (both upgraded to 10Gbps), and Oral Roberts University (ORU) (upgraded to 100Gbps) by connecting them to the OneOklahoma Friction Free Network (OFFN) through community-based Science DMZ technologies. Using 10Gb and 100Gb channels, OFFN interconnects Oklahoma's research organizations. OFFN allows the researchers at each institution to reliably connect to supercomputers and big data repositories, increase computational capabilities, and increase inter-institution collaboration.<br/> <br/>Previously, researchers at the EVER-MORe-OFFN institutions lacked sufficient throughput when working with large data sets. The EVER-MORe-OFFN expansion of OFFN affords these researchers access to a high-speed, efficient network, allowing for new analyses and simulations and an expanded repertoire of course projects and assignments available to professors. Additionally, the High Performance Computing Center at ORU is open to the collaborating universities through this expansion.<br/> <br/>Adding these three institutions increases total OFFN participation to 12 institutions. As OFFN expands, so does the population served, which gains access to resources that create research opportunities previously unavailable. This project has broader implications for the scientific community, including the next generation of researchers and scientists, unlocking the door to many exciting discoveries.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
89
2019216Compute
CC* Compute: GPU-based Computation and Data Enabled Research and Education (G-CoDERE) at PSU
7/1/2020Feng LiuORPortland State University
Data- and computationally-intensive research and education is increasingly important at Portland State University (PSU). Scientists and students at PSU are producing massive quantities of data and investigating machine learning and data science approaches to research problems in many different fields. Graphics processing units (GPUs) excel at large-scale parallel computing and are critical for analyzing and visualizing such massive quantities of research data and developing data-driven technologies. This project establishes PSU's first GPU computing infrastructure and will support PSU research groups in a wide variety of fields, including Computer Science, ECE, Physics, Chemistry, Statistics, and Speech & Hearing Science, and benefit researchers at partner universities, including Oregon Health and Science University and Lewis & Clark University. It provides undergraduates, graduate students, and postdoctoral researchers with new training opportunities for data-driven research; enables the creation of new machine learning, data analytics, and visualization courses; and supports upgrading existing courses with the emerging data-driven paradigm. It facilitates K-12 outreach programs, such as Oregon Mathematics, Engineering, Science Achievement and Saturday Academy's Apprenticeships in Science and Engineering, with GPU-enabled project and internship opportunities. This infrastructure allows PSU, Oregon's most diverse public university, to provide a state-of-the-art GPU facility and learning opportunities to students from underrepresented groups.<br/><br/>This project establishes PSU's first GPU computing infrastructure by acquiring twenty GPU servers with related high-performance data storage. This GPU infrastructure complements PSU's Coeus high-performance computing cluster to support GPU-enabled research and education at PSU and is shared with external users through the Open Science Grid.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
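As an illustration of the machine-learning workloads such GPU servers target, the sketch below runs one training step of a small network on whatever GPU is available. It is not from the award; it assumes PyTorch is installed, and the model and data are synthetic stand-ins.

```python
# Illustrative only: one GPU-accelerated training step with PyTorch.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 128, device=device)   # synthetic input batch
y = torch.randn(256, 1, device=device)     # synthetic targets

opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                             # gradients computed on the GPU
opt.step()
print(f"device={device}, loss={loss.item():.4f}")
```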
90
2019161Team
CC* Team: Oregon Big Data Research and Education Team
7/1/2020Brett TylerOROregon State University
Today, more than ever before, big data pervade every area of the life, environmental, biomedical, earth, marine, computational, physical, urban, and social sciences, as well as numerous other domains. Increasingly powerful computing technologies have opened the pathway for researchers to address major global challenges through use of large and heterogeneous data sets and through complex models and simulations. This project provides domain scientists, including research students, with the expertise and training needed to collaborate effectively with specialists in these advanced computational and statistical methodologies. The project also provides training and research experiences for students and instructors from small regional colleges, including Hispanic-serving and Native-American-serving institutions. To widely share best practices, it supports a community of practice. <br/><br/>The project employs research and training staff (facilitators) with expertise in data integration, multi-modal data analytics, and machine learning. These three related sets of methods enable the analysis of large, complex data sets of different types or from different sources, which may or may not have been collected as part of a planned study. Specifically, a four-person facilitation team is established across Oregon State University, the University of Oregon, and Portland State University. The interdisciplinary, cross-institutional team will establish the tools and management practices to serve researchers in the state. The research facilitated by this project will lead to better understanding of earthquakes, diverse ecosystems, and plant and animal form and function. It supports development of faster computing systems, more secure energy systems, and improved environmental health. The data challenges posed by these application areas also motivate new foundational research in advanced data analytics and machine learning. The project also prepares a new generation of students from diverse backgrounds to enter the knowledge economy.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
91
2019035Compute
CC* Compute: Acquisition of a Lehigh University HPC cluster to enhance collaboration, research productivity and educational impact
7/1/2020Edmund Webb IIIPALehigh University
This project is constructing and deploying high-performance computational capability at Lehigh University that will enable new research collaborations across Physics, Chemistry, Biology, Computer Science, and Engineering. The work directly supports NSF's mission to promote the progress of science by providing critical infrastructure for broader incorporation of computation in science and engineering research pursuits. Further, a portion of the new resources is dedicated to education around computation, including support of efforts to increase the number of members of underrepresented populations in STEM-related professions. Finally, a portion of the resources is dedicated to contributing to the national Open Science Grid (OSG). <br/><br/>The new resource combines CPU and GPU nodes to further broaden the applications and associated research that are supported, including electronic structure calculations, atomistic and coarse-grained molecular simulations, Monte Carlo simulations, data science, and deep learning models. It also allows researchers who have traditionally utilized CPU-based architectures to explore GPU-based computation. Some of the specific scientific drivers around which new collaborations will be built include (i) understanding fundamental mechanisms of heterogeneous catalytic processes, (ii) realizing predictive design of bulk heterojunction organic solar cells, (iii) predicting thermo-mechanical properties of advanced materials, and (iv) elucidating mechanisms behind flow-responsive proteins in human blood. Work under this support expands the Lehigh community's research connections and advances Lehigh's ability to generate high-impact research, educate the next generation of scientists and engineers, and expand the usage of high-performance computation and simulation-guided science in research and education.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
92
2018933Compute
CC* Compute: High-Performance Computing Backbone for Accelerating Campus-Wide and Regional Research
7/1/2020Aaron WemhoffPAVillanova University
Villanova University is acquiring a high-performance computing resource to expand the capabilities of at least 27 identified researchers in engineering, physical sciences, and social sciences. The grant also prepares students for the STEM workforce through engagement in computational research, education via the creation and modification of ten undergraduate and graduate courses, and student training in high-performance computing operations. The impact of this grant extends well beyond Villanova University by establishing the Southeastern Pennsylvania High-Performance Computing Consortium to create new collaborative opportunities between Villanova and non-Villanova researchers, and by connecting Villanova to the broader Open Science Grid network to distribute resources to researchers nationally.<br/><br/>The grant focuses on three objectives to expand Villanova's computational infrastructure for research and education. First, this work establishes new computational hardware, including 1,184 central processing units (CPUs), 10,240 graphical processing units (GPUs), and 448 terabytes (TB) of data storage, along with complementary software and networking resources. Second, resource usage expands fundamental research in seven project areas relevant to the mission of the National Science Foundation, including (1) materials for fusion energy applications, (2) causes of various nasal sinus diseases, (3) ion transport in energy storage devices, (4) speech perception and language processing, (5) river behavior, (6) nonlinear mechanical behavior, and (7) machine learning algorithms. Finally, practices are developed to mitigate the costs associated with growing and maintaining the computing resource.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
93
2018299Team
CC* Team: Research Innovation with Scientists and Engineers (RISE)
7/1/2020Chad HannaPA
Pennsylvania State Univ University Park
The pace of scientific discovery and the dissemination of scientific knowledge are increasingly being driven by computation through modeling, data science, and digital communication platforms. Not all researchers are equally positioned to leverage this computational revolution, due to inadequate expertise in their groups or insufficient funding to hire computational experts into full-time positions. Penn State is working to ensure that researchers and educators across the 24-campus Penn State system have access to the cutting-edge cyberinfrastructure and computational expertise that they need to conduct the highest quality research and education. Penn State's approach is to build a team of cyberinfrastructure facilitators who are shared across investigators and who consult on projects at various scales to bring shared knowledge and the best practices of modern computational techniques and tools to the broadest possible Penn State community. These facilitators, known as the "Research Innovation with Scientists and Engineers" (RISE) team, are experts in databases, visualization, code optimization, application development, web services, and cloud computing. They have broad knowledge of research cyberinfrastructure and are able to architect, design, and develop new cyberinfrastructure. They also have deep knowledge of various scientific domains and will enable computational discovery. Investing in such a team will pay substantial dividends through increased productivity of faculty, more efficient use of research and education funding, and ultimately new discoveries across a broad swath of scientific domains including Physics, Astronomy, Bioinformatics, Chemistry, Energy, and Climate Modeling.<br/><br/>This project builds a cyber-team for Research Innovation with Scientists and Engineers (RISE) who will partner with campus-level CI experts, domain scientists, research groups, and educators to drive new approaches that support scientific discovery across the state-wide Pennsylvania State University system, including 24 campuses serving more than 100,000 students. The RISE team will directly facilitate the usage and creation of research cyberinfrastructure across domains including Astronomy, Biology, Chemistry, Meteorology, Physics, and more through consulting and providing direct services to faculty. The RISE team will partner with the Open Science Grid to establish Penn State as an OSG site, develop replica-exchange molecular dynamics software, apply machine learning to molecular biophysics, build digital signal processing software for radio astronomy, collaborate on feature development and testing with HTCondor, onboard new climate modeling tools and software in a sustainable and maintainable ecosystem, develop a gene sequencing management platform, deploy and maintain infrastructure to support real-time gravitational wave analysis with LIGO, and build a science gateway for stellar astrophysics simulations. Through the RISE team's shared knowledge, they will elevate the productivity of researchers who use and develop cyberinfrastructure, allowing them to accomplish far more than they could in isolation. In order to broaden participation, the investigators will develop a seed grant program whereby faculty can apply to receive extended engagement with the RISE team, with members embedded into research groups. RISE members will also regularly conduct training workshops and seminars in response to the needs of faculty across all Penn State campuses. Broadening-participation and coordination activities will involve regular travel among the branch campuses by RISE team members and project investigators.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
94
1925704Infrastructure
CC* Networking Infrastructure: A High-Performance Science DMZ and Dedicated Research Network for Duquesne University
7/1/2019Sheryl ReinhardPADuquesne University
Duquesne University is implementing a dedicated research network and Science DMZ to facilitate science-driven research, education, and collaboration. A vibrant and growing Duquesne research community conducts leading-edge research and scientific experimentation in a broad range of science domains. The campus network is unable to support the data-intensive workflows generated by the research community. While science drivers and research workflows vary, there is a common need for fast and unrestricted data movement to internal and external research computing resources. The research-focused Science DMZ cyberinfrastructure addresses the limitations of the general-purpose network by establishing a dedicated, scalable, and secure network that is optimized for data-intensive science workflows. <br/> <br/>The project objective is to establish a dedicated 120Gbps Science DMZ network to support the Duquesne research community. A critical element of the Science DMZ is a Data Transfer Node (DTN). The DTN will provide University researchers with a secure, highly available, high-performance data exchange point to facilitate the secure and efficient sharing of research data with collaborating institutions and national computational research facilities. The Science DMZ will connect to a dedicated 10Gbps uplink to the Pennsylvania-wide education and research computing network (KINBER).<br/> <br/>The new friction-free cyberinfrastructure not only accelerates large-scale data movement but also facilitates new and more efficient workflows, resulting in the acceleration and advancement of scientific discovery. The research network improves opportunities to train the next generation of research scientists and to inspire underrepresented groups to participate in scientific research activities.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
95
1925192Compute
CC* Compute: Building a state-of-the-art campus compute resource at Franklin & Marshall College
7/1/2019Carrie RamppPAFranklin and Marshall College
Franklin & Marshall College (F&M) is building and deploying a campus cluster resource to better meet the needs of its researchers and their students, who require greater access to high-performance compute resources to support intensive data analysis and computation. This project provides much-needed local compute nodes for F&M's researchers and students while also contributing to the growing fabric of shared computing clusters across the country. The project contributes these new resources to the Open Science Grid (OSG), a national, distributed computing partnership that allows participants to share their resources with other researchers to maximize the impact these investments have on scientific research and discovery. As an institution, F&M has many top-tier scientific researchers who also partner with students in research that is regularly funded by public and private agencies. Providing substantially improved infrastructure to support this research will advance and expand the institution's capacity to support important investigations, from the search for pulsars to brain science. F&M has a demonstrated commitment to recruiting and supporting STEM students, and this infrastructure improvement and investment allows the institution to continue to be a leader in this arena, providing access to the best resources and opportunities for future scientists. This project, similar to other recent initiatives, demonstrates how it is possible to design and implement significant research infrastructure, even at a smaller institution, that advances scientific discovery both on and beyond the campus.<br/><br/>The compute cluster maximizes available resources for research requiring both HPC and HTC solutions. It comprises a master node running dual Xeon Gold 6130 2.10GHz CPUs with 192GB of 2666 MHz ECC memory, dual 480GB SSD drives configured in RAID 1, and 38TB of usable SSD storage in a RAID 6 array. There are 12 standard compute nodes, each with dual Xeon Gold 6130 2.10GHz CPUs, 192GB of 2666 MHz ECC memory, and a 480GB SSD. There is one GPU node for software that takes advantage of CUDA-compiled code and GPU co-processing; it mirrors the specs of a standard compute node but adds two Nvidia V100 GPU cards featuring 32 GB HBM2 memory and 5120 stream processors.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
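The usable-storage figure follows from standard RAID arithmetic: RAID 6 reserves two disks' worth of parity, so usable capacity is (N - 2) times the disk size. The disk count and size below are hypothetical; the award states only the ~38TB usable figure.

```python
# RAID 6 usable capacity: two disks' worth of space go to parity.
def raid6_usable_tb(num_disks: int, disk_tb: float) -> float:
    assert num_disks >= 4, "RAID 6 needs at least four disks"
    return (num_disks - 2) * disk_tb

print(raid6_usable_tb(12, 3.84))   # 38.4 TB, consistent with the stated ~38TB
```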
96
1827050Design
CC* Network Design: Transforming Arcadia's Networking Capability, Enhancing for Innovation to Grow Research Leaders in a Technology-driven World
7/1/2018
Rashmi Radhakrishnan
PAArcadia University
Arcadia is a small private university whose existing computing infrastructure constrains the productivity of faculty in Bioinformatics, Computer Science, Chemistry, and Physics who are conducting data-intensive research. Specifically, the current infrastructure impedes researchers' ability to efficiently and securely access, share, or analyze large data sets with collaborators at other institutions. To address these research and education needs, a collaborative team representing key university faculty and technologists at Arcadia is creating a Science DMZ with a data transmission network capable of 10Gbps connectivity (more than 10 times faster than current speeds) to the Keystone Initiative for Network Based Education and Research's (KINBER) PennREN network.<br/><br/>The project's objectives are to: (1) provide high-performance, secure Science DMZ network capabilities for sharing of large datasets and cloud-based education; (2) eliminate technical barriers for faculty engaged in data-intensive projects through a dedicated, friction-free path to Internet2, PennREN, and other high-performance computing and data resources; (3) leverage authentication and authorization mechanisms to support faculty through the InCommon Federation; and (4) enable future scientific possibilities and unleash innovation for students and faculty researchers. <br/><br/>Arcadia is currently considering incorporating a data analytics requirement into its general curriculum and leveraging the newly developed cyberinfrastructure to enable cloud-based opportunities, distance learning, and research on a global scale. This opportunity is supporting greater faculty and student analytical scholarship by forming a frictionless environment built to innovate and thrive in a technology-driven world.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
97
1925641Infrastructure
CC* Networking Infrastructure: Claflin Research Network
8/1/2019Joey BrennSCClaflin University
This award supports Claflin University's research capabilities through a coordinated information technology expansion plan called the "Claflin Research Network". Claflin University is building a 10Gbps Science DMZ supporting Big Data research projects through a re-designed campus border prioritizing science data flows. This new component enables the transfer of very large datasets over long distances in a friction-free path. Claflin's collaborators include the Medical University of South Carolina (MUSC), Clemson University, and many domestic institutions, as well as several international institutions. This project greatly increases Claflin's ability to share data between research collaborators at geographically dispersed institutions around the world.<br/><br/>This project makes Claflin more competitive and a stronger research partner within the innovation ecosystem. In addition, it allows for greater research and collaboration possibilities for Claflin's undergraduate and graduate student researchers, giving them access to data and technology not previously available at the institution.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
98
1925484Infrastructure
CC* Networking Infrastructure: Building a Science DMZ for Data-intensive Research and Computation at the University of South Carolina
7/1/2019Jorge CrichignoSC
University of South Carolina at Columbia
The University of South Carolina (UofSC) is establishing a new network, namely a Science DMZ, operating at 100 Gbps. The Science DMZ supports current research moving terabyte-scale data between UofSC and national laboratories (e.g., Argonne, Fermi, Oak Ridge, Savannah River, Los Alamos), university collaborators, and the national network of supercomputer centers (XSEDE). The project serves the national interest, as it addresses the need to connect UofSC to the national "cyber-highway" system to share big science data, thereby promoting collaboration and national competitiveness, aligned with NSF's mission. The new cyberinfrastructure also permits researchers to exchange large datasets with collaborators geographically distributed across the world. Examples include nuclear physics results from the Paul Scherrer Institute in Switzerland and observation files from the Cryogenic Underground Observatory for Rare Events (CUORE) in Italy. <br/><br/>The elements of UofSC's Science DMZ include: i) data transfer nodes (DTNs), built for sending/receiving data at high speed over wide-area networks; ii) high-throughput, friction-free paths connecting DTNs, instruments, storage devices, and computing systems; iii) measurement devices to monitor end-to-end paths; and iv) security policies tailored for high-performance environments. The proposed Science DMZ substantially increases the bandwidth to compute and XSEDE resources, permitting their use in digital image correlation, semiconductor material development, DNA/RNA sequencing, and other areas. Additionally, UofSC hosts key national resources and centers, including the first U.S.-deployed Time of Flight-Inductively Coupled Plasma-Mass Spectrometer, NOAA's National Estuarine Research Reserves database, the McNair Aerospace Center, and the Baruch Institute. These resources are now more efficiently used by researchers and collaborators.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
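Science DMZ designs like this pair friction-free paths with active measurement of end-to-end throughput. The sketch below drives one common check, an iperf3 throughput test, from Python; the target hostname is a placeholder, and it assumes an iperf3 server is already listening there.

```python
# Minimal sketch: measure end-to-end throughput with iperf3 and parse its JSON.
import json
import subprocess

HOST = "dtn.example.edu"   # placeholder data transfer node

out = subprocess.run(
    ["iperf3", "-c", HOST, "-P", "4", "-J"],   # 4 parallel streams, JSON output
    capture_output=True, text=True, check=True,
).stdout

report = json.loads(out)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"measured throughput: {bps / 1e9:.2f} Gbps")
```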
99
1827127Infrastructure
CC* Networking Infrastructure: Bulldog Connectivity and Research
8/1/2018Damian ClarkeSCSouth Carolina State University
The Bulldog Connectivity and Research Network (BCR net) project at South Carolina State University (SCSU) creates a network infrastructure to accommodate increasing research activities in STEM and non-traditional areas such as the library, visual arts, the museum, and the planetarium. The campus enterprise network, plagued with bandwidth bottlenecks, supports only administrative and academic needs, which compete for simultaneous use of limited resources. Disparate silos of unconnected computing resources have supported research computing in research labs, with little scalability to evolving research needs. The BCR net provides a frictionless, redundant design dedicated to research computing that delivers high-speed connectivity. Its design focuses on isolating high-throughput research functions through data paths on operationally efficient high-speed connections to research partners.<br/><br/>BCR net, separate from the campus enterprise network, gives SCSU researchers access to on-demand, high-bandwidth Science DMZ connections to sharable storage of research data connected to high-throughput data transfer nodes with secure digital connections. Transfers of large datasets and instructions will be enabled under projects such as the Robotically Controlled Telescope Consortium, the NSF Partnership in Observational and Computational Astronomy (POCA), and the NSF SCI-STEPS INCLUDES Project. Other areas affected include: the Physics and Chemistry computational labs for astrophysics and cancer research; the digital media lab; the historical collection and archiving of the Orangeburg Massacre collection; the applied radiation sciences lab; the Nuclear Engineering program's reactor simulation lab; and the computer science security lab.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
100
1659300Data
CC*Data: National Cyberinfrastructure for Scientific Data Analysis at Scale (SciDAS)
2/15/2017Frank FeltusSCClemson University
Scientific discovery is increasingly dependent on huge datasets that require computing at unprecedented scale. Laboratory computers and spreadsheets simply cannot handle the data flowing from modern measurement devices, such as DNA sequencers. Scientific experiments now require understanding of both the underlying science and the cyberinfrastructure (CI) ecosystem to design and execute the necessary computations. Fortunately, significant and strategic support from the public and private sectors is creating a distributed computational ecosystem at the national level to help meet the computational demands of large datasets. This project, Scientific Data Analysis at Scale (SciDAS), is designed to improve the flexibility of and accessibility to national resources, helping researchers more effectively use a broader array of these resources. SciDAS is developed using large-scale systems biology and hydrology datasets but is extensible to many other domains.<br/><br/>On a technical level, SciDAS federates access to multiple national CI resources including NSFCloud, the Open Science Grid, the Extreme Science and Engineering Discovery Environment (XSEDE v2.0), petascale supercomputers such as Comet, and campus resources. Central to SciDAS is the use of ExoGENI dynamic networked infrastructure to enable Layer-2 connectivity and data movement between these resources and data repositories. SciDAS relies on the integrated Rule-Oriented Data System (iRODS), enhanced with software-defined networking (SDN) capabilities, to support network-aware data management decisions and efficient use of network resources. The distributed and scalable nature of both the data-sharing and compute infrastructure is exploited to optimize for compute and data locality, boosting the performance of workflows and scientific productivity. Scientific discovery use cases in systems biology and hydrology will drive cyberinfrastructure development at the petascale level while simultaneously generating useful results for domain scientists.
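Since iRODS is central to SciDAS's data management, the sketch below shows how data moves through an iRODS zone using the python-irodsclient package. The host, zone, credentials, and paths are placeholders, not SciDAS's actual deployment.

```python
# Minimal sketch: put and get a data object through an iRODS zone.
from irods.session import iRODSSession

with iRODSSession(host="irods.example.org", port=1247,
                  user="researcher", password="secret",
                  zone="sciZone") as session:
    # upload a local result file into the shared data grid
    session.data_objects.put("results.csv", "/sciZone/home/researcher/results.csv")

    # read it back through the same logical path
    obj = session.data_objects.get("/sciZone/home/researcher/results.csv")
    with obj.open("r") as f:
        print(f.read(100).decode("utf-8", "replace"))
```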