OCZ Deneva 2 Series SSDs to Be Deployed in Scientific Computing Server Project
OCZ Technology Group, Inc. announced that its Deneva 2 Series SSDs will serve as the storage device of choice in a pending "Data-Scope" research project at The Johns Hopkins University (JHU) to build servers for scientific data processing.
This initiative to maximize data processing power is spearheaded by Dr. Alexander Szalay, Alumni Centennial Professor in the university's Department of Physics and Astronomy and Director of the JHU Institute for Data Intensive Engineering and Science.
With the goal of creating an affordable, powerful computational environment that can serve as a blueprint for future science applications, the JHU project comprises nearly one hundred servers that combine hundreds of OCZ Deneva 2 SSDs with conventional hard disk drives in a two-tier storage and computing architecture. These powerful yet inexpensive systems also expose students and researchers to leading-edge technology at an early stage.
"We are extremely pleased to collaborate with The Johns Hopkins University and contribute some of our technical expertise to projects that are on the cutting edge," said Dr. Michael Schuette, V.P. of Technology Development at OCZ Technology. "The Data-Scope project using OCZ Deneva 2 SSDs as key components is an important step towards revolutionizing scientific computing, and we are proud to be a part of it."
One of several projects on the Data-Scope is a digital "multiverse" that will contain the largest database of astronomical objects ever assembled. The project will allow any astronomer in the world to perform data analyses through remote access to the entire database, without having to download tens to hundreds of terabytes of data over the Internet. Similar projects are in progress to analyze hundreds of terabytes of genomic data, as well as petabyte-scale numerical simulations in turbulence, cosmology, and ocean circulation: all "Big Data" problems that do not fit the traditional models of scientific computing.
The completed Data-Scope will drive a new approach to science, in which discovery is driven by the analysis of large data sets. To succeed, scientists must be able to build statistical aggregations over petabytes of data while simultaneously exploring the smallest details of the underlying collections. The system's unique advantage is its ability to function both as a "microscope" and a "telescope" for data, backed by 6 petabytes of storage capacity, 500 gigabytes per second of sequential I/O, and 20 million IOPS. Beyond raw bandwidth, SSDs also have a smaller operating footprint than traditional HDDs, greatly reducing power consumption without sacrificing IOPS performance.
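Spreading the aggregate figures quoted above across the "nearly one hundred servers" gives a rough sense of what each node contributes; the per-server numbers below are back-of-the-envelope estimates, not published specifications.

```python
# Back-of-the-envelope division of the quoted Data-Scope aggregates
# across roughly 100 servers (illustrative estimates only).
num_servers = 100                 # "nearly one hundred servers"
total_capacity_pb = 6             # petabytes of storage
total_seq_io_gbs = 500            # GB/s sequential I/O
total_iops = 20_000_000           # aggregate IOPS

capacity_per_server_tb = total_capacity_pb * 1000 / num_servers  # ~60 TB
seq_io_per_server_gbs = total_seq_io_gbs / num_servers           # ~5 GB/s
iops_per_server = total_iops / num_servers                       # ~200,000

print(capacity_per_server_tb, seq_io_per_server_gbs, iops_per_server)
```

In other words, each node would need on the order of 60 TB of capacity, 5 GB/s of sequential throughput, and 200,000 IOPS, which is the kind of profile a bank of local SSDs paired with HDDs can plausibly deliver.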
Leveraging General-Purpose Computing on GPUs (GPGPU) for scientific and engineering workloads, the system streams random-access data directly from the SSDs into co-hosted GPUs over the system backplane. This architecture has two major benefits: the SSD tier eliminates most access latency in the storage hierarchy, and co-locating storage and processing on the same server eliminates the network bottleneck.
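The co-located storage/compute pattern described above can be sketched minimally: read large sequential chunks from a locally attached drive and hand each chunk straight to a processing step, so no data ever crosses the network. The file path, chunk size, and the checksum stand-in for "processing" are illustrative assumptions, not part of the actual Data-Scope pipeline (where the processing step would run on the co-hosted GPU).

```python
import zlib

# 64 MiB sequential reads suit SSD streaming; size chosen for illustration.
CHUNK_SIZE = 64 * 1024 * 1024

def process_local_file(path, process=zlib.crc32):
    """Stream a file from local storage chunk by chunk, processing in place.

    `process` is a stand-in for the real compute step; in a Data-Scope-style
    node it would dispatch the chunk to a co-hosted GPU instead.
    """
    results = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            results.append(process(chunk))
    return results
```

Because the loop only touches a local block device, throughput is bounded by the drive and backplane rather than by any network link, which is exactly the bottleneck the co-location design removes.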
"The JHU Data-Scope project, which is scheduled to deploy in early spring, is the ideal showcase for the strengths of SSDs in efficiently and reliably processing multiple petabytes data," said OCZ.