
Hardware Today: The State of Grid Computing


Long associated with scientific and technical applications, grid computing is gaining acceptance in the commercial sector, thanks in part to service-oriented architecture.

“Grid technologies have long been used for scientific and technical work, where dispersed computers are linked to create virtual supercomputers that rapidly process vast amounts of information,” said Sara Murphy, program manager for grid computing at Palo Alto, Calif.-based HP.

“Now the commercial enterprise is moving to an IT model based on a service-oriented architecture (SOA) where grids can be used as the technology infrastructure,” Murphy said. “In some cases, IT is being recast as an internal utility for enterprise-wide use, and the deployment vehicle is grid.”

Patrick Rogers, vice president of products and partners at Network Appliance of Sunnyvale, Calif., also ties grids into the SOA framework.

“The use of grid computing appears to be increasing in popularity, particularly in the context of large database applications and high performance computing (HPC),” he said. “The ability to create a network-based virtual computing resource is viewed as an enabler of service oriented architectures in the enterprise as well as compute intensive applications in the HPC market.”

Three Types of Grid
Grids, however, should not be viewed as a single-faceted concept. There are, in fact, three primary methods of grid computing.

  1. Linking Data Centers
    This approach is used mainly by research institutions to share their facilities for high-end applications. For example, the National Science Foundation sponsors the TeraGrid, which uses high-speed networking to link 16 compute resources at universities and laboratories around the country. Through TeraGrid, users can access 102 Teraflops of processing power, more than 15 Petabytes of online and archival storage and more than 100 discipline-specific databases.

    This concept is finding favor in the enterprise. HP, for example, has developed technology to assist in the deployment of commercial grids.

    “In multinational companies with data centers in many locations, efficient IT utilization is a serious challenge,” Murphy said. “Grid is not a packaged product, but rather a set of components, technologies and services pulled together.”

    In addition to grid-enabled servers and grid management software, the company offers HP Flexible Computing Services. This, according to Murphy, makes it easier for customers to reap the benefits of a utility approach to enterprise-scale IT. Customers gain direct access to data center computing via a grid-type architecture. In addition, HP Grid Consulting Services provide a single point of accountability from planning through migration and transition to ongoing maintenance and optimization of the grid.

    “HP grid solutions allow customers to provision applications and allocate capacity across geographically and organizationally dispersed teams as business needs change,” Murphy said. “This ability to handle peaks and troughs in demand enables organizations to take advantage of underutilized resources, rapidly deploy resources for new projects, and improve time-to-market for new products.”

  2. Capturing Unused Cycles on PCs
    PC CPUs typically run at less than 10 percent utilization. Link thousands of them together and you assemble a supercomputing juggernaut. The largest such project is SETI@home (Search for Extraterrestrial Intelligence). Hosted by the University of California at Berkeley, SETI@home harnesses the combined power of hundreds of thousands of PCs to analyze radio signals for evidence of extraterrestrial life. It runs around the clock at an average of around 180 Teraflops.

    These types of grids can also be used inside the firewall to harness idle workstations. Platform Computing of Markham, Ontario, has software for managing server clusters, but can also include Windows PCs as part of the infrastructure. This allows companies to run large-scale simulations when the PCs aren’t being used.

    Pratt & Whitney, a division of United Technologies Corp. of East Hartford, Conn., uses Platform’s LSF software to model jet engines and gas turbines rather than relying on physical testing of the hardware. A single physical test could cost a million dollars and take months to complete. By installing this software on 150 servers and 5,000 workstations at five locations, Pratt & Whitney runs such simulations overnight.

  3. Renting Processing Power
    Sun Microsystems of Santa Clara, Calif., for example, offers the Sun Grid Compute Utility, which allows customers to rent additional processing power on a per-hour basis. CDO2 Ltd, a London-based firm that produces software that allows banks, hedge funds and investment firms to run complex financial risk simulations on their portfolios, was one of the early adopters.

    “Before using Sun Grid, we had to deploy our software in various environments and geographical locations to meet customers’ needs in-house,” CDO2 director Gary Kendall said. “We now have customers around the world running on Sun Grid without having to do any local software installation ourselves.”

    Offering the option of harnessing Sun’s computers has expanded CDO2’s potential customer base from the top 100 financial services companies to the top 1,000. Current clients still run the software in-house, but any new customers are set up on the grid.

    “We suggest our customers start with a relatively small amount of compute power, say 10 CPUs per hour, and then build up as their businesses grow,” Kendall said. “They can always access increased power, to say 100 CPUs per hour, when they need it.”
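The cycle-scavenging model in point 2 above, where thousands of mostly idle PCs pull independent work units from a coordinator, can be sketched in a few lines. The following is an illustrative Python simulation that uses threads as stand-ins for networked PCs; the names (`WorkUnit`, `scavenge`, `run_grid`) are invented for this sketch, not taken from SETI@home or Platform LSF.

```python
# Illustrative sketch of cycle scavenging: a coordinator splits a
# large job into independent work units, and each otherwise-idle
# machine pulls a unit, computes it, and reports the result back.
from dataclasses import dataclass
from queue import Empty, Queue
from threading import Thread


@dataclass
class WorkUnit:
    start: int
    stop: int


def analyze(unit: WorkUnit) -> int:
    # Stand-in for real work, e.g. scanning one slice of signal data;
    # here we just sum a range of integers.
    return sum(range(unit.start, unit.stop))


def scavenge(queue: Queue, results: list) -> None:
    # Each idle PC runs this loop: pull a unit, process it, report back.
    while True:
        try:
            unit = queue.get_nowait()
        except Empty:
            return  # no work left; the PC goes back to normal duty
        results.append(analyze(unit))


def run_grid(total: int, unit_size: int, workers: int = 4) -> int:
    queue: Queue = Queue()
    for start in range(0, total, unit_size):
        queue.put(WorkUnit(start, min(start + unit_size, total)))

    results: list = []
    threads = [Thread(target=scavenge, args=(queue, results))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)  # completion order doesn't matter for a sum
```

The property that makes this model work, for SETI@home and LSF alike, is that the work units are independent: machines can join, drop out, or fail without coordinating with one another.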

Storage Grids
Grid technology is also being embraced by storage innovators. NeoPath Networks of Mountain View, Calif., released NeoPath File Director, an appliance that provides network file management capabilities. The File Director is not a file server itself, and no user data is stored on it. Instead, it creates a virtualized namespace and acts as a router for NFS and CIFS traffic between clients and file servers, so clients access files in a consistent manner regardless of where they physically reside on the network.
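The namespace-virtualization idea is simple to sketch: the appliance keeps a mount table mapping virtual paths to physical file servers and routes each request by longest-prefix match. The mount-table entries and filer names below are hypothetical, for illustration only, and do not reflect File Director's actual implementation.

```python
# Hypothetical sketch of namespace virtualization: clients see one
# consistent tree; the router maps each virtual path to whichever
# file server actually holds the data.
MOUNT_TABLE = {
    "/corp/engineering": ("filer-01", "/vol/eng"),
    "/corp/finance": ("filer-02", "/vol/fin"),
    "/corp": ("filer-03", "/vol/misc"),
}


def resolve(virtual_path: str) -> tuple:
    """Route a client's virtual path to (server, physical path)."""
    # Longest-prefix match, so /corp/engineering wins over /corp.
    for prefix in sorted(MOUNT_TABLE, key=len, reverse=True):
        if virtual_path == prefix or virtual_path.startswith(prefix + "/"):
            server, root = MOUNT_TABLE[prefix]
            return server, root + virtual_path[len(prefix):]
    raise FileNotFoundError(virtual_path)
```

Because clients address only the virtual tree, data can migrate between servers with nothing but a mount-table update; the paths users see never change.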

“Virtualization is a key component of building grids,” Rajeev Chawla, founder and executive vice president of products at NeoPath, said. “The idea is to shift file management intelligence into the network.”

NetApp, too, is muscling in on the grid act. Its Data ONTAP 7G operating system is being used for commercial grid applications built on databases such as Oracle.

“This is a great fit for commercial grid applications, as it enables both increased performance and higher levels of capacity utilization in a grid environment,” Rogers said. “NetApp storage is integrated with Oracle Enterprise Manager Grid Control, which enables management of NetApp storage from an Oracle console.”

Grid Future
Rogers anticipates increased adoption of grids in technical applications such as rendering animated motion pictures or analyzing geologic data, as well as commercial applications using enterprise databases.

“As grids become easier to deploy and manage, they are being integrated with mainstream and mission-critical application software,” he said. “They will become more common across all types of organizations.”

HP’s Murphy sees a similar trend. She highlights the broad adoption of the technology in financial services for risk analysis, and by R&D groups in pharmaceutical, manufacturing, and oil & gas.

“IT departments in these same companies are beginning to adopt some of the technologies that have been pioneered by their R&D groups,” she said.
