Read more on "Server Hardware Spotlight" »


Microservers Are a Go!

You may have heard of microservers, but just in case you haven't: a microserver is a very small server with reduced capability and fewer options relative to a full server (pretty much what the name implies). While they typically have slower processors, less memory per core, and less disk per core, a microserver is nonetheless a complete server solution.

For the most part, current microservers have a single-core processor (sometimes dual-core or even quad-core), a basic amount of memory, one or two NICs, and at least one hard drive of some size and capacity. What makes a microserver unique is the focus on low power. Instead of 95-130W per processor (16.25W per core for an eight-core part), a microserver can get as low as 5-20W for the entire server, including memory, network and storage. What this means is that a complete microserver can use less power than a single core of a larger server.
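To put those figures side by side, here is a minimal Python sketch. The wattages are the ranges quoted above, not measurements, and the eight-core assumption for the conventional part is implied by the 130W / 16.25W-per-core numbers:

```python
# Compare power per core using the figures quoted in the text:
# 95-130 W per conventional processor (take 130 W across 8 cores)
# vs. 5-20 W for an entire quad-core microserver node.

def watts_per_core(total_watts: float, cores: int) -> float:
    """Average power drawn per core."""
    return total_watts / cores

conventional = watts_per_core(130, 8)   # per-socket figure, CPU only
micro = watts_per_core(20, 4)           # whole node: CPU, memory, NIC, disk

print(f"conventional CPU: {conventional:.2f} W/core")
print(f"microserver:      {micro:.2f} W/core")
```

Even at the top of its power range, the whole microserver node lands well under the conventional chip's per-core budget.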

As a result of the low power draw, microservers can be packed very, very densely. It is not unusual to fit several thousand servers into a single 42U rack. A number of microserver solutions have been shown by various vendors; most are really "demo units," although some are available for commercial purchase.

Since ARM is somewhat new to the server world, the software ecosystem is not quite up to speed. Many of the systems listed below are really units intended to help that ecosystem mature rapidly. Plus, they can be used to understand how microservers might fit into a server farm or data center. The list below is a quick summary of systems based on recent press releases and various articles.

  • Dell Copper
    • Sled (like a blade) with four Marvell Armada XP MV78460 SoCs (System on a Chip), each a quad-core 32-bit ARM processor. This means 16 cores per sled. The servers are 32-bit, and the processor runs at 1.6 GHz according to Marvell's specifications.
    • Up to 12 sleds fit into a 3U chassis that provides power for all of them. This is a total of 48 servers in 3U, or 192 cores.
    • One DDR3 UDIMM VLP slot per server running at 1333 MHz. Up to 8GB of memory per server (2GB per core).
    • There is a GigE link from each server (4 servers per sled). The sled has 2 GigE ports on the front and an internal network connecting to a backplane, so there is some sort of switch on the sled.
    • One 2.5" drive attached to each server (4 drives per sled).
    • Total chassis power draw is 750W. Each server then draws about 15W.
    • Totals:
      • 48 quad-core servers per 3U (192 cores). This is the same as 16 servers per U, or 64 cores per U.
      • 15W per server or about 4W per core.
  • Dell Zinc
    • A Calxeda EnergyCore ECX-1000 card is the basic building block. It carries four SoCs, each a quad-core 32-bit ARM Cortex-A9 processor. These processors come in a range of speeds (1.1 GHz to 1.4 GHz), but Dell does not specify which speed is used.
    • The sled (like a blade) has 3 rows of 6 slots, each of which takes either a Calxeda card or a storage card. Filling every slot with Calxeda cards yields 72 servers (4 servers per card, 6 cards per row, 3 rows), but those servers have no local drives and require some sort of network-based storage. Alternatively, slots can hold cards carrying drives instead of Calxeda cards. The example Dell mentioned has 1 row of 6 Calxeda cards (a total of 24 servers) plus two rows of storage cards (2 drives per card), resulting in 24 2.5" drives. You can then map one drive to each server.
    • Single DIMM slot per server (presumably up to 8GB per server)
    • The servers are linked by Calxeda's internal network fabric
    • Up to 5 sleds per 4U server node (4 is more likely)
    • One 2.5" drive can be mapped to each server via the storage cards, as explained previously
    • Totals:
      • 288 nodes per 4U chassis (4 sleds but no drives - uses network storage)
      • 2,880 nodes per 42U rack
  • Dell Iron
    • No real details released. Data below is based on link above.
    • X-Gene 64-bit ARM processors (Applied Micro)
    • C5000 chassis
    • 6 servers per board. 12 boards per chassis. 72 servers per 3U
    • Totals:
      • 1,008 nodes per 42U rack (estimate based on ArsTechnica article)
  • HP Moonshot
    • A Calxeda EnergyCore ECX-1000 card is the basic building block. It carries four SoCs, each a quad-core 32-bit ARM Cortex-A9 processor. These processors come in a range of speeds (1.1 GHz to 1.4 GHz), but HP does not specify which speed is used.
    • Each card hosts 4 servers (each server is quad-core).
    • Single DIMM slot per server (presumably up to 8GB per server).
    • Half-width, 2U chassis with 3 rows of 6 cards. 4 servers per card for a total of 72 servers per chassis.
      • Each chassis has 4 10GigE uplinks, which come off internal EnergyCore Fabric.
    • SL6500 accommodates 4 chassis for a total of 288 servers per 4U (each server is quad-core).
    • Totals:
      • 288 nodes per 4U chassis
      • 2,880 nodes per 42U rack (ten 4U chassis)
      • A half-rack of 1,600 servers draws 9.9 kW (6.1875W per server) and costs $1.2M ($750 per server).
  • HP Moonshot - "Gemini" - Atom processors
    • Presumably a similar layout to "Moonshot" but undetermined at this time.
    • Centerton Atom processors (S1200):
      • ~10W
      • 64-bit
      • 2 cores. Between 1.6 and 2.13 GHz (6.1W to 10W). 512KB L2 cache
      • ECC memory (SODIMM DDR3L at 1067 MHz; UDIMM and SODIMM DDR3 at 1333 MHz)
      • Supports Hyperthreading
  • Boston Viridis
    • Uses Calxeda EnergyCore SoC
    • 48 nodes per 2U
    • 300 W per chassis (6.25W per server). An article at InsideHPC points out it used 8W per server when running STREAM (7.9W for Linpack).
    • Up to 24 connected SATA devices
    • Fully loaded 2U chassis, 192GB memory (4GB per server) and 24 disks costs $50,000 ($1,041.67 per server)
    • Totals:
      • 48 servers per 2U (up to 4-core per server)
      • 1,008 servers in 42U
  • Quanta S900-X31A
    • 48 Atom S1200 servers in 3U
      • Two Atom servers per sled; three rows of 8 sleds, for 24 sleds or 48 servers total
    • Dual-core processors up to 1.6 GHz
    • Up to 8GB of memory per node (2 nodes per sled)
    • GigE port per node (2 nodes per sled)
    • One 2.5" drive per node (2 nodes per sled)
    • Totals:
      • 672 servers per 42U rack (48 servers per 3U, fourteen 3U chassis per rack)
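The per-rack totals above can be sanity-checked with a few lines of Python. The chassis figures are the ones quoted in the list (per-chassis wattage is omitted where the vendor didn't give one, and only whole chassis are counted per rack):

```python
# Recompute rack density from the chassis specs quoted above.
# Format: (servers per chassis, chassis height in U, chassis watts or None).
systems = {
    "Dell Copper":      (48, 3, 750),
    "HP Moonshot":      (288, 4, None),
    "Boston Viridis":   (48, 2, 300),
    "Quanta S900-X31A": (48, 3, None),
}

RACK_U = 42

for name, (servers, height_u, watts) in systems.items():
    chassis_per_rack = RACK_U // height_u          # whole chassis only
    servers_per_rack = chassis_per_rack * servers
    line = f"{name}: {servers_per_rack} servers per {RACK_U}U rack"
    if watts is not None:
        line += f", {watts / servers:.1f} W per server"
    print(line)
```

This reproduces the Viridis (1,008) and Quanta (672) rack totals directly from the chassis specs.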

While maybe not an enterprise solution, the epitome of a low-power server is the Raspberry Pi (RPi). This is a rather simple single-core server, but it has a GPU, sound, networking (Fast Ethernet), a slot for an SD card, two USB ports, an HDMI output and 512MB of memory. All of this can be had for about $30.

Sure, the processor isn't that fast (a 700 MHz ARM11 core), and there is only Fast Ethernet, but you get a complete server that costs less than a single processor or even a memory DIMM in a normal server (around $30 - $50). Plus it is positively tiny: a Raspberry Pi is about the size of a credit card. It's a little thicker to accommodate the HDMI and USB ports, but overall it's only about 1" thick.

Imagine traveling with your own server. It would be lots of fun to go through airport security and put one of these in a bucket by itself. It might prompt a question or two from the TSA but would hopefully not require any further investigation.

This article was originally published on February 11, 2013
