Microsoft, Multi-core and the Data Center

By Drew Robb
Posted Feb 1, 2007


Dual-core and multi-core are subjects normally discussed in reference to Intel, AMD, Sun or IBM. Now, Microsoft is on that list as well. The software giant has created a large multi-core unit at its Redmond, Wash. campus.

Multi-core and Microsoft? Yep. The software company has created a multi-core unit to ensure its products keep pace with the changing needs of the hardware on which they run.

"We are at the dawn of a new era of processor design," says Brad Waters, multi-core virtual team leader at Microsoft. "This represents tremendous opportunities in terms of software tools, as well as significant challenges."


He predicts future advances in performance will no longer come via increases in processor speed. The days of the GHz race are over. Instead, progress will come as cores are added.

Microsoft's interest in all this, of course, is from a software perspective. With hardware vendors boldly forging ahead where none have dared go before, the software guys must keep up — or risk becoming obsolete.

Remember what happened when 64-bit chips came out? Most OSes and applications were designed for 32-bit and couldn't function in the new environment. New OSes and apps had to be developed, or code extensively updated, for them to work with both 32-bit and 64-bit chips.

The same thing is happening with multi-core, although it may not be as apparent. More than 50 percent of processors shipped now are in the multi-core category. Dual-core processors may not have torn apart the software landscape at this point, but don't expect that to last long.

"Most client applications don't scale too well," says Waters. "Fortunately, server applications scale a lot better."

He reckons server applications may be able to scale as far as eight cores before they run into serious trouble. That means Intel's recent unveiling of quad-core server chips will not rock the boat too much — yet. But starting with the next generation, problems may begin to crop up.
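Waters' eight-core ceiling is consistent with Amdahl's law, which caps parallel speedup by the serial fraction of a workload. Here is a minimal sketch; the 90 percent parallel fraction is an assumed, illustrative figure, not one from Microsoft:

```python
def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Amdahl's law: speedup is limited by the serial (non-parallel)
    fraction of the work, no matter how many cores are added."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A hypothetical server workload that is 90% parallelizable:
for n in (2, 4, 8, 16):
    print(n, "cores ->", round(amdahl_speedup(n, 0.90), 2), "x")
```

Under this assumption, 8 cores yield roughly a 4.7x speedup, but doubling again to 16 cores buys only about 1.4x more — which is exactly where the "serious trouble" begins.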

Breaking the Many-Core Barrier

As the lingo develops, the term multi-core has taken on new meaning. Whereas it once meant any number of cores beyond one, it now encompasses two to eight cores. Beyond eight cores, says Waters, is being called "many core." He believes many-core chips will not be on the market before 2010.

That may be the case for AMD and Intel. But Sun seems likely to be first to break the many-core barrier — and long before the end of the decade. Sun's had 8-core server chips out since 2005 in the form of its UltraSPARC T1 processor (known under the codename Niagara). Its 8-core/4-thread-per-core architecture leads the way with its ability to simultaneously execute 32 threads. This architecture is currently available in servers such as the Sun Fire T1000 and Sun Fire T2000.

"The next generation, Niagara 2, will be out by the second half of 2007," says Warren Mootrey, senior director of volume SPARC systems at Sun. "The Niagara 2 will double the number of threads from 32 to 64, add even better encryption and keep the power consumption at 72 watts."

Despite the expansion in the number of threads, Niagara 2 maxes out as an 8-core processor. It uses a 65 nm manufacturing process and is expected to have twice the throughput of the current UltraSPARC T1 processor.

To cope with the 32 threads of the current Niagara, Sun had to redesign its Solaris OS (version 10). It appears that the same OS will be able to cope with the rigors of the next generation. Beyond that, however, yet another redesign is likely.

That's probably why Microsoft has formed its multi-core team. Rather than risk the possibility of its OSes breaking under quad, eight or more cores, it is taking steps to ensure it isn't caught asleep at the wheel. As a result, Waters is surprisingly well-briefed on the pluses and minuses of both the AMD and Intel multi-core architectures.

He explains, for example, that each socket on an AMD dual-core processor has local memory that both cores share. A HyperTransport bus takes data quickly from one socket to another. With Intel, on the other hand, each socket has four logical processors on two cores. The memory is connected by means of a front-side bus.

"The Intel memory design is slower than AMD as it's not directly connected," says Waters.
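Waters' point can be illustrated with a back-of-the-envelope latency model. The numbers below are assumed for illustration only, not measured vendor figures, and the function names are hypothetical:

```python
def numa_avg_latency(local_ns: float, remote_ns: float,
                     local_fraction: float) -> float:
    """AMD-style NUMA: each socket's integrated controller serves its
    local memory quickly; accesses to the other socket's memory pay a
    HyperTransport hop penalty."""
    return local_fraction * local_ns + (1 - local_fraction) * remote_ns

def fsb_latency(bus_ns: float) -> float:
    """Intel-style (circa 2007): every memory access crosses the shared
    front-side bus to a separate memory controller."""
    return bus_ns

# Assumed figures: 60 ns local, 100 ns remote, 80% of accesses local,
# versus a flat 90 ns over the front-side bus.
print(round(numa_avg_latency(60, 100, 0.8), 1))  # -> 68.0
print(fsb_latency(90))                           # -> 90
```

Under these assumptions the directly connected design wins on average latency, matching Waters' observation — and the gap widens the more an OS keeps each thread's data on its own socket.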

The Road Ahead

What does all this mean to Vista and future server OSes? Existing client applications, says Waters, are largely sequential. They aren't designed for multi-core environments. That is about to change, however.

Beyond the obviously multi-threaded world of gaming, which tends to lead the way in harnessing every ounce of processing power, applications are evolving that use multiple cores and threads. Various speech technologies, such as voice recognition, are at the forefront.

"Current applications will become legacy within five years as new, richer applications are developed," says Waters. "This poses particular problems with server applications, however, as it is more difficult to improve their response times while maintaining concurrency."

Within the Microsoft Office portfolio, for example, Excel 12 has been rewritten to take into account modern chip designs. Waters reports response times on Excel 11 got slightly worse using dual-core chips, due to synchronization and disk I/O issues. With version 12, these issues have been ironed out, and a 1.9x gain in response is achieved.
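The kind of rewrite Waters describes amounts to splitting independent recalculations across cores while avoiding shared-state synchronization. A minimal, hypothetical sketch using Python's multiprocessing — not Excel's actual recalculation engine:

```python
from multiprocessing import Pool

def recalc(cell_input: int) -> int:
    # Stand-in for an expensive, independent cell formula.
    return sum(i * i for i in range(cell_input))

if __name__ == "__main__":
    inputs = [20_000] * 8  # eight independent "cells"
    # Two worker processes, as on a dual-core chip; because the tasks
    # share no state, there is nothing to synchronize on.
    with Pool(processes=2) as pool:
        results = pool.map(recalc, inputs)
    print(len(results))  # -> 8
```

The flip side, as the Excel 11 numbers showed, is that when tasks contend on locks or disk I/O, spreading them across cores can make response times worse, not better.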

On the newest version of Outlook, software developers have been working to improve startup. A feature known as super-fetch has been built into Windows Vista. Super-fetch pre-reads file data into memory using low-priority I/O. It uses current and historical system usage data to predict what the user may want to see next.

"This 'psychic cache' gets what you want before you ask for it," says Waters. "In tests, it cut some tasks down from 7 seconds to 1.4 seconds."
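The idea behind such a predictive cache can be sketched as a successor-frequency table: remember which file tends to be touched after which, and pre-read the most likely next one. This is a toy illustration, not Vista's actual SuperFetch algorithm, and the file names are invented:

```python
from collections import Counter, defaultdict

class PrefetchPredictor:
    """Toy history-based predictor: for each item, count which items
    have followed it, and predict the most frequent successor."""

    def __init__(self):
        self.successors = defaultdict(Counter)
        self.last = None

    def record(self, item: str) -> None:
        """Log an access, updating the successor counts."""
        if self.last is not None:
            self.successors[self.last][item] += 1
        self.last = item

    def predict(self, item: str):
        """Return the most likely next access after `item`, or None."""
        counts = self.successors.get(item)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

p = PrefetchPredictor()
for f in ["outlook.exe", "mapi.dll", "outlook.exe",
          "mapi.dll", "outlook.exe", "riched20.dll"]:
    p.record(f)
print(p.predict("outlook.exe"))  # -> mapi.dll
```

A real prefetcher would also weigh time of day and weight low-priority I/O so predictions never slow down foreground work, but the core "get what you want before you ask" logic is this simple lookup.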

Applications also struggle with high-definition (HD) images. Waters believes the hardware has been holding back good HD; single-core chips can't cope with the load. The same goes for the upcoming generation of speech technologies. He believes the true potential of speech will be seen only when the available processing power is beefed up 100-fold.

Current production systems, although much better than those available a few years ago, still have error rates 10 to 20 times higher than humans. Vista, says Waters, has good voice recognition capabilities when using super-fetch, but he admits it still has a ways to go. With Moore's Law having now morphed into the world of multiple cores, he thinks the technology will eventually get there.

"The number of cores per socket will double every two years, giving us an increase in frequency of about 20 percent," says Waters. "Other system resources, such as memory and I/O, will have to scale up rapidly in order to keep pace with processor advances."
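Waters' doubling prediction is easy to project forward. A quick arithmetic sketch — the quad-core starting point in 2007 comes from the article, while the out-years are pure extrapolation of his claim:

```python
def cores_in_year(start_year: int, start_cores: int, year: int) -> int:
    """Cores per socket doubling every two years, per the prediction
    quoted above."""
    doublings = (year - start_year) // 2
    return start_cores * 2 ** doublings

# Extrapolating from quad-core server chips in 2007:
for y in (2007, 2009, 2011, 2013):
    print(y, cores_in_year(2007, 4, y))
```

By this curve, sockets cross the eight-core "many core" threshold around 2009 — which is why memory and I/O bandwidth, not core count, become the scaling worry.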


