
We Can Compete in HPC, Say Chip Vendors

Intel and AMD don’t agree on much these days, but the two companies are in lockstep on one issue: They can compete and offer scalable performance in the high performance computing (HPC) arena, despite findings to the contrary.

Despite complaints that x86 chips can’t scale properly for high performance computing, Intel and AMD say they have solutions in the works.

A recent report in the Institute of Electrical and Electronics Engineers (IEEE) publication IEEE Spectrum found that multi-core processors are not the best solution in some HPC scenarios.

The problem, uncovered by engineers at Sandia National Laboratories in New Mexico, home of the second-fastest supercomputer in the world, is the connection between the CPU and memory.

CPU vendors like Intel and AMD hit a wall several years back: it became impractical to squeeze any more clock speed out of their CPUs, and the long-promised 4GHz processor never materialized.

The solution was to go multi-core and let two or four cores do the job in parallel. In less than five years, single-core x86 CPUs have become virtually impossible to purchase.

But in some specific circumstances, multi-core isn’t very helpful. Sandia engineers found that as more cores are added, performance degrades because the memory can’t keep up with the cores. Instead of one core talking to the memory, four cores are contending for it, which effectively leaves each core with a quarter of the memory bandwidth.
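
To see the effect concretely, here is a minimal sketch, not code from the Sandia study, of a memory-bound kernel in C with OpenMP; the array size and thread counts are illustrative assumptions. The arithmetic is trivial, so throughput is set almost entirely by how fast data can move between memory and the cores.

```c
/* Memory-bound STREAM-style triad: three large arrays, trivial math,
 * so throughput is limited by memory bandwidth rather than core count. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1L << 25)   /* ~33M doubles per array, large enough to defeat the caches */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c)
        return 1;

    for (long i = 0; i < N; i++) {
        b[i] = 1.0;
        c[i] = 2.0;
    }

    double start = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];           /* 2 reads + 1 write per element */
    double elapsed = omp_get_wtime() - start;

    /* Three arrays' worth of traffic; report the effective bandwidth. */
    double gbytes = 3.0 * N * sizeof(double) / 1e9;
    printf("threads=%d  time=%.3fs  effective bandwidth=%.1f GB/s\n",
           omp_get_max_threads(), elapsed, gbytes / elapsed);

    free(a);
    free(b);
    free(c);
    return 0;
}
```

Compiled with something like gcc -O2 -fopenmp and run with OMP_NUM_THREADS set to 1, 2 and then 4, the reported bandwidth on a typical multi-core x86 box tends to plateau well before the core count does, which is the contention the Sandia engineers describe.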

The Chip Vendors’ Rebuttal


Intel said it is addressing the issue in its TeraScale 80-core chip. “Intel’s work on stacking memory could be key to resolving long-term multicore memory bottlenecks, but this was not discussed in the article,” said an Intel spokesperson in a statement e-mailed to InternetNews.com.

“We’ve been talking in public about the need to integrate memory closer to processors for more than two years now, showing directionally what will need to happen. We are confident that the industry will work around the memory bandwidth issue; currently there are very fast memory subsystems, but reasonable cost as well as performance needs to be achieved,” the statement went on to say.

Intel recently released its Core i7 processor, its first without a front-side bus (FSB), the link that funneled all traffic between the CPU and memory through the chipset. The FSB was long viewed as an impediment to performance. The Core i7 moves the memory controller onto the processor and replaces the FSB with a much faster interconnect called QuickPath Interconnect (QPI), which offers up to 25.6 gigabytes per second of throughput.
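
The throughput figure follows from QPI’s signaling rate. The short calculation below uses the link parameters commonly cited for the first Core i7 parts (6.4 billion transfers per second, 16 data bits per transfer in each direction, counted in both directions at once); those parameters are background assumptions for illustration, not numbers taken from the vendors’ statements.

```c
/* Back-of-the-envelope peak throughput for a QPI link, using commonly
 * cited parameters for early Core i7 parts (assumptions for illustration):
 * 6.4 GT/s signaling, 16 data bits (2 bytes) per transfer per direction,
 * with both directions counted, since the link is full duplex. */
#include <stdio.h>

int main(void)
{
    const double transfers_per_sec = 6.4e9; /* 6.4 billion transfers/sec  */
    const double bytes_per_transfer = 2.0;  /* 16 data bits per direction */
    const double directions = 2.0;          /* send and receive together  */

    double gb_per_sec = transfers_per_sec * bytes_per_transfer * directions / 1e9;
    printf("QPI peak throughput: %.1f GB/s\n", gb_per_sec); /* prints 25.6 */
    return 0;
}
```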

AMD has its own CPU-to-memory interconnect, HyperTransport 3.0, and Jeff Underhill, HPC business development manager in the Server/Workstation division of AMD, said faster technologies are in the works.

“Looking further out in time, there’s definitely a need to maintain or extend the balance between processor and memory performance if we’re to realize the maximum potential from multi-core processors. However, as we’ve seen by some recent examples, new memory technologies need to increase performance, deliver energy efficiency and have broad market adoption in order to be cost effective and ultimately succeed. We’re looking at multiple ways to collaborate on solutions that meet all of those goals going forward,” he said in a statement to InternetNews.com.

This article was originally published on InternetNews.com.
