10 Data Center Management Mistakes You Might Be Making
If you think you have the perfect data center, read on for some enlightenment: even those who work in the data centers of their dreams would find room for improvement. You may not be able to afford perfection, but you can come close by changing how you handle certain aspects of data center management. Managing a collection of computer systems is no easy task, but better management and proper planning can make it far less painful. Here are the 10 major data center mistakes to avoid.
1. Inadequate Virtualization
If you operate a data center and haven't caught on that virtualization saves money, you're way behind the curve. Virtualization frees valuable rack space and saves additional money on cooling, power and service contracts for the physical systems you no longer need.
2. Untapped Cloud Computing
Like virtualization, cloud computing requires that you understand what it can do for your company or your customers. Amazon.com offers flexible, scalable plans that fit an on-demand capacity scenario. Using Canonical's Ubuntu Linux Server Edition, for example, you can create your own private cloud or tap Amazon.com's Elastic Compute Cloud (EC2) dynamically.
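The on-demand argument comes down to arithmetic: renting capacity beats owning it only up to a certain number of hours per month. A rough sketch of that break-even calculation, using hypothetical prices rather than actual EC2 rates:

```python
# Back-of-the-envelope break-even between owning a server and renting
# on-demand capacity. All prices below are illustrative placeholders,
# not actual cloud-provider rates.

def breakeven_hours(purchase_cost, monthly_overhead, months, hourly_rate):
    """Hours of on-demand usage per month at which renting costs the
    same as owning, amortizing the purchase over the service life."""
    owned_monthly = purchase_cost / months + monthly_overhead
    return owned_monthly / hourly_rate

# A $6,000 server kept for 36 months with $50/month power and cooling,
# versus a hypothetical $0.50/hour on-demand instance:
hours = breakeven_hours(6000, 50, 36, 0.50)
print(round(hours))  # 433 hours/month
```

Below that monthly usage, on-demand capacity is the cheaper option; a workload that runs around the clock (roughly 720 hours a month) favors owned or reserved capacity instead.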
3. Design Flaws
Design flaws in a standing data center are difficult to overcome, but a redesign is less expensive than a fresh build. A 20-year-old data center may still look good, but it doesn't perform up to today's greener standards. You'll have to retrofit your electrical apparatus to handle blade systems. You'll probably need to toss that old cooling system as well: contemporary servers run cooler and more efficiently than their predecessors did, so an oversized legacy cooling plant just wastes money.
4. Limited Expandability
"640K of RAM ought to be enough for anybody." How many times have you heard that quote attributed to Bill Gates, circa 1981? Whether he actually said it matters little now. The lesson is that when you build anything, pretend you're converting a Celsius temperature to Fahrenheit: double the amount you think you need and add 32. The Celsius-to-Fahrenheit rule builds some expandability into your data center. Two thousand square feet of floor space isn't enough? Try 4,032 square feet instead. Poor planning is no reason to run out of floor space or any other capacity.
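The rule of thumb above is simple enough to express as a one-line function, which also confirms the article's arithmetic for the 2,000-square-foot example:

```python
def plan_capacity(estimate):
    """Apply the 'Celsius to Fahrenheit' planning rule of thumb:
    double your initial estimate and add 32."""
    return estimate * 2 + 32

# 2,000 sq ft of estimated floor space becomes a 4,032 sq ft plan:
print(plan_capacity(2000))  # 4032
```

The same rule applies to any capacity you're sizing: power, cooling, rack count, or network ports.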
5. Relaxed Security
Enter any data center and you'll see card readers, retina scanners, circle locks, weight scales or other high-technology security systems in place. But next to those extreme measures, you'll often find a keyed access door that bypasses them all. Physical security requires no bypass; if one is in place, consider your security compromised.
6. Haphazard Server Management
To manage your server systems, do you need physical access or can you manage them remotely? Every contemporary server ships with a maintenance connection for remote management. Use it. Enable it. Every person who enters a data center introduces some risk of system failure: incorrectly labeled systems, wrong locations, a misread system name; the list goes on. Do yourself a favor: enable those remote access consoles when you provision your physical systems.
7. Ill-fated Consolidation Efforts
One order of business in data center management is to minimize the number of systems on the floor or in the racks; server consolidation is how that's done. Consider a consolidation ratio of 2-to-1 or 3-to-1 unacceptable. Physical systems running in the 5 percent to 20 percent utilization range can easily consolidate onto a single host with five, six or more of their peers. Underutilized systems waste rack space, power and money in the form of service contracts.
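A first cut at a consolidation plan is a packing problem: assign each server's load to a host until the host reaches a target ceiling. A toy first-fit sketch, using integer utilization percentages and an assumed 80 percent per-host cap; real consolidation must also weigh RAM, I/O, and peak rather than average load:

```python
def consolidate(utilizations, target=80):
    """First-fit-decreasing packing of per-server CPU utilization
    percentages onto hosts, capping each host at `target` percent.
    A simplification: CPU-only, average load, no headroom for spikes."""
    hosts = []
    for load in sorted(utilizations, reverse=True):
        for host in hosts:
            if sum(host) + load <= target:
                host.append(load)
                break
        else:
            hosts.append([load])  # no existing host fits; open a new one
    return hosts

# Eight servers idling at 5-20 percent utilization pack onto two hosts,
# an 8-to-2 (4-to-1) consolidation ratio:
servers = [5, 10, 15, 20, 8, 12, 7, 18]
print(len(consolidate(servers)))  # 2
```

Even this crude model shows why 2-to-1 ratios leave money on the table: the combined load of eight lightly used servers is still below one host's ceiling plus change.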
8. Overcooled/Undercooled Space
What temperature is your data center? You should find out. If your data center operates below 70 degrees Fahrenheit, you're wasting money. Servers need air flow more than they need arctic temperatures. Take a stroll through your data center. If it's comfortable for you, it's comfortable for your servers. There's no need to freeze your data center employees or make them sweat.
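If you want a number rather than a stroll, ASHRAE's recommended envelope for server inlet air is roughly 18-27 degrees C (about 64-81 degrees F). A small check against that range (the thresholds here are my reading of the ASHRAE guidance, so verify against the current edition):

```python
def inlet_temp_ok(temp_f, low=64.4, high=80.6):
    """Check a server inlet temperature (Fahrenheit) against the
    ASHRAE-recommended envelope of roughly 18-27 C (64.4-80.6 F).
    Below the range you're overcooling and wasting money; above it
    you're shortening hardware life."""
    return low <= temp_f <= high

print(inlet_temp_ok(72))  # True: comfortable for you and your servers
print(inlet_temp_ok(62))  # False: overcooled
```

Note that the temperature that matters is at the server inlet, not the thermostat on the wall; hot-aisle exhaust can legitimately run much warmer.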
9. Underpowered Facility
How many times have you heard that a particular data center has floor space but no more power? You hear it more than you should, if you hear it at all. An underpowered facility is a victim of poor planning. (See No. 4 above.) Virtualization can help give you back some power. Server consolidation can also assist. But, those are short-term fixes for the greater problem of an underpowered facility.
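The "floor space but no power" trap is easy to model: divide the facility's power budget by per-rack draw, after subtracting the overhead that cooling and distribution consume (commonly expressed as PUE). A sketch with an assumed PUE of 1.6; the figures are illustrative, not measurements from any real facility:

```python
def racks_supported(facility_kw, kw_per_rack, pue=1.6):
    """How many racks a facility's power feed can support once
    cooling and distribution overhead (PUE) is accounted for.
    The default PUE of 1.6 is an illustrative assumption."""
    usable_kw = facility_kw / pue
    return int(usable_kw // kw_per_rack)

# A 400 kW feed with 5 kW racks at PUE 1.6 supports 50 racks,
# regardless of how much empty floor space remains:
print(racks_supported(400, 5))  # 50
```

Run this calculation before the build-out, not after; it's exactly the kind of doubling-worthy estimate No. 4 warns about.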
10. Rack Overcrowding
If you've ever attempted to work in a fully populated rack, you probably wished you had miniature hands or extra long fingers. It might seem inefficient to leave a bit of space between systems, but those who have the job of plugging and unplugging components for those systems will thank you. Poor planning leads to rack overcrowding, and it's unnecessary. Virtualization, consolidation and a more efficient arrangement will ease the problem. Experiencing an outage because of accidentally unplugging a server might convince you to leave a bit of space between systems.
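The serviceability trade-off is quantifiable: leaving a unit of open space after each server halves density in exchange for reachable cables. A quick sketch, assuming a standard 42U rack:

```python
def servers_per_rack(rack_units=42, server_units=1, gap_units=1):
    """Servers that fit in a rack when `gap_units` of open space is
    left after each one. Assumes a standard 42U rack and trades
    density for serviceability."""
    return rack_units // (server_units + gap_units)

print(servers_per_rack())              # 21: 1U servers, 1U gaps
print(servers_per_rack(gap_units=0))   # 42: fully packed, good luck
```

Twenty-one reachable servers often beats forty-two you can't unplug without an outage.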
Ken Hess is a freelance writer who writes on a variety of open source topics including Linux, databases, and virtualization. He is also the coauthor of Practical Virtualization Solutions, which was published in October 2009. You may reach him through his web site at http://www.kenhess.com.