The fully virtualized environment
This is the decade of virtualization, the decade in which the whole data centre will be integrated into a single virtual machine. This is less a prediction than a simple statement of what will inevitably happen. Just as 2 + 3 = 5, so the current combination of ageing and increasingly complex data centres running out of capacity and space, plus rapidly rising energy costs, plus recession, plus the relevant technology, inevitably equals virtualization. That is the argument.

The reality is that IT, and therefore the business it serves, will benefit in many other ways. Virtualization is not merely a defensive action against rising costs, but also an affirmative action to improve the performance of the data centre. Put simply, with virtualization you get much more for much less. And that's what we're going to look at: the why, how and what of getting more for less in a virtualized data centre.
Why can’t we just carry on as we are?
The status quo in the traditional data centre cannot continue, for several reasons. The first is very simple. As business expands, and its reliance on IT intensifies, so the computing capacity of the data centre must increase. This has generally meant more equipment occupying more floor space and consuming more energy. Many data centres are now physically bursting at the seams. This usually means that a new site must be acquired and established, or that the company must consider a third-party hosting option. Neither may be entirely satisfactory, and both can be very expensive.
Kennedys, a law firm that has doubled in size in the last four years
The second reason is also ultimately based on cost. Most existing data centres were built for a different age, one in which energy was relatively cheap. But times have changed. Energy is now expensive and getting more expensive, and we already have warnings of potential power cuts in the relatively near future. So even if the wholesale price of the gas and oil that provide our energy remains stable, which isn't likely, the increasing demand on our existing energy supply will inevitably force prices further upwards.
This natural price increase is compounded by the data centres’ increasing power consumption. As long ago as 2007, Gartner noted:
These legacy data centers typically were built to a design specification of about 35 to 70 watts per square foot. Current design needs can vary from between 150 to 200 watts per square foot, and by 2011, this could rise to more than 300 watts per square foot. These figures for energy per square foot represent just the energy needed to power the IT equipment; they don’t include the energy needed by air-conditioning systems to remove the heat generated by this equipment. Depending on the tier level and future equipment density plans in the data center, these cooling needs can increase the overall power requirements by an additional 80 per cent to 120 per cent.
U.S. Data Centers: The Calm Before the Storm: Gartner, September 2007
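Gartner's watts-per-square-foot figures can be turned into a rough back-of-envelope estimate of total facility power. The sketch below is purely illustrative: the 5,000 sq ft floor area and the 100 per cent cooling overhead (the middle of Gartner's 80-120 per cent range) are assumptions, not figures from the report.

```python
# Rough power estimate for a hypothetical 5,000 sq ft data centre,
# using the watts-per-square-foot figures quoted by Gartner above.
# The floor area and the 100% cooling overhead are illustrative assumptions.

def total_power_kw(area_sqft, watts_per_sqft, cooling_overhead):
    """IT load plus cooling load, in kilowatts."""
    it_load_kw = area_sqft * watts_per_sqft / 1000
    return it_load_kw * (1 + cooling_overhead)

area = 5000  # sq ft (assumed)

legacy = total_power_kw(area, 50, 1.0)    # mid-point of the 35-70 W/sq ft legacy design
current = total_power_kw(area, 175, 1.0)  # mid-point of the 150-200 W/sq ft current design

print(f"Legacy design:  {legacy:.0f} kW total")   # 500 kW
print(f"Current design: {current:.0f} kW total")  # 1750 kW
```

On these assumptions, the same floor space now needs more than three times the power it was designed for, before the predicted rise to 300 W/sq ft is even considered.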
This combination of rising prices and increasing consumption against a background of either recession or at least economic doldrums means that costs are in danger of spiralling out of control over the next few years. But there is a solution. Traditional data centres are not merely expensive, they are inherently inefficient in their use of resources. Gartner again:
Utilization of infrastructure remains low for most hardware platforms. A typical x86 server uses between five per cent and 10 per cent of its available capacity during a 24-hour period – reduced instruction set computer (RISC) Unix systems are slightly better, at 10 per cent to 20 per cent.
How IT Management Can “Green” the Data Center: Gartner, January 2008
Fredrik Sjostedt, Director – EMEA Product Marketing at VMware, explains it like this: “Over the last 10 years, data centres have become x86 processor-based. The way organizations have been deploying this has been to install one application per server, so that no single application can bring down another or start to consume too many resources to another application’s detriment. The result, with just one application per server, is that each server tends to operate typically at something between five per cent and 15 per cent capacity. But if we translate this to financial terms, it means that any company with 100 servers has spent 90 per cent too much money in relation to actual requirements, and ends up with a lot of wasted resource in both processing and storage capacity tied up in underused servers.”
Server virtualization, a key element in virtualizing the data centre, places multiple virtual servers on each physical server. This reduces the number of physical servers required, saves floor space, and cuts energy consumption. It transforms the position from 'no room to expand' to 'ample space, with existing physical servers to cater for any necessary expansion'. That's the first stage: consolidating a large number of underused servers into a much smaller number of efficiently used ones. But the management layer in the virtualization software takes it to the next level.
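The arithmetic behind consolidation can be sketched in a few lines. The utilization figures come from the Gartner and VMware quotes above; the 65 per cent target utilization for the consolidated hosts is an assumed planning figure, not a number from the source.

```python
import math

def hosts_after_consolidation(n_servers, avg_util, target_util):
    """Minimum physical hosts needed if the aggregate load of n_servers
    running at avg_util is repacked onto hosts driven at target_util.
    A first-order sketch: it ignores per-VM memory and peak-load headroom."""
    aggregate_load = n_servers * avg_util  # load in 'whole-server' units
    return math.ceil(aggregate_load / target_util)

# 100 one-app-per-server machines at 10% utilization (Gartner's x86 figure),
# repacked onto hosts run at an assumed 65% target utilization:
print(hosts_after_consolidation(100, 0.10, 0.65))  # 16 hosts
```

Even this crude model shows the scale of the saving: a 100-server estate collapses to well under 20 physical hosts, which is where Sjostedt's "90 per cent too much money" figure comes from.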
“Let’s say,” says Sjostedt, “that you start a new marketing campaign and set up a new website – and the campaign is wildly more successful than you expected. Instead of hundreds of hits, you get hundreds of thousands of hits. The physical server allocated to that application now needs more resources. This is the Nirvana of virtualization. The management layer in the virtualization software recognises the situation and automatically and seamlessly moves other applications from that physical server to another physical server.” What you have is no longer 100 underused servers, nor really 20 correctly used servers – you have one big virtual machine that uses all of the resources of all of the servers to the best advantage of all of the applications.
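The automatic rebalancing Sjostedt describes can be illustrated with a simple greedy heuristic: while any host is over a load threshold, migrate its cheapest virtual machine to the least-loaded host. This is a toy sketch, not VMware's actual algorithm; the 80 per cent threshold and the host and VM names are invented for illustration.

```python
def rebalance(hosts, threshold=0.8):
    """Toy sketch of automatic VM rebalancing.
    hosts maps host name -> {vm name: load fraction}.
    Returns the list of migrations performed as (vm, from_host, to_host)."""
    moves = []
    while True:
        load = {h: sum(vms.values()) for h, vms in hosts.items()}
        hot = max(load, key=load.get)    # most loaded host
        cool = min(load, key=load.get)   # least loaded host
        if load[hot] <= threshold:
            return moves                 # nothing is overloaded
        vm = min(hosts[hot], key=hosts[hot].get)  # cheapest VM to move
        # stop if the move would not actually reduce the imbalance
        if load[cool] + hosts[hot][vm] >= load[hot]:
            return moves
        hosts[cool][vm] = hosts[hot].pop(vm)      # 'live-migrate' the VM
        moves.append((vm, hot, cool))

# Hypothetical estate: the campaign website saturates host-a.
hosts = {
    "host-a": {"web": 0.7, "mail": 0.2, "db": 0.1},  # overloaded: 1.0
    "host-b": {"batch": 0.2},
}
print(rebalance(hosts))  # moves db, then mail, from host-a to host-b
```

In the real product this decision-making is continuous and takes memory, affinity rules and migration cost into account; the point of the sketch is simply that the management layer, not the administrator, decides where workloads live.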
OK, so we know we’ve got to do something, and virtualization seems to offer the best possible route. But it’s not a process to take lightly, and we need a plan of action. Many companies will have already made a start, probably with IT development systems. The IT Department may already understand the benefits – now it has to sell those benefits to senior management and develop an implementation plan.
Alstom, a VMware customer since 2002, decided in early 2009 to upgrade to VMware vSphere 4
Before going live, Jones was eager to test the performance of Alstom’s upgraded virtual infrastructure. The VMware consultants assisted in setting up a test environment in which Alstom initially upgraded six hosts to VMware vSphere 4. There, Alstom gauged the performance of core business applications such as Lotus Notes Domino servers, Blackberry Enterprise servers, and clients for desktop infrastructure, domain controllers, Oracle databases and Citrix Terminal Services.
- Don’t assume you can do it on your own with in-house expertise (if you could, you would have already done so). So choose a VM supplier that can provide the complete virtualized data centre – and stick with it. You really don’t want to have to change suppliers halfway through. Choose a supplier that can provide an experienced consultancy team and all the likely virtual requirements: servers, desktops, storage and cloud at the least.
- Select an in-house Change Agent Team. You need champions who can bridge the gap between IT and senior business management; who can get management on-board and keep them there. “The things that tend to slow down implementation tend to be last minute nerves or technical issues that are actually communication problems – and they are more likely to come from the business side than the technical side of the company,” says Martin Snellgrove, EMC Consulting Global Virtualisation Director. So you also need to get your Change Agents trained and certified with your VM supplier so that they can anticipate and counter nerves with knowledgeable solutions.
- Develop an implementation plan. Don’t try to do it all at once; do it in project waves, a bit at a time. Don’t be afraid to cherry-pick. Include a detailed ROI statement, both for individual waves and for the full project. Not only will you find that future costs can be predicted accurately, you will also obtain a more accurate picture of what you’ve currently got: discovering orphaned databases and unused, but still networked, servers is not unusual. Make sure this implementation plan takes account of your existing IT projects – hence the need to virtualize in waves. This will also help in getting management buy-in: they won’t feel that they are irrevocably committing themselves too heavily too soon. Nevertheless, your aim is to be able to complete the full virtualization as rapidly as possible.
- Implement the first wave. This becomes your proof of concept. It will confirm to senior management that you were right: it will demonstrate the advantages and confirm the full potential ROI. That’s when you get complete management buy-in.
- Be aware that this piecemeal approach does have a potential roadblock. You will have managers of the new virtualized areas and managers of the remaining old physical areas; and the old guard can become entrenched in their old ways. At the same time, you may start getting new demands from the business side of the company. “It is essential,” says Snellgrove, “that you must start your change and configuration and overall asset management as you transform the data centre so that you remain in control of how and when you would create a new virtual machine. This will stop a now enthusiastic business side demanding, at speed, a mass of new implementations – resulting in a project running away from you before you’ve got full control of it.” The success of virtualization can become its own enemy.
- Go back to the implementation plan, and just do it. The result: a fully virtualized data centre that is more efficient, costs less to run, requires less maintenance and now has space and capacity for further expansion at a fraction of the earlier cost.
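The per-wave ROI statement the implementation plan calls for can be sketched as simple payback arithmetic. All the figures below are invented placeholders, not vendor or customer data; a real statement would also model licence renewals and staff time.

```python
def payback_months(upfront_cost, monthly_saving):
    """Months until cumulative savings cover a wave's upfront cost."""
    months = 0
    cumulative = 0.0
    while cumulative < upfront_cost:
        months += 1
        cumulative += monthly_saving
    return months

# Illustrative wave: consolidating 30 servers onto 5 hosts.
wave_cost = 120_000  # hosts, licences, consultancy (placeholder figure)
saving = 9_500       # power, space, maintenance saved per month (placeholder)
print(payback_months(wave_cost, saving))  # 13 months
```

Calculated per wave like this, each stage of the project carries its own payback date, which is exactly the kind of figure that secures management buy-in before the next wave starts.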
What you get
The benefits of full virtualization will be apparent almost immediately.
Reduced costs
- capital costs – fewer servers, less hardware investment
- running costs – less floor space, lower air-conditioning costs, lower maintenance costs; potential software savings from fewer OS licences and bundled ISV licences
- manpower costs – what typically took weeks now takes hours: provisioning and testing an OS and new applications; backup; full disaster recovery process testing and validation; and just about everything else that used to tie up IT.
Improved IT services
- IT staff will be able to achieve results in a fraction of the time they used to take. This means that they will be able to take on more work, test out new suggestions, implement improvements and undertake new projects where before they just didn’t have the time.
- IT management will be released from mundane maintenance and catch-up. Your most experienced and capable IT people will be able to stop and think, to take on a more strategic role within the business: in short, they will be able to support the business rather than just support their department.
- Private cloud options: one example, already used in some installations, is to provide development machines on licence – they can be licensed for a set period, after which they are automatically reclaimed to the pool. This concentrates the minds of the developers and prevents ‘server sprawl’.
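The time-limited development machines described in the last point can be sketched as a simple lease pool: machines are granted for a fixed period and reclaimed automatically on expiry. The class, names and expiry policy below are illustrative assumptions, not a description of any particular product's API.

```python
from datetime import datetime, timedelta

class DevMachinePool:
    """Sketch of a private-cloud pool that leases development VMs for a
    fixed period and reclaims them automatically when the lease expires."""

    def __init__(self, capacity, lease_days=30):
        self.capacity = capacity
        self.lease_days = lease_days
        self.leases = {}  # vm name -> expiry datetime

    def request(self, vm_name, now):
        """Try to lease a VM; returns False if the pool is exhausted."""
        self.reclaim(now)  # first sweep out any expired leases
        if len(self.leases) >= self.capacity:
            return False
        self.leases[vm_name] = now + timedelta(days=self.lease_days)
        return True

    def reclaim(self, now):
        """Return expired machines to the pool; prevents server sprawl."""
        expired = [vm for vm, exp in self.leases.items() if exp <= now]
        for vm in expired:
            del self.leases[vm]
        return expired

pool = DevMachinePool(capacity=2, lease_days=30)
t0 = datetime(2010, 1, 1)
pool.request("dev-1", t0)
pool.request("dev-2", t0)
print(pool.request("dev-3", t0))                       # False: pool is full
print(pool.request("dev-3", t0 + timedelta(days=31)))  # True: old leases expired
```

The automatic reclaim is the point: no developer has to remember to give a machine back, so unused capacity flows straight back into the pool.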
To summarize, in the words of Burton Group’s Chris Wolf: “VMware is data center proven… Virtualization provides too many benefits to stand by and watch others improve their availability and IT processes, while saving on power and server hardware costs as a result of virtualization implementations. What’s virtualization worth? Ask one of your Windows server admins who is struggling to return a critical server to operation on new hardware. Ask a developer who wants to test a piece of his code but is weighing whether the time to stage a system is worth it. Ask a server team in a data center where there is no more physical room or power to add servers… The question should not be what is the cost of virtualization, but rather what is the cost of not incorporating virtualization within your infrastructure.”