Why government virtualization initiatives fail
Federal modernization initiatives and executive orders rely heavily on virtualization. The greening of IT, telecommuting, cloud adoption, securing data at rest, disaster recovery, big data analytics, and datacenter consolidation are all virtualization-centric. Yet in most cases, government virtualization projects fail to succeed.
Dave Gwyn claimed in his blog on fcw.com that nine in 10 government desktop virtualization (VDI) initiatives never reach production. Some reasons these projects are never completed are as follows:
Cost: Unnecessary infrastructure costs such as hardware, set-up, and maintenance, which are typically seven to 10 times the cost of the software licenses, can have a detrimental effect on a virtualization initiative.
Complexity: In a typical datacenter, the components of a virtualization pilot arrive from different vendors, on different days, with missing parts and numerous pages of bills. If the solution takes more than a couple of weeks to test-drive, the initiative will probably never see the green light.
Power: Excessive power requirements can delay virtualization efforts by months—while agencies wait for additional power circuit installations for servers, storage area networks (SANs), controllers, disk shelves and switches. Meanwhile, adding power flies in the face of many current greening and consolidation initiatives.
Cooling: Adding cooling is as time-consuming, expensive, and ecologically unsound as adding power.
Scaling: Agencies are typically given two undesirable alternatives: risk buying all the infrastructure up front, or run a pilot on less expensive, non-production infrastructure and then replace it with untested production hardware when the pilot runs out of horsepower. More often than not, they choose neither.
Space: In eight of ten virtualization initiatives, rack space presents a problem both in real cost and opportunity cost. And nothing fills up rack space faster than servers, switches and SANs.
Weight: Shipping and hand-carrying heavy infrastructure can also dampen the initial enthusiasm for a pilot.
Politics: In most virtualization initiatives, infighting among SAN, network, and server teams leads to lost productivity, longer time-frames, or the death of the entire project.
Speculation: Much of the ROI estimate for a traditional virtualization project is based on guesswork: how much RAM will be needed per VM, how much storage, how many IOPS, and so on. When those guesses prove wrong, the entire project may be cancelled.
Performance: In a traditional virtualization infrastructure, optimizing the separate server, storage, and network components requires a complex balancing act, and there is no guarantee the result will satisfy user expectations.
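The speculation problem above can be made concrete with a rough sizing sketch. All figures here are hypothetical assumptions for illustration, not numbers from any agency or from Gwyn's blog; they show how a modest error in one per-VM guess changes the hardware bill.

```python
import math

def servers_needed(num_vms, ram_per_vm_gb, ram_per_server_gb):
    """Servers required if RAM is the binding constraint (hypothetical model)."""
    return math.ceil(num_vms * ram_per_vm_gb / ram_per_server_gb)

# Planned estimate: 500 virtual desktops at 4 GB each on 256 GB hosts.
planned = servers_needed(500, 4, 256)   # 8 servers

# Reality: users actually need 6 GB each; the guess was off by 50%.
actual = servers_needed(500, 6, 256)    # 12 servers

print(planned, actual)
```

A 50% miss on one estimate adds four servers, plus the associated power, cooling, rack space, and switch ports, which is exactly how a pilot's budget and timeline unravel.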
To avoid virtualization initiative failures, federal agencies are embracing "hyper-converged infrastructure." Hyper-convergence puts the server and storage tiers in a single, small component, eliminating the need for separate servers, SANs, and storage-network fabric. This significantly lowers cost, complexity, power, cooling, space, weight, and politics. It remains to be seen whether hyper-convergence will meet the government's virtualization demands effectively.