EMC Outlines Blueprint Of Virtualization

by Julia Fernandes    May 02, 2005

With companies churning out ever-larger volumes of data, effective storage management has assumed greater importance than ever before. In this scenario, can storage virtualization be the next killer application in storage technology?

Ajaz Munsiff, regional practice leader, EMC Corporation, answers this and much more in an exclusive interview with CXOtoday.com.

What is storage virtualization?
Virtualization enables the creation of logical (virtual) representations of physical IT resources, such as memory, networks, servers, and storage, that perform as if they were actual resources. In virtualized storage environments, applications can see and interact with these logical components, which are independent of, but able to interact with, their physical counterparts, including SANs, disk arrays, tape components, and other storage media. By freeing applications from physical resources, virtualization offers a simpler and more effective means of centrally accessing and managing infrastructure components and stored information.
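
To make that idea concrete, here is a minimal sketch with invented names (it is not EMC code): a virtual volume presents one contiguous logical address space, and a small mapping table translates each logical block to a physical location on whichever array actually holds it.

    # Minimal sketch of the logical-to-physical mapping at the heart of
    # storage virtualization; all names are illustrative.
    EXTENT_SIZE = 1024  # logical blocks per extent (arbitrary for the example)

    class VirtualVolume:
        def __init__(self, extent_map):
            # extent_map[i] = (array_id, physical_start_block) for logical extent i
            self.extent_map = extent_map

        def resolve(self, logical_block):
            """Translate a logical block address into (array, physical block)."""
            extent, offset = divmod(logical_block, EXTENT_SIZE)
            array_id, physical_start = self.extent_map[extent]
            return array_id, physical_start + offset

    # One virtual volume spread across two heterogeneous arrays.
    volume = VirtualVolume({0: ("emc_array", 0), 1: ("third_party_array", 50000)})
    print(volume.resolve(100))   # ('emc_array', 100)
    print(volume.resolve(1500))  # ('third_party_array', 50476)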

What are the potential benefits?
A storage virtualization solution simplifies storage capacity planning and reduces its cost. It allows the use of heterogeneous storage, empowering enterprises to leverage their current infrastructure and to make future purchases based on the best choices available rather than being tied to homogeneous, proprietary storage offerings.

It can also provide enterprise-wide manageability, keeping storage systems constantly available and scalable to meet future needs. Storage space can be reallocated easily, with minimal impact on application servers, reducing downtime and allowing enterprises to do business at full intensity, 24×7.
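
To picture the pooling and reallocation, here is a purely illustrative sketch (the class and array names are invented): free capacity contributed by any vendor's array can be handed to a virtual volume on demand, so the application simply sees a larger device and no data has to move.

    # Illustrative sketch of online capacity expansion from a heterogeneous pool.
    class StoragePool:
        def __init__(self):
            # free extents contributed by different vendors' arrays
            self.free_extents = [("emc_array", 8), ("third_party_array", 3)]

        def allocate(self):
            """Hand out the next free extent, regardless of which array owns it."""
            return self.free_extents.pop(0)

    class VirtualVolume:
        def __init__(self):
            self.extent_map = []  # ordered list of (array_id, start_extent)

        def grow(self, pool, extents=1):
            """Extend the volume in place; the host simply sees more capacity."""
            for _ in range(extents):
                self.extent_map.append(pool.allocate())

    pool, volume = StoragePool(), VirtualVolume()
    volume.grow(pool, extents=2)
    print(volume.extent_map)  # extents may come from different physical arrays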

Virtualization is also fundamental to enabling business continuity functionality, such as mirroring and remote backup. With proper implementation, storage virtualization can yield tremendous cost savings and other vital benefits to today’s enterprises.

What are the likely challenges?
In terms of scale, virtualization technology aggregates multiple devices, so it must scale in performance to support the combined environment. In terms of functionality, it masks existing storage functionality, so it must either provide the required functions itself or enable the existing ones. In terms of management, it introduces a new layer of management, which must be integrated with existing storage-management tools. Lastly, in terms of support, it adds new complexity to the storage network, which requires vendors to perform additional interoperability testing.

Could you describe the virtualization architectures?
There are two types of architectures: in-band and out-of-band. In-band virtualization solutions introduce an appliance or device “in the data path.” However, there are inherent limitations to this approach in the areas of complexity, risk, investment protection, scalability and ease of deployment. All the data in the SAN has to be virtualized and pass through the virtualization layer. Instead of gaining more flexibility, you become constrained by a difficult-to-implement, non-scalable solution.
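
To illustrate that constraint with a purely hypothetical sketch (not any vendor's product): in the in-band model a single appliance sits between hosts and arrays, so every read and write in the SAN is received, remapped and forwarded by that one component, which bounds both performance and scalability.

    # Illustrative sketch of the in-band model: one appliance in the data path.
    class InBandAppliance:
        def __init__(self, mapping):
            self.mapping = mapping  # logical block -> (array_id, physical block)
            self.ios_handled = 0    # every I/O in the SAN funnels through here

        def write(self, arrays, logical_block, data):
            # The appliance receives the I/O, remaps it and forwards it itself,
            # so its capacity and availability bound the whole environment.
            self.ios_handled += 1
            array_id, physical_block = self.mapping[logical_block]
            arrays[array_id][physical_block] = data

    arrays = {"array_a": {}, "array_b": {}}
    appliance = InBandAppliance({0: ("array_a", 10), 1: ("array_b", 99)})
    appliance.write(arrays, 0, b"payload")
    print(appliance.ios_handled, arrays)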

Network-based virtualization (sometimes called the “out-of-band” approach) leverages the existing SAN infrastructure by employing the next generation of intelligent switch/director technology. The implementation in the switch uses an open, standards-based approach. Developing the technology in an open manner means that storage vendors need to work as a team with switch vendors to ensure that a heterogeneous environment functions seamlessly. This means developing standard interfaces and giving customers a choice of switches from multiple suppliers.

An additional element is a management appliance that sits “outside the data path” and focuses primarily on managing the overall virtual environment, including mapping the location of the data. No “state” or version of the data is ever held in the network. The application is not told that a write has completed until the data is properly stored on the array.
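
A hedged sketch of that split, with invented class names rather than EMC's: the management appliance owns and distributes the mapping but never touches application data, the intelligent switch applies the mapping in the data path, and a write is acknowledged only once the array has actually committed it.

    # Illustrative sketch of the out-of-band model: the map is managed outside
    # the data path, and no data or "state" is cached in the network.
    class ManagementAppliance:
        """Control path only: owns the map, never sees application data."""
        def __init__(self):
            self.mapping = {0: ("array_a", 10), 1: ("array_b", 99)}

        def push_map(self, switch):
            switch.mapping = dict(self.mapping)

    class IntelligentSwitch:
        """Data path: applies the map and forwards I/O straight to the array."""
        def __init__(self):
            self.mapping = {}

        def write(self, arrays, logical_block, data):
            array_id, physical_block = self.mapping[logical_block]
            arrays[array_id][physical_block] = data  # the array commits the write
            return True  # acknowledged only after the data is on the array

    arrays = {"array_a": {}, "array_b": {}}
    appliance, switch = ManagementAppliance(), IntelligentSwitch()
    appliance.push_map(switch)                   # control path
    acked = switch.write(arrays, 1, b"payload")  # data path, appliance not involved
    print(acked, arrays)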

What is EMC’s stand on Virtualization?
While there are a variety of hardware architectures for virtualization, EMC believes the most appropriate is one that is out-of-band and leverages intelligent switches in the storage network. The policies should also be driven by the management systems already present in the environment. For example, EMC ControlCenter can provide a single set of policies for virtualization and for virtualized storage.

EMC is set to launch its Storage Router, high-performance virtualization software that increases the flexibility of the storage infrastructure by moving data seamlessly across multi-tiered, heterogeneous SANs while minimizing storage-related downtime.
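
As a generic illustration of how such seamless movement can work in principle (this is a sketch under assumed names, not a description of the Storage Router itself): an extent is first copied to the target tier, and only then is the virtualization map atomically repointed, so the host keeps addressing the same logical blocks throughout.

    # Generic sketch of non-disruptive data movement under a virtualization layer.
    def migrate_extent(extent_map, arrays, extent_id, target_array, target_start):
        """Copy an extent to another tier, then atomically repoint the map."""
        src_array, src_start = extent_map[extent_id]
        # 1. Background copy: host I/O continues against the old location.
        arrays[target_array][target_start] = arrays[src_array][src_start]
        # 2. Atomic remap: the same logical extent now resolves to the new tier;
        #    the application never changes what it addresses.
        extent_map[extent_id] = (target_array, target_start)

    arrays = {"tier1_array": {0: b"hot data"}, "tier2_array": {}}
    extent_map = {0: ("tier1_array", 0)}
    migrate_extent(extent_map, arrays, 0, "tier2_array", 0)
    print(extent_map, arrays["tier2_array"])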

What is the current state of virtualization technologies in the marketplace?
Market acceptance has been slow, but will begin to take off later this year as more enterprise-class storage virtualization solutions come to market. New implementations will be developed with an open approach that is highly compatible with industry efforts to develop a standard API for SAN-based applications.

Could you explain the Fabric Application Interface Standard?
The common link in the EMC, Brocade, McData and Cisco alliance is a low-key storage API standard called the Fabric Application Interface Standard (FAIS), which will gain a higher profile this year when it is ratified by the American National Standards Institute's (ANSI) T11 Committee.

Could you outline the future of storage virtualization and storage utilities?
Companies that have large, diverse and complex environments and are looking to simplify the management of those environments are all good candidates for implementing virtualization. Some of the verticals where we have seen early interest are telecommunications, financial services and retail, though interest is certainly not confined to these verticals.