Stefan Bernbo

Virtualization and the Next Generation of Storage Architecture

The trend towards greater virtualization of infrastructure is not slowing down anytime soon

Legendary Intel co-founder Gordon Moore's eponymous law holds that the number of transistors that fit on a chip will double roughly every two years, a prediction that has held remarkably firm as ever-smaller devices have grown increasingly powerful. In recent years, a similar observation has come to apply to the creation of data. A 2011 IDC study found that the amount of data created by the world's devices and people doubles every two years, with a staggering 1.8 zettabytes (a zettabyte is one billion terabytes) created in 2011 alone.
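The doubling claim is easy to make concrete. The short sketch below takes the 1.8 ZB figure and the two-year doubling period from the IDC study cited above and projects yearly data creation forward; the chosen years are purely illustrative.

```python
# Rough projection of yearly data creation under the IDC "doubles every
# two years" assumption, starting from the 1.8 ZB reported for 2011.
BASE_YEAR = 2011
BASE_ZETTABYTES = 1.8
DOUBLING_PERIOD_YEARS = 2

def projected_zettabytes(year: int) -> float:
    """Data created in `year`, assuming steady doubling every two years."""
    return BASE_ZETTABYTES * 2 ** ((year - BASE_YEAR) / DOUBLING_PERIOD_YEARS)

for year in (2011, 2013, 2015, 2020):
    print(f"{year}: ~{projected_zettabytes(year):.1f} ZB")
# 2011: ~1.8 ZB, 2013: ~3.6 ZB, 2015: ~7.2 ZB, 2020: ~40.7 ZB
```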

A significant driver of this trend in the enterprise is the explosion of virtual machines. In virtualization, software simulates the functions of physical hardware, offering new levels of flexibility and hardware cost savings as more and more functionality is virtualized. Yet the rapid adoption of virtualization also lets organizations run substantially more applications at any given time, demanding near-unheard-of levels of storage and renewing the focus on elegant management, flexibility and efficiency.

To paraphrase the Red Queen in Lewis Carroll's Through the Looking-Glass, the entire storage ecosystem must therefore adapt to a virtualized world as fast as it can just to stay competitive.

The Rise of Virtualization
Virtualization has become a significant trend for a number of reasons, chief among them cost savings and flexibility. One main benefit of virtualization is that it makes more efficient use of the data center's hardware. Typically, the physical servers in a data center sit idle most of the time. By running virtual servers on that hardware, an organization can drive up CPU utilization and get far more work out of each machine. Those savings are what push companies to virtualize more and more of their physical servers.
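The cost argument comes down to simple arithmetic. The back-of-the-envelope sketch below is not based on figures from this article; the utilization and sizing numbers are illustrative assumptions. It shows how consolidating lightly loaded physical servers onto virtualized hosts shrinks the machine count.

```python
import math

# Illustrative assumptions: 50 physical servers averaging 10% CPU utilization,
# consolidated onto virtualization hosts we are willing to run at 70% load.
physical_servers = 50
avg_utilization = 0.10          # fraction of one server's CPU actually used
target_host_utilization = 0.70  # headroom left for spikes and failover

total_cpu_demand = physical_servers * avg_utilization   # 5.0 "servers" of real work
hosts_needed = math.ceil(total_cpu_demand / target_host_utilization)

print(f"Work to run: {total_cpu_demand:.1f} server-equivalents of CPU")
print(f"Virtualized hosts needed: {hosts_needed} (down from {physical_servers})")
# -> 8 hosts instead of 50, before accounting for memory, I/O or licensing limits
```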

The other main benefit of virtualization is flexibility. It is simply more convenient to run infrastructure as virtual machines rather than as physical machines. For example, if an organization wants to change hardware, the data center administrator can migrate a virtual server to the newer, more powerful hardware with little effort, getting better performance for a smaller expenditure. Before virtual servers were an option, administrators had to install the new server and then reinstall the software and migrate all the data stored on the old one - a far trickier process. It's much easier to migrate a virtual machine than a physical one.
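To make the migration point concrete, here is a minimal sketch of moving a running VM between hosts using the libvirt Python bindings, assuming a KVM/QEMU environment in which both hosts can already see the VM's disk on shared storage. The hostnames and the domain name are placeholders, not anything from the article.

```python
import libvirt

# Connect to the source and destination hypervisors (KVM/QEMU via libvirt).
src = libvirt.open("qemu+ssh://host-old/system")
dst = libvirt.open("qemu+ssh://host-new/system")

dom = src.lookupByName("app-server-01")  # the running VM to move

# Live-migrate: memory is copied while the guest keeps running, then execution
# switches to the destination. The disk must already be visible to both hosts
# (e.g., on a NAS/SAN); otherwise extra flags for non-shared disks are needed.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST,
            None, None, 0)

src.close()
dst.close()
```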

Virtualization Popular at the Threshold
Not every data center is a candidate for virtualization, however. Data centers with a significant number of servers - somewhere in the range of 20-50 or more - are the ones seriously beginning to consider turning those servers into virtual machines. First, these organizations can reap substantial cost savings and flexibility benefits, as described above. In addition, virtualizing servers makes them far easier to manage. The sheer physical work of administering dozens of physical servers can overwhelm data center staff; virtualization eases that burden by letting administrators run the same total number of servers on fewer physical machines.

New Storage Demands
Yet for all the clear benefits of virtualization, the trend toward greater adoption of virtual servers is placing stress on traditional data center infrastructure and storage devices.

In a sense, the problem is a direct result of the popularity of VMs in the first place. The very first virtual machines used the local storage inside the physical server, which made it impossible for administrators to migrate a virtual machine from one physical server to another with a more powerful CPU. Introducing shared storage - either a NAS or a SAN - to the VM hosts solved this problem, and its success paved the way for stacking on more and more virtual machines, all of which came to live in that shared storage. Eventually the situation matured into today's server virtualization scenario, where all physical servers and VMs are connected to the same storage.

The problem? Data congestion.

A single point of entry becomes a single point of failure very quickly, and with all data flow forced through a single gateway, traffic bogs down during periods of high demand. With the number of VMs and the quantity of data projected to grow to ever-dizzier levels, it's clear that this approach to storage architecture must be improved. Like the Red Queen, the architecture has to keep running just to keep pace with data growth.
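A rough way to see why the single gateway hurts is to model it as a queue. The toy comparison below uses a textbook M/M/1 approximation; all request rates and capacities are made-up illustrative numbers, not measurements of any product.

```python
# Toy M/M/1 queueing comparison: one storage gateway vs. several entry points.
# All numbers are illustrative assumptions, not measurements.

def mm1_mean_latency(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system (seconds) for an M/M/1 queue; requires arrivals < capacity."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrivals exceed capacity")
    return 1.0 / (service_rate - arrival_rate)

TOTAL_LOAD = 900.0         # requests per second arriving from all VMs
GATEWAY_CAPACITY = 1000.0  # requests per second one gateway can serve

# Single point of entry: everything funnels through one gateway at 90% load.
single = mm1_mean_latency(TOTAL_LOAD, GATEWAY_CAPACITY)

# Four entry points with the load spread evenly: each gateway sees 225 req/s.
spread = mm1_mean_latency(TOTAL_LOAD / 4, GATEWAY_CAPACITY)

print(f"One gateway:   {single * 1000:.1f} ms average latency")   # ~10.0 ms
print(f"Four gateways: {spread * 1000:.1f} ms average latency")   # ~1.3 ms
```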

Learning from Early Adopters
Early adopters of virtualized servers - such as major service providers and telcos - have already encountered this issue and are taking steps to reduce its impact. As other organizations virtualize their data centers, they will run into it as well. It's a growing problem.

Yet there is hope. Organizations seeking to maximize the benefits of virtualization while avoiding the data congestion of traditional scale-out environments can keep their storage architectures in step with their growing VM usage - specifically, by removing the single point of entry. Typical NAS and SAN solutions today have just a single gateway controlling the flow of data, which leads to congestion when demand spikes. Instead, organizations should look for solutions that offer multiple data entry points and distribute the load evenly among all storage servers. That way the system retains optimal performance and reduces lag, even when it is being accessed by many users at once.
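What "multiple entry points" looks like from the client side can be sketched simply: each VM host picks a gateway for a given request so that, across many requests, the load spreads evenly and no single gateway sits in every path. The snippet below is an illustration only; the gateway names and hashing scheme are assumptions, and real scale-out storage products handle this internally.

```python
import hashlib

# Hypothetical storage entry points; a real system would discover these.
GATEWAYS = ["gw-1.storage.local", "gw-2.storage.local",
            "gw-3.storage.local", "gw-4.storage.local"]

def pick_gateway(object_id: str) -> str:
    """Deterministically map an object to one of several entry points.

    Hashing the object ID spreads requests roughly evenly across gateways
    and keeps all requests for the same object going to the same place.
    """
    digest = hashlib.sha256(object_id.encode()).hexdigest()
    return GATEWAYS[int(digest, 16) % len(GATEWAYS)]

# Each VM host can compute this locally - no single gateway in the data path.
for vm_disk in ("vm-042/disk0", "vm-117/disk0", "vm-203/disk1"):
    print(vm_disk, "->", pick_gateway(vm_disk))
```

In practice, scale-out systems tend to use consistent hashing or a distributed metadata service rather than the simple modulo shown here, so that adding or removing a gateway only remaps a small fraction of the data.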

While this approach represents the most straightforward fix, the next generation of storage architecture suggests another alternative as well.

Merging Computing and Storage
Arising to meet the storage challenge of scale-out virtual environments, the practice of running VMs inside the storage nodes themselves (or running the storage inside the VM hosts) - thereby turning each node into a combined compute and storage node - is fast becoming the next generation of storage architecture.

Essentially, in this approach the entire architecture is flattened. With traditional shared storage in a SAN, the VM hosts sit on top of the storage layer, which effectively acts as one huge storage system with a single entry point. To solve the data congestion problems this creates, some organizations are moving away from that two-layer architecture toward a single layer in which both the virtual machines and the storage run on the same nodes.
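As a purely illustrative sketch (not any particular vendor's design), the flattened model can be pictured as nodes that each hold both a slice of the storage and some of the VMs, with a new VM preferably placed on the node that already stores its data so that reads stay local.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A flattened, hyperconverged-style node: storage and compute together."""
    name: str
    stored_volumes: set = field(default_factory=set)   # VM disk volumes held here
    running_vms: list = field(default_factory=list)

def place_vm(vm: str, volume: str, nodes: list) -> Node:
    """Prefer the node that already stores the VM's volume (data locality);
    otherwise fall back to the least-loaded node."""
    for node in nodes:
        if volume in node.stored_volumes:
            node.running_vms.append(vm)
            return node
    node = min(nodes, key=lambda n: len(n.running_vms))
    node.stored_volumes.add(volume)
    node.running_vms.append(vm)
    return node

cluster = [Node("node-a", {"vol-1"}), Node("node-b", {"vol-2"}), Node("node-c")]
print(place_vm("vm-1", "vol-2", cluster).name)  # -> node-b (its data is already there)
print(place_vm("vm-2", "vol-9", cluster).name)  # -> node-a (least loaded, gets the new volume)
```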

Looking Ahead
The trend towards greater virtualization of infrastructure is not slowing down anytime soon. Indeed, more and more companies will adopt virtualization, and they will run into the performance lag issues described above. Yet by taking a cue from the early adopters who developed the best practices described here, organizations can build a successful scale-out virtual environment that maximizes performance while keeping infrastructure costs low.

More Stories By Stefan Bernbo

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, he has designed and built numerous enterprise-scale data storage solutions designed to be cost-effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.
