New Storage Solutions: Hyperconverged and Hyperscale

Enterprises today need flexible, scalable storage approaches if they hope to keep up with rising data demands

Mobile devices. Cloud-based services. The Internet of Things. What do all of these trends have in common? They are some of the factors driving the unprecedented growth of data today. And where data grows, so does the need for data storage. The traditional method of buying more hardware is cost-prohibitive at the scale needed. As a result, a new storage paradigm is required.

Enterprises today need flexible, scalable storage approaches if they hope to keep up with rising data demands. Software-defined storage (SDS) offers the needed flexibility. In light of the varied storage and compute needs of organizations, two SDS options have arisen: hyperconverged and hyperscale. Each approach has its distinctive features and benefits, which are explored below.

Making Important Distinctions
To appreciate these new SDS options, it is helpful to look back at what came before them. Converged storage combines storage and computing hardware to speed delivery and minimize the physical space required in virtualized and cloud-based environments. This was an improvement over the traditional storage approach, where storage and compute functions were housed in separate hardware. The goal was to improve data storage and retrieval and to speed the delivery of applications to and from clients.

A "building block" model is the basis of converged storage infrastructure. That is, it uses a hardware-based approach comprised of discrete components, each of which can be used on its own for its original purpose. Converged storage is not centrally managed and does not run on hypervisors; the storage is attached directly to the physical servers.

Hyperconverged storage infrastructure, by contrast, is built on a software-defined approach. All components are converged at the software level and cannot be separated out. This model is centrally managed and virtual machine-based. The storage controller and array are deployed on the same server, and compute and storage are scaled together. Each node has compute and storage capabilities. Data can be stored locally or on another server, depending on how often that data is needed.
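
To make that layout concrete, here is a minimal Python sketch of the idea. It is purely illustrative: the class names, node sizes and placement rule are hypothetical assumptions, not any vendor's implementation. It models a cluster built from one kind of building block, where adding a node grows compute and storage together and data is either kept on the local node or hashed to another one.

```python
# Illustrative sketch only: a toy model of hyperconverged scaling.
# Node sizes and the placement rule are hypothetical.
from dataclasses import dataclass, field


@dataclass
class HyperconvergedNode:
    """A single building block: compute and storage live on the same server."""
    node_id: int
    cpu_cores: int = 16               # hypothetical per-node compute
    storage_tb: int = 24              # hypothetical per-node storage
    local_objects: set = field(default_factory=set)


class HyperconvergedCluster:
    """One node type; adding a node grows compute and storage in lockstep."""

    def __init__(self) -> None:
        self.nodes: list[HyperconvergedNode] = []

    def add_node(self) -> None:
        self.nodes.append(HyperconvergedNode(node_id=len(self.nodes)))

    def place(self, object_key: str, client_node: int, hot: bool) -> int:
        """Keep frequently used data on the requesting node; hash cold data out."""
        target = client_node if hot else hash(object_key) % len(self.nodes)
        self.nodes[target].local_objects.add(object_key)
        return target

    def capacity(self) -> tuple[int, int]:
        return (sum(n.cpu_cores for n in self.nodes),
                sum(n.storage_tb for n in self.nodes))


cluster = HyperconvergedCluster()
for _ in range(4):                    # scaling = adding identical building blocks
    cluster.add_node()
print(cluster.capacity())             # compute and storage grow together: (64, 96)
cluster.place("vm-image-01", client_node=0, hot=True)   # hot data stays on node 0
```

The point of the sketch is the scaling unit: because there is only one node type, compute capacity and storage capacity always grow in lockstep.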

Today's data demands require agility and flexibility, and that is exactly what hyperconverged storage offers. It also promotes cost savings. Organizations are able to use commodity servers, since software-defined storage works by taking features typically found in hardware and moving them to the software layer. Organizations that need 1:1 scaling of compute and storage are well served by the hyperconverged approach, as are those that deploy VDI environments. The hyperconverged model is useful in many business scenarios, functioning as a building block that behaves the same wherever it is deployed; it's just a question of how many building blocks a data center needs.

If hyperconverged storage is so flexible and efficient, why would anyone need hyperscale storage? It's a new storage approach created to address differing storage needs. Hyperscale computing is a distributed computing environment in which the storage controller and array are separated. As its name implies, hyperscale is the ability of an architecture to scale quickly as greater demands are made on the system. This kind of scalability is required in order to build Big Data or cloud systems. It's what Internet giants like Amazon and Google use to meet their vast storage demands. However, software-defined storage now enables many enterprises to enjoy the benefits of hyperscale.
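
For contrast with the hyperconverged sketch above, here is an equally hypothetical Python sketch of the hyperscale layout, in which the storage layer and the compute layer are separate pools of commodity servers that scale independently. The node sizes and class names are assumptions made for illustration only.

```python
# Illustrative sketch only: storage and compute are decoupled pools.
from dataclasses import dataclass


@dataclass
class StorageNode:
    storage_tb: int = 48              # hypothetical dense storage server


@dataclass
class ComputeNode:
    cpu_cores: int = 32               # hypothetical compute server


class HyperscaleCluster:
    """Storage controller/array and compute are grown separately."""

    def __init__(self) -> None:
        self.storage_pool: list[StorageNode] = []
        self.compute_pool: list[ComputeNode] = []

    def add_storage(self, count: int = 1) -> None:
        self.storage_pool.extend(StorageNode() for _ in range(count))

    def add_compute(self, count: int = 1) -> None:
        self.compute_pool.extend(ComputeNode() for _ in range(count))

    def capacity(self) -> tuple[int, int]:
        return (sum(n.cpu_cores for n in self.compute_pool),
                sum(n.storage_tb for n in self.storage_pool))


# A data-heavy workload can grow the storage pool without buying unused compute.
cluster = HyperscaleCluster()
cluster.add_compute(2)
cluster.add_storage(10)
print(cluster.capacity())             # (64 cores, 480 TB): the pools scale apart
```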

For instance, organizations that choose hyperscale storage can reduce their total cost of ownership. That's because commodity servers are typically used, and a data center can have millions of virtual servers without the added expense that this number of physical servers would require. Data center managers want to get rid of the refrigerator-sized disk shelves used in NAS and SAN solutions, which are difficult to scale and very expensive. With hyper solutions, it is easy to start small and scale up as needed. Using standard servers in a hyper setup creates a flattened architecture. Less hardware needs to be bought, and the hardware that is bought costs less. Hyperscale enables organizations to buy commodity hardware. Hyperconverged goes one step further by running both elements - compute and storage - in the same commodity hardware. It becomes a question of how many servers are necessary.
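
A back-of-the-envelope sketch can make the "start small and scale up" argument concrete. All prices and capacities below are made-up assumptions used only to show the shape of the comparison; they are not vendor figures.

```python
# Hypothetical prices and capacities, chosen only to make the arithmetic
# concrete; real quotes vary widely.
COMMODITY_NODE_COST = 5_000           # hypothetical 2U commodity server
COMMODITY_NODE_TB = 48
ARRAY_SHELF_COST = 120_000            # hypothetical proprietary NAS/SAN shelf
ARRAY_SHELF_TB = 500


def scale_out_cost(required_tb: int) -> int:
    """Buy only as many commodity nodes as current demand needs."""
    nodes = -(-required_tb // COMMODITY_NODE_TB)      # ceiling division
    return nodes * COMMODITY_NODE_COST


def scale_up_cost(required_tb: int) -> int:
    """Capacity arrives in large, expensive increments."""
    shelves = -(-required_tb // ARRAY_SHELF_TB)
    return shelves * ARRAY_SHELF_COST


for tb in (50, 200, 500):
    print(f"{tb} TB: scale-out ${scale_out_cost(tb):,} vs scale-up ${scale_up_cost(tb):,}")
# With these made-up numbers, a 50 TB starting point costs $10,000 scaled out
# versus $120,000 for a single shelf; that gap is the "start small" advantage.
```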

The Best of Both Worlds
In a hyperconverged approach, there is essentially one box with everything in it; hyperscale has two sets of boxes, one set of storage boxes and one set of compute boxes. Which model to use depends on what the architect wants to do, according to the needs of the business. A software-defined storage solution can take over all the hardware and turn it into a type of appliance, or it can run as a virtual machine - which makes it a hyperconverged configuration.
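
One way to picture that choice is as two deployment descriptions of the same software-defined layer. The snippet below is illustrative only; it is not a real product's configuration format, and the host names are hypothetical.

```python
# Illustrative only: two ways to deploy the same software-defined storage layer.
hyperscale_deployment = {
    "sds_runs_as": "dedicated appliance",          # SDS owns storage-only servers
    "storage_nodes": ["stor-01", "stor-02"],       # one set of boxes for storage
    "compute_nodes": ["app-01", "app-02"],         # another set for applications
}

hyperconverged_deployment = {
    "sds_runs_as": "virtual machine",              # SDS runs beside the workloads
    "nodes": ["node-01", "node-02"],               # every box hosts VMs and storage
}

for name, cfg in (("hyperscale", hyperscale_deployment),
                  ("hyperconverged", hyperconverged_deployment)):
    print(f"{name}: storage layer deployed as {cfg['sds_runs_as']}")
```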

In light of the exponential increase in storage demand, it is comforting to know that data center architects don't have to choose one of these solutions to the exclusion of the other. These architectures can be combined to accommodate specific needs at specific times. Storage needs are fluid, which makes flexible storage solutions ideal. In addition, hyper solutions save money by not requiring expensive hardware. Together, hyperconverged and hyperscale storage approaches represent the best of both worlds.

More Stories By Stefan Bernbo

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, he has designed and built numerous enterprise-scale data storage solutions designed to be cost-effective for storing huge data sets. From 2004 to 2010 Stefan worked within this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.
