Winston Damarillo


The Practice of Virtualization

The evolution of virtualized resources from perceived slices of servers to perceived slices of full data center infrastructure

Amid the frenzy of moving to the cloud, virtualization is often treated as an item to check off the list - when in fact managing virtualization is an ongoing practice that often determines the success or failure of IT 3.0 initiatives. While the benefits of virtualization seem fairly clear, implementing cloud computing approaches simply trades one set of operational headaches for another: the management of virtualized resources is exponentially more complex than wrangling physical infrastructure. And contrary to the oft-cited claim that the Amazon outage proves the cloud is unreliable, all it actually proves is that a poorly implemented cloud is unreliable. This puts much more onus on the IT organization to evaluate risks and to understand how to properly plan and monitor virtualized resources - from networking and security groups to proper load balancing across regions and appropriate Role-Based Access Control (RBAC). What are the most critical elements of planning when considering virtualizing your enterprise infrastructure?

First, a bit of context on the transition happening in virtualization as new technology solutions come online. IT organizations have been using server virtualization to move non-critical workloads into the cloud, but security and control requirements, combined with the hefty cost of dedicated hardware, have prevented many enterprises from expanding beyond small private deployments. If we begin our analysis with private clouds, it becomes obvious that even virtualized servers networked in a monolithic architecture are not reaping the benefits of truly efficient computing.

Early designs of cloud computing focused on blades with an independent Storage Area Network (SAN) architecture. This blueprint consolidated CPU and memory into dense blade server configurations connected to large Storage Area Networks via several high-speed networks, typically a combination of Fibre Channel and 10 Gigabit Ethernet. This has been the typical blueprint delivered by traditional "off the shelf" pre-built virtualization infrastructure, especially in enterprise private cloud configurations.

More recently, hardware vendors have been shipping modular commodity hardware in dense configurations, a model also known as hyperscale computing. The most noticeable difference is the availability of hard drives or SSDs within the modules. This enables the virtualization server to run virtual machines natively on the host without going out to the SAN, which results in a much less complex network and leads to several advantages. This model not only dramatically changes the price/performance equation of cloud computing but also substantially simplifies the deployment of cloud infrastructure by lowering the IT expertise required. The latest architecture also gives the implementation team more flexibility by affording a "Lego block"-like model of combining compute nodes and storage nodes into optimal units within a VLAN grouping of a deployment. This allows a large resource pool of compute and storage to be managed as individually controlled subsets of the data center infrastructure.
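
As a rough illustration of this "Lego block" composition, the following Python sketch (the class and node names are invented for this example, not taken from any vendor's tooling) models how compute and storage nodes might be grouped into a VLAN-scoped resource pool:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A commodity hyperscale module: compute, storage, or a mix."""
    name: str
    cpu_cores: int = 0
    storage_tb: int = 0

@dataclass
class VlanGroup:
    """An individually controlled subset of the data center:
    a VLAN-scoped combination of compute and storage nodes."""
    vlan_id: int
    nodes: list = field(default_factory=list)

    def add(self, node):
        self.nodes.append(node)

    def capacity(self):
        return (sum(n.cpu_cores for n in self.nodes),
                sum(n.storage_tb for n in self.nodes))

# Compose "Lego blocks" into one deployment unit on VLAN 100.
group = VlanGroup(vlan_id=100)
group.add(Node("compute-01", cpu_cores=32))
group.add(Node("compute-02", cpu_cores=32))
group.add(Node("storage-01", storage_tb=24))  # local SSDs, no SAN hop
print(group.capacity())  # (64, 24)
```

The point of the sketch is the unit of management: capacity is added or reclaimed one node at a time within a VLAN grouping, rather than by re-architecting a shared SAN.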

Thus, the revolution in virtualization is happening at the level of the data center, not the server. By implementing this modular architecture, the enterprise can take advantage of virtualized networking as well as storage and compute for maximum cost savings and optimized performance. If we go one step further and deploy this blueprint on best-of-breed commodity hardware, pricing can begin to rival Amazon Web Services (AWS) offerings for dedicated hardware. This innovation suggests that the best enterprise cloud solution is the virtual private data center rather than the less efficient virtual private server (VPS).

The issue that surfaces once a fully virtualized data center is architected is how IT operations can mediate, configure, and monitor these complex virtual resources. Given that the cloud ecosystem is still relatively immature, staying abreast of the myriad technologies and best practices presents a challenge to the enterprise IT organization. Broad experience in managing large-scale virtualized infrastructure is scarce, so many enterprises are faced with adjusting their staffing or outsourcing. A simple but powerful disruptor is evolving: the cloud orchestration layer and, in particular, the Cloud User Interface (CUI).

The last quarter of 2011 saw a spate of both traditional IT players, such as HP and Dell, and upstarts, such as NetApp, putting forth reference architectures for converged data center infrastructure, combining the software necessary to manage these complex virtualized systems with the hardware. Increasingly, the UI may be an underrated element in predicting the successful adoption and management of cloud computing in the enterprise. A good user interface for cloud orchestration is not only transformative in maximizing the use of cloud features like load balancing and cross-regional failover planning, but is also becoming a requirement when selecting the right solution.

Technical users may appreciate intuitive web-based control panels and dashboards but might also demand the additional control afforded by Command Line Interface (CLI) tools and Application Programming Interfaces (APIs). Added interface complexity can significantly impact the productivity of sysadmins and developers, as well as uptake and retraining in general. Virtualization by definition requires a level of abstraction over the details of physical servers, virtual machines, and networking. There are occasional complaints that cloud orchestration solutions appear to be a "black box" that removes control over the underlying technology and is therefore less trustworthy.
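
To illustrate why technical users value API access alongside the dashboard, here is a minimal sketch of provisioning a VM through a REST-style orchestration API rather than the web console. The endpoint, payload fields, and token are hypothetical placeholders, not any particular vendor's API:

```python
import json
import urllib.request

# Hypothetical orchestration endpoint and API token - placeholders only.
API_URL = "https://cloud.example.com/api/v1/instances"
API_TOKEN = "REPLACE_ME"

def launch_instance(image, size, security_group):
    """Request a new VM: the same action a dashboard click would perform."""
    payload = json.dumps({
        "image": image,
        "size": size,
        "security_group": security_group,
    }).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": "Bearer " + API_TOKEN,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Scripting the same call in a loop makes bulk or repeatable
# provisioning trivial - something a point-and-click UI cannot match.
# launch_instance("ubuntu-lts", "m1.small", "web-tier")
```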

But much as with an operating system, users rarely benefit from code-level access to complex cloud technology (though the CLI and APIs make this available from the more transparent providers), and comprehension can actually increase through the selective presentation of contextual features. In fact, there is less room for error when architecting and launching cloud resources if the interface prompts for fault-tolerant configurations, such as load balancing and keeping mission-critical applications running across security groups. The one overlooked call for vigilance in the practice of virtualization is the management of access control mechanisms, which grant different groups of consumers access to the features they need. The more intuitive the interface, the more important it is to set up permissions correctly - so you can be sure the right cloud services are released into the enterprise.
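
As a concrete sketch of those permission controls, an RBAC check in an orchestration layer can be as simple as mapping roles to allowed actions. The roles and action names here are invented for illustration:

```python
# Hypothetical role-to-permission mapping for a cloud orchestration layer.
ROLE_PERMISSIONS = {
    "developer": {"launch_vm", "stop_vm", "view_metrics"},
    "operator":  {"launch_vm", "stop_vm", "view_metrics",
                  "configure_load_balancer", "manage_security_groups"},
    "auditor":   {"view_metrics"},
}

def is_allowed(role, action):
    """Return True if the given role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Developers can self-provision sanctioned images...
assert is_allowed("developer", "launch_vm")
# ...but cannot rewire the network underneath them.
assert not is_allowed("developer", "manage_security_groups")
```

However intuitive the dashboard, it is this mapping, set up once and reviewed regularly, that determines which cloud services each group can actually release into the enterprise.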

Finally, assuming virtualization has empowered your IT organization to optimize hardware utilization, save time in managing resources by allowing developers to self-provision sanctioned virtual machine images, and focus more on decreasing time-to-market, how should the enterprise manage the growth, forecasting, and elasticity of virtualized resources? Where is the overlap with physical hardware? The idea of cloud bursting has been discussed increasingly with the explosive success of Amazon's EC2. And the beauty of a virtual resource is how ephemeral it seems: if you have variable needs, you can simply add more compute or storage on demand and scale back as the peak load diminishes. But the issue of security remains.
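
As a simplified sketch of that elasticity decision (the thresholds and units here are arbitrary, chosen only to make the logic concrete), a scaling policy boils down to comparing current load against local capacity and bursting to external resources only when the local pool is exhausted:

```python
def scaling_decision(current_load, local_capacity, burst_enabled=True):
    """Decide where the next increment of capacity should come from.

    current_load and local_capacity are in abstract 'compute units'.
    """
    headroom = local_capacity - current_load
    if headroom > local_capacity * 0.2:   # >20% spare: no action needed
        return "steady"
    if headroom > 0:                      # nearly full: grow locally first
        return "scale_local"
    if burst_enabled:                     # local pool exhausted: burst out
        return "burst_to_external_cloud"
    return "queue_or_degrade"

print(scaling_decision(50, 100))   # steady
print(scaling_decision(95, 100))   # scale_local
print(scaling_decision(120, 100))  # burst_to_external_cloud
```

The security question raised above lives in that last branch: bursting only makes sense if the external resources meet the same network isolation and access control requirements as the local pool.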

Recently, Morphlabs announced an innovation in virtual private cloud computing - mCloud Dynamic Resource Scaling technology - that allows enterprises to scale compute and storage on dedicated hardware either locally or via a carrier Ethernet exchange. This introduces true cloud bursting potential, as Dynamic Resource Scaling elastically allocates physical resources while managing virtualized needs. (See the sidebar on trends in hybrid cloud computing.)

As we head into 2012, with cloud computing becoming more pervasive, the changes in architecture from monolithic configurations to more modular infrastructure will allow users greater control to compose their own data center resources with the precision afforded by cloud orchestration software. The evolution of virtualized resources from perceived slices of servers to perceived slices of full data center infrastructure will mean a significant shift for the overall utilization of computing resources, impacting time-to-market for new applications and the speed of innovation in addition to bottom-line savings. Each of these shifts should be explored in more detail, as they have implications ranging from energy usage to capacity planning practices. But identifying these as major forces in the process of managing virtualized resources is a big step in the right direction.

Sidebar: Hybrid Cloud Computing
According to a recent Forrester study, "Cloud-using companies are starting to accept cloud bursting as a means to help further reduce costs for their cloud environments and increase efficiency. The dynamic combination of external cloud resources with spare capacity on-premises is a key strategy to achieve this goal." [The Next Maturity Step for Cloud Computing Management, a commissioned study conducted by Forrester Consulting on behalf of Morphlabs, October 2011] The findings of Forrester's Custom Technology Adoption Profile are available for download at http://www.morphlabs.com/forrester-report-the-next-maturity-step-for-cloud-computing/.

More Stories By Winston Damarillo

Winston Damarillo is a proven serial entrepreneur with a track record of building successful technology start-ups. His past entrepreneurial endeavors include several companies that grew to profitability and were subsequently sold to major IT players. He was elected a Young Global Leader of the World Economic Forum and serves as a Commissioner on the TechAmerica Foundation Commission on the Leadership Opportunity in the U.S. Deployment of the Cloud (CLOUD2)/State and Local Government (SLG), which aims to enable state and local governments to better serve their citizens by optimizing their use of cloud technologies and solutions. He is also a trustee of the Philippine Development Foundation (PhilDev), an organization committed to social development initiatives for Filipinos in the Philippines and around the world through programs and partnership initiatives in science, technology, innovation, and entrepreneurship.

Winston holds a BS in Industrial and Mechanical Engineering from De La Salle University.
