
Not all “virtualizations” are equal

This blog post is adapted from my book, titled VMware vRealize Operations Performance and Capacity Management. It is published by Packt Publishing. You can buy it at Amazon or Packt.

In the previous post, I shared that virtualization has had large ramifications in the IT industry. The reason it can create such a massive impact is that it is far more complex than it appears on the surface.

There are plenty of misconceptions about virtualization, especially among nontechnical IT folks. CIOs who have not yet felt the strategic impact of virtualization (be it a good or a bad experience) tend to carry these misconceptions. Although virtualization looks similar to the physical world on the surface, it is completely re-architected under the hood.

So let’s take a look at the first misconception: what exactly is virtualization? Because it is an industry trend, virtualization is often generalized to include other technologies that are not virtualization. One reason is that the use cases are similar: they can address the same business requirements. A popular technology often branded as virtualization is partitioning; once it is parked under the virtualization umbrella, the assumption follows that both should be managed in the same way. Since the two are in fact different, customers who try to manage both with a single piece of management software struggle to do it well.

Partitioning and virtualization are two very different architectures in computer engineering, resulting in major differences in functionality. They are shown in the following figure:

Virtualization is not partitioning

With partitioning, there is no hypervisor that virtualizes the underlying hardware. There is no software layer separating the VM and the physical motherboard. There is, in fact, no VM. You cannot have a VM because there is no virtual motherboard. This is why some technical manuals for partitioning technologies do not even use the term VM; they use the term domain or partition instead.

There are two variants of partitioning technology, hardware-level and OS-level, covered in the following bullet points:

  • In hardware-level partitioning, each partition runs directly on the hardware. It is not virtualized. This is why it is more scalable and incurs less of a performance hit. Because it is not virtualized, it has to be aware of the underlying hardware. As a result, it is not fully portable: you cannot move the partition from one hardware model to another. The hardware has to be purpose-built to support that specific version of the partition. The partitioned OS still needs all the hardware drivers and will not work on other hardware if the compatibility matrix does not match. Even the version of the OS matters, just as it does for a physical server.
  • In OS-level partitioning, there is a parent OS that runs directly on the server motherboard. This OS then creates OS partitions, where other “OSes” can run. I use the quotes because it is not exactly a full OS that runs inside each partition. The OS has to be modified and qualified to be able to run as a Zone or container, so application compatibility is affected. This is very different from a VM, where there is no application compatibility issue because the hypervisor is transparent to the Guest OS (see the sketch after this list).
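
To make the OS-level case concrete, here is a minimal Python sketch of my own (not from any vendor manual). Run it on the parent OS and again inside a Zone or container: both report the same kernel, because the partition has no kernel of its own, whereas each VM boots and reports its own kernel.

    # A minimal sketch. In an OS-level partition (a Solaris Zone or a Linux
    # container) there is no hypervisor and no virtual motherboard, so the
    # "guest" runs on the parent OS kernel.
    import platform

    print("Kernel release:", platform.release())  # identical on host and in the partition
    print("Hostname:      ", platform.node())     # only this differs per partition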

We covered the difference from an engineering point of view. Does it translate into different data center architecture and operations? Take availability, for example. With virtualization, every VM is protected by HA (High Availability): 100 percent protection, achieved without any awareness inside the VM. Nothing needs to be done at the VM layer; there is no shared or quorum disk and no heartbeat network. With partitioning, the protection has to be configured manually, one by one, for each LPAR or LDOM, because the underlying platform does not provide it. With virtualization, you can even go beyond five 9s toward 100 percent availability with Fault Tolerance. This is not possible in the partitioning approach, as there is no hypervisor to replay the CPU instructions. And because it is virtualized and transparent to the VM, you can turn the Fault Tolerance capability on and off on demand (the sketch below illustrates this). Fault Tolerance is defined entirely in software.
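
To show how literally “defined in software” this is, here is a minimal sketch using pyVmomi (the Python SDK for the vSphere API). The vCenter address, credentials, and VM name are placeholders I made up; the Fault Tolerance calls are the ones exposed by the vSphere Web Services API, and the sketch assumes an FT-capable cluster.

    # A minimal sketch: toggling Fault Tolerance on a running VM via the
    # vSphere API. No change is made inside the guest OS at any point.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask

    ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
    si = SmartConnect(host="vc.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    vm = content.searchIndex.FindByDnsName(dnsName="app01", vmSearch=True)

    # Turn FT on: the platform spawns a secondary copy of the VM on another host.
    WaitForTask(vm.CreateSecondaryVM_Task())

    # Turn it off again on demand; the guest OS is never aware of either step.
    WaitForTask(vm.TurnOffFaultToleranceForVM_Task())

    Disconnect(si)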

Another area of difference between partitioning and virtualization is Disaster Recovery (DR). With partitioning technology, the DR site requires another instance to protect the production instance. It is a different instance, with its own OS image, hostname, and IP address. Yes, we can do a SAN boot, but that means another LUN to manage, zone, replicate, and so on. That style of DR does not scale to thousands of servers; to make it scalable, it has to be simpler. Virtualization takes a very different approach. The entire VM fits inside a folder; it becomes like a document, and we migrate the entire folder as if it were one object (a rough illustration follows below). This is what vSphere Replication in Site Recovery Manager does: it replicates per VM, with no need to worry about SAN boot. The entire DR exercise, which can cover thousands of virtual servers, is completely automated, with audit logs generated automatically. Many large enterprises have automated their DR with virtualization. There is probably no company that has automated DR for its entire LPAR or LDOM estate.
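
As a rough illustration of why “the VM is a folder” makes replication simple, here is a Python sketch. The datastore paths are invented placeholders, and real products such as vSphere Replication and SRM work at the vSphere layer per VM rather than with a plain file copy.

    # A VM's entire state lives as ordinary files in one folder, typically:
    #   app01.vmx    virtual hardware configuration
    #   app01.vmdk   virtual disk descriptor (plus app01-flat.vmdk for the data)
    #   app01.nvram  BIOS/firmware state
    #   vmware.log   the VM's log
    # Copying the folder therefore copies the whole server.
    import shutil

    shutil.copytree("/vmfs/volumes/prod-ds/app01", "/vmfs/volumes/dr-ds/app01")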

I’m not saying partitioning is an inferior technology. Every technology has its advantages and disadvantages. Before I joined VMware, I was a Sun Microsystems SE for five years, so I’m aware of the benefits of UNIX partitioning. I’m just trying to dispel the myth that partitioning equals virtualization.

As both technologies evolve, the gaps get wider. As a result, managing a partition is different from managing a VM. A management solution that claims to manage both needs functionality specific to each technology.

Journey into the Virtual World

This blog post is adapted from my book, titled VMware vRealize Operations Performance and Capacity Management. It is published by Packt Publishing. You can buy it at Amazon or Packt.

Who could have predicted, many blue moons ago, that a seemingly simple technology, a hypervisor and its management console, would have such large ramifications for the IT industry? Virtualization is turning a lot of things upside down and breaking down silos that have existed for decades in large IT organizations. It is also changing the strategy of many large IT vendors. And it has made possible cloud computing, an even bigger change in the industry.

The change caused by virtualization is much larger than the changes brought about by previous technologies. Over the past several decades, we transitioned from mainframes to the client/server model and then to the web-based model. These are commonly agreed upon as the main evolutions in IT architecture. However, they were primarily technological changes: they changed the architecture, yes, but they did not change operations in a fundamental way. Neither the client/server shift nor the web shift was described as a “journey”; there was no journey to the client/server model, just an architectural change. With virtualization, however, we talk about the virtualization journey, because the changes are massive and involve a lot of people.

Gartner correctly predicted the impact of virtualization in 2007. That was seven years ago, a long time in IT, and we are still in the midst of the journey, which shows how pervasive the change is. Here is a summary of the Gartner article:

“Virtualization will be the most impactful trend in infrastructure and operations through 2010, changing:

  • How you plan
  • How, what and when you buy
  • How and how quickly you deploy
  • How you manage
  • How you charge
  • Technology, process, culture”

Notice how Gartner talks about change in culture. Virtualization has a cultural impact too. In fact, if your virtualization journey is not moving fast enough, look at your organization’s structure and culture. Most people I speak with tell me the problem is not technology; it is politics and culture. Have you broken the silos? Do you empower your people to take risks and do things that have never been done before? Are you willing to flatten the organization chart?

So why exactly is virtualization causing such a fundamental shift? To understand this, we need to go back to the very basics: what exactly virtualization is. It is pretty common for folks who are not hands-on with infrastructure to have misconceptions about it.

Take a look at the following comments. Have you seen them in your organization or your customers’ organization?

  • “VM is just Physical Machine virtualized. Even VMware said the Guest OS is not aware it’s virtualized and it does not run differently.”
  • “It is still about monitoring CPU, RAM, Disk, Network. No difference.”
  • “It is a technology change. Our management process does not have to change.”
  • “All of these VMs must still feed into our main Enterprise IT Management system. This is how we have run our business for decades and it works.”

The last one was actually told to me by a large customer.

If only life were that simple, we would all be 100 percent virtualized and have no headaches! Virtualization has been around for many years, and yet most organizations have not mastered it. In fact, most customers still struggle with the basics, such as performance and capacity management.

In the next post, I will explain what virtualization is. That post should be useful for colleagues who may mistake virtualization for another technology, such as partitioning.

Looking back. Looking forward

Looking back, 2014 was a good year for IT professionals earning a living with virtualization as their main skill. Virtualization and cloud computing continue to grow. While there is still a huge installed base of physical servers, x86-based VMs continue to take share in the datacenter. Physical servers in this context include both x86 and non-x86 (UNIX and mainframe).

Non-x86 still accounted for around US$8 billion a year in 2014, so as virtualization professionals we still have plenty of targets to work on; it will take many good years. Migration to x86 is certainly more complex than a standard P2V, as application migration is required. But that also makes it more suitable for seasoned and experienced IT pros; if it were easy, your job would have been given to a much more junior guy. There is already a post that drives that message home, so I’m not going to elaborate.

2014 saw virtualization enter the realm of storage and network. Yes, it certainly started before 2014, but in 2014 it became much clearer that storage and network will follow server: they will all be virtualized. I think it won’t take storage and network very long, as the vendors and the industry have learned from server virtualization. This is good news for us virtualization pros, as it means the job scope is widening and the game becomes more interesting.

Looking forward to 2015, it does not take a prediction to see that NSX, VSAN, and EVO will grow fast; it is now a matter of projection. At a personal level, I had customers buying NSX and VSAN in 2014. From my discussions with customers and partners, more will buy in 2015 as the technology becomes more common. The upcoming release will also help. I like what I see in the 2015 roadmap.

Related to virtualization, I see Fibre Channel continuing its decline. 10 GbE and SSDs make the case for distributed storage compelling, especially at the entry level and midrange. Enhancements to vSphere in 2015 will also strengthen the case for IP storage.

There are also other areas that virtualization can address better. One of them is management. As the datacenter comes to have more VMs than physical servers, the management tools have to be built for virtualization; they have to understand the differences between VMs and physical servers well. In 2014, I worked with quite a number of customers to help them operationalize vCenter Operations 5.x. Once the dashboards were tailored for each role, they found v5 useful. vRealize Operations 6 brings many enhancements that will make the product even more indispensable; I like what I see in 6.0. Knowing the roadmap for 2015, I think it is only going to get better (more complete features, easier to use, and so on).

All in all, I am excited about 2015. The evolution continues, and I think the pace is accelerating. What started as server virtualization many years ago is now touching the entire datacenter.

What’s your take for 2015? How do you think it will impact your career? How do you plan to take advantage of the expansion of virtualization to storage, network, DR, management, and more? I’m keen to hear your thoughts. You can drop me an email at e1 at vmware dot com.

Have a great adventure in the bigger Virtual World in 2015!

[Update: I did a follow-up post here]