
Multi-hypervisor consideration

My customer was considering adding a second hypervisor, because the Analysts say it is a common practice. My first thought as an IT Architect is: just because others are doing it does not mean it is a good idea. Even if it is a good idea, and even a best practice, that does not mean it is good for you. There are many factors to consider that make your situation and conditions different from others’.

Before I proceed further, we need to be clear on the scope of the discussion. This is about multi-hypervisor vs single-hypervisor, not hypervisor A vs B. To me, you are better off running Hyper-V, Acropolis, or vSphere completely than running more than one. At least you are not doubling the complexity and having to master both. If you cannot even troubleshoot vSphere + NSX + VSAN properly, why add another platform into the mix?

To me, one needs to be grounded before making a decision. This allows us to be precise. Specific to the hypervisor, we need to know which cluster should run the 2nd hypervisor. Because of HA/DRS, a vSphere cluster is the smallest logical building block. I treat a cluster as one unit of computing. I make each member run the same ESXi version and patch level; hence running a totally different hypervisor in the same vSphere cluster is out of the question for me.

In order to pinpoint which cluster should run the 2nd hypervisor, you need to look at your overall SDDC architecture. This helps you ensure that the 2nd hypervisor fits well into your overall architecture. So start with your SDDC Architecture. You have that drawing, right? 😉

I have created a sample for 500 server VMs and 1000 VDI VMs. Review that and see where you can fit the 2nd hypervisor. For those with larger deployments, the sample I provided scales to 2000 server VMs and 5000 VDI VMs. That’s large enough for most customers. If yours is larger, you can use that as a pod.

It’s a series of posts, and I go quite deep. So grab your coffee and review it carefully.

I am happy to wait 🙂

Done reviewing? Great!

What you need to do now is to come up with your own SDDC Architecture. Likely, it won’t be as optimized and ideal as mine, as yours has to take into account brownfield reality.

You walk from where you stand. If you can't stand properly, don't walk.

Can you see where you can optimize and improve your SDDC? A lot of customers can improve their private cloud, gaining better capability while lowering cost and complexity, by adding storage virtualization and network virtualization. If what you have is just server consolidation, then it is not even an SDDC. If you already have an SDDC, but you’re far from AWS or Google levels of efficiency and effectiveness, then adding a 2nd hypervisor is not going to get you closer. Focus first on getting to SDDC or Private Cloud.

Even if you have the complete SDDC architecture, you can still lower cost by improving Operations. Review this material.

Have you designed your improved SDDC? If you have, there is a good chance that you have difficulty placing a 2nd hypervisor. The reason is that a 2nd hypervisor de-optimizes the environment. It actually makes the overall architecture more complex.


The hypervisor, as you have quickly realized, is far from a commodity. Here is a detailed analysis of why it is not a commodity.

This additional complexity brings us to the very point of the objective of a 2nd hypervisor. There are only 2 reasons why a customer adds a second vendor to their environment:

  • The first one does not work
  • The first one is too expensive

Optimizing SDDC Cost

In the case of VMware vSphere and SDDC, I think it is clear which one is the reason 🙂

So let’s talk about cost. With every passing year, IT has to deliver more with less. That’s the nature of the industry, and hence your users expect it from you. You’re providing an IT service. Since your vendors and suppliers are giving you more for less, you have to pass this on to the business.

If you look at the total IT cost, the VMware cost is a small component. If it were a big component, VMware’s revenue would rival that of many IT giants. VMware’s revenue is much smaller than many IT giants’, and I’m comparing against just the infrastructure revenue of these large vendors. For every dollar a CIO spends, perhaps less than $0.10 goes to VMware. While you can focus on reducing this $0.10 by adding a second hypervisor, there is an alternative. You can take the same virtualization technology that you’ve applied to Server and apply it to the other 2 pillars of the Data Center. Every infrastructure consists of just 3 large pillars: Server, Storage, and Network. Use the same principles and experience, and extend virtualization to the rest of your infrastructure. In other words, evolve from Server Consolidation to SDDC.

What if Storage and Network are not something you can improve? In a lot of cases, you can still optimize your Compute. If you are running 3-5 year old servers, going to the latest Xeon will help you consolidate more. If your environment is small, you can consider single-socket hosts. I wrote about it here. Reducing your socket count means fewer vSphere licenses. You can use the savings to improve your management capability with vRealize Operations Insight.
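
To make the socket arithmetic concrete, here is a rough sketch. All the numbers are hypothetical, it assumes the 2015-era per-socket licensing model, and it ignores the per-core performance gains of a newer Xeon:

```python
# Hypothetical consolidation example: old 2-socket hosts vs newer,
# denser single-socket hosts. All figures are illustrative only.
old_hosts, old_sockets, old_cores = 10, 2, 8        # 10 hosts x 2 sockets x 8 cores
new_sockets, new_cores = 1, 18                      # single-socket, 18-core host

cores_needed = old_hosts * old_sockets * old_cores  # 160 cores of capacity to replace
new_hosts = -(-cores_needed // new_cores)           # ceiling division -> 9 hosts

print("Sockets to license, before:", old_hosts * old_sockets)   # 20
print("Sockets to license, after :", new_hosts * new_sockets)   # 9
# The licenses you no longer need can help fund vRealize Operations Insight.
```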

Even without this article, a lot of you have realized that adding a 2nd hypervisor is not the right thing to do. I heard it directly from almost every VMware Architect, Engineer, and Administrator on the customers’ side. You’re trading cost from one bucket to another. This is because the hypervisor is not merely a kernel that can run VMs. That small piece of software is at the core of your SDDC. Everything on top of it depends on it, and leverages its API heavily. Everything below it is optimized for it. It is far from a commodity. If you have peers who still think it’s a commodity, I hope this short blog article helps.

Have an enjoyable journey toward your SDDC, whichever hypervisor it may be.

VMworld 2015 session MGT4973 preview


Next week at VMworld 2015, Sunny Dua and I are sharing what we have learned. As you can see from our blogs and book, we focus on Performance and Capacity Management. Essentially, we are sharing what we have learned from our engagements and projects in the past few years.

We have presented the material a few times, and know it will not fit within the 1-hour time slot. So this blog serves as a deeper dive into the slides.

Sunny has provided a good overview of the topic in his blog here, so please read that first. You can find the session here. The title is MGT4973 – Mastering Performance Monitoring and Capacity Planning. I will provide additional details in this blog.

The material follows this structure:

  1. A Technical Introduction, to set the focus and scope of the discussion, and to level-set the knowledge.
  2. The Dining Area. We use a restaurant analogy to drive the message that you need to focus on the customer first, and your IaaS second. If you take care of your customers well, and they are happy with your service, any problem you have in your IaaS is a secondary, internal matter.
  3. The Kitchen. This is your infrastructure layer, where VMware and the hardware reside.

Technical Introduction

The key component of this is the 2 distinct layers in your IaaS business. Please review that article before proceeding, as the rest of the material completely depends on this model.

If you are presenting this material back to your colleagues or management, who may not have deep technical knowledge of VMware, be prepared to whiteboard it. From experience, it takes around 2 – 4 hours for those without vSphere vmkernel scheduling knowledge. At one of my customers, it took me all day as the audience kept asking questions.

The Dining Area 

Here, I share the actual dashboards you need to help you ensure a good IaaS business, where customers are happy. It focuses on the customers, not the Infrastructure.

Detailed monitoring of a single VM

  • We start with a single VM, as we need to ensure we can handle 1 VM before we consider handling all VMs. A common use case here is a VM owner (your customer) complaining that his VM is slow. You need to come up with a dashboard that enables the help desk to quickly and easily identify where the problem is. Is it with the Infrastructure or with the VM? Is it CPU, RAM, Disk or Network? How severe is the problem? (A minimal sketch of this triage logic follows.)
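
To show what I mean by "quickly and easily identify", here is a minimal sketch of the triage logic behind such a dashboard. The metric names mirror common vSphere contention and utilization counters; the thresholds are illustrative only, not a recommendation:

```python
# A minimal sketch of the single-VM triage logic. Thresholds are illustrative.
def triage_vm(m):
    """m: dict of observed metrics for one VM over the complaint window."""
    findings = []
    # Contention counters point at the Infrastructure (the provider).
    if m.get("cpu_ready_pct", 0) > 5:
        findings.append("Infra: high CPU ready, the host/cluster is contended")
    if m.get("mem_swapped_kb", 0) > 0 or m.get("mem_ballooned_kb", 0) > 0:
        findings.append("Infra: the VM is ballooned or swapped by the host")
    if m.get("disk_latency_ms", 0) > 20:
        findings.append("Infra: storage latency is above tolerance")
    # Utilization counters point at the VM itself (the consumer).
    if m.get("cpu_usage_pct", 0) > 90:
        findings.append("VM: guest CPU is saturated, consider more vCPU or app tuning")
    if m.get("mem_usage_pct", 0) > 90:
        findings.append("VM: guest memory is under pressure")
    return findings or ["No obvious contention; look inside the guest OS and application"]

print(triage_vm({"cpu_ready_pct": 8, "cpu_usage_pct": 40, "disk_latency_ms": 3}))
```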

Large VMs Monitoring

  • We created this dashboard as over-provisioning is a common illness in virtual environments. If you want a healthy environment, you need to eradicate, or at least minimize, this bad practice. Reducing the size of someone’s VM is a delicate and lengthy process, so you want to focus on the largest VMs. Reducing one 16-vCPU VM to 4 vCPU gives you a better return than reducing three 8-vCPU VMs to 4 vCPU each. The total vCPU reduction is the same (12 vCPU in this example), but the ESXi vmkernel scheduler will have an easier task juggling the VMs, as the 16-vCPU VM needs 16 physical cores available (even if it’s just running the idle loop).
  • This dashboard visually tells you how deep and widespread the over-provisioning problem is. You get to see all the large VMs, and from here you can drill down into an individual VM and see if it’s really using all the resources allocated to it.

VM Right Sizing

  • There are 2 ends of the spectrum:
    • downsize
    • upsize
  • Upsize is generally not your concern 🙂. The VM owner will be the first to tell you his VM needs more resources. From your viewpoint, as someone looking after all the VMs, you can use Log Insight to quickly tell which VMs hit high CPU or RAM usage, and when.
  • Downsize is definitely your concern. It is tough to get anyone to give back their resources, especially since it incurs downtime. From my experience, some application teams want to see the actual utilization of each vCPU over the past month. You can create a dashboard that automatically plots the utilization of every vCPU (a small sketch of the idea follows this list). For more detailed coverage, review Chapter 8 of my book.
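
Here is a small sketch of the downsize report I just described. The VM names and numbers are placeholders; in practice the per-vCPU peaks would come from your monitoring tool’s export:

```python
# Placeholder data: peak utilization (%) of each vCPU over the past month.
vcpu_peak_pct = {
    "erp-app-01":  [85, 80, 78, 74],             # 4 vCPU, genuinely busy
    "intranet-01": [35, 40, 8, 5, 4, 3, 2, 2],   # 8 vCPU, mostly idle
}

def downsize_candidates(peaks, busy_threshold_pct=30):
    """Suggest a new vCPU count: keep only the vCPUs that actually got busy."""
    suggestions = {}
    for vm, per_vcpu in peaks.items():
        busy = sum(1 for p in per_vcpu if p >= busy_threshold_pct)
        if busy < len(per_vcpu):
            suggestions[vm] = max(busy, 1)        # never suggest 0 vCPU
    return suggestions

print(downsize_candidates(vcpu_peak_pct))         # {'intranet-01': 2}
```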

Excessive Usage

  • One characteristic of a virtual environment is sharing: the VMs share the physical resources. Excessive usage by 1-2 VMs can impact the overall IaaS performance. This is especially true for the components that you do not cap by default, which are Network throughput, Disk IOPS and Disk throughput.
  • This dashboard lets you see if there is excessive usage at any point in time. And if there is, you can drill down to find out which VM caused it (a minimal sketch of that drill-down follows this list).
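
As an illustration, here is a minimal sketch of that drill-down for Disk IOPS. The data shape, names, and numbers are placeholders for whatever your monitoring tool exports:

```python
# Placeholder samples: (timestamp, vm, disk_iops) at the moment the shared
# datastore spiked. The same idea applies to network and disk throughput.
samples = [
    ("10:05", "backup-proxy-01", 9500),
    ("10:05", "db-prod-02",      1200),
    ("10:05", "web-03",           150),
]

def top_consumers(samples, timestamp, top_n=2):
    at_time = [(vm, iops) for ts, vm, iops in samples if ts == timestamp]
    return sorted(at_time, key=lambda x: x[1], reverse=True)[:top_n]

print(top_consumers(samples, "10:05"))
# [('backup-proxy-01', 9500), ('db-prod-02', 1200)] -> the backup proxy is the culprit
```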

Indeed, these are the only 4 use cases you need. Do let Sunny or me know if you think you need additional dashboards. Keep it simple, so you are not lost in a forest of screens and reports. From experience, customers who want more dashboards are mistaking the Consumer Layer for the Provider Layer (the kitchen). So let’s cover the kitchen now.

The Kitchen

The IaaS layer is where you have, or should have, complete control and visibility. If you do not, you need to fix that, as your customers assume and expect that you do.

There are 4 large areas to manage:

  • Performance
  • Capacity
  • Configuration
  • Availability

As you know well, the above 4 disciplines are inter-related. Among these 4, Performance is the most common issue, but Capacity is what you normally tell me you need. You will see in the session that Capacity depends heavily on Performance and Availability. Take Storage, for example. Say your SAN array has 100 TB of capacity left. That’s plenty of space, probably enough for 1000 VMs. But the existing VMs are already experiencing high latency. Should you add more VMs? The answer is clearly no; adding VMs will make performance worse. For all practical purposes, the capacity is full.

The way you do capacity planning changes drastically once you take Performance and Availability into account. See this for an in-depth explanation of how you can implement more holistic capacity planning.
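
The storage example above can be written down as a simple rule: effective capacity is the most pessimistic of the space view and the performance view. A minimal sketch, with illustrative numbers and thresholds:

```python
# Capacity gated by performance: if the array already breaches its latency
# tolerance, the remaining space does not matter. Numbers are illustrative.
def usable_vm_slots(free_space_tb, avg_vm_size_tb, peak_latency_ms, latency_sla_ms):
    slots_by_space = round(free_space_tb / avg_vm_size_tb)
    slots_by_performance = 0 if peak_latency_ms >= latency_sla_ms else slots_by_space
    return min(slots_by_space, slots_by_performance)

# 100 TB free and ~0.1 TB per VM looks like room for 1000 VMs...
print(usable_vm_slots(100, 0.1, peak_latency_ms=25, latency_sla_ms=20))   # 0: it is full
```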

For Performance, the main requirement from your CIO or management is typically around your IaaS’s ability to deliver. They want your IaaS to perform, as the business runs on it. The question is: how do you prove that not a single VM, in the past month or whatever the period is, suffered an unacceptable performance hit because of a non-performing IaaS?

That’s an innocent, but loaded, question. Very loaded, and you need to consider it carefully.

If you have 1000 VMs, you need to answer for all 1000 VMs. For each VM, you need to answer for CPU, RAM, Disk and Network. That’s 4000 metrics. If your management or customer agrees on a 5-minute sampling period, you have 12 samples in 1 hour. In 1 day you have 288 samples. In 1 month you have ~8750 samples (30.4 days on average). For 1000 VMs, that means 4000 x 8750 = ~35,000,000 chances for your IaaS to fail in serving the customer!
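
The arithmetic, spelled out:

```python
# The arithmetic behind the number above.
vms, metrics_per_vm = 1000, 4                        # CPU, RAM, Disk, Network per VM
samples_per_hour = 60 // 5                           # 5-minute sampling -> 12 per hour
samples_per_day = samples_per_hour * 24              # 288
samples_per_month = round(samples_per_day * 30.4)    # ~8755, call it ~8750

print(vms * metrics_per_vm * samples_per_month)      # ~35 million data points
```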

In the session, and in the book, you will see that if you implement Service Tiering, it drastically increases your chances of meeting the requirement. We introduce a concept called the Performance SLA. Once you have it, you will know for sure whether you failed or succeeded in meeting the agreed performance.
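
As a simple illustration of what "know for sure" means, here is a minimal sketch of an SLA compliance check. It assumes the Performance SLA per tier is expressed as contention thresholds; the tier names, metrics, and numbers below are illustrative, not a recommendation:

```python
# Illustrative Performance SLA: contention thresholds per service tier.
SLA = {"Gold":   {"cpu_ready_pct": 1, "disk_latency_ms": 10},
       "Bronze": {"cpu_ready_pct": 5, "disk_latency_ms": 30}}

def sla_breaches(samples):
    """samples: list of (vm, tier, metric, value) over the reporting period."""
    return [(vm, metric, value)
            for vm, tier, metric, value in samples
            if value > SLA[tier][metric]]

samples = [("web-01",  "Gold",   "cpu_ready_pct",   0.4),
           ("db-02",   "Gold",   "disk_latency_ms", 14),
           ("test-07", "Bronze", "cpu_ready_pct",   3.0)]

print(sla_breaches(samples))   # [('db-02', 'disk_latency_ms', 14)] -> one breach to explain
```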

I distinguish between monitoring and troubleshooting. To me, troubleshooting is a big topic by itself, and the steps vary depending on what you’re trying to troubleshoot. Monitoring, on the other hand, consists of repeatable steps that you perform regularly, preferably daily. You can create SOP (Standard Operating Procedure) out of it.

As you can see from the book and blog, my focus so far has been on Performance and Capacity. The reason is that they are big topics, and I need to reach a level of detail that you can actually implement and operationalize. Once I’m done, I’ll move on to Configuration and Availability.

With that, see you at VMworld!

[15 Sep 2015: you can find the actual presentation in this link]

SDDC Architecture: Methodology

I have published a series of posts, detailing a sample VMware SDDC Architecture. You can find the first part here, where I covered the requirements and began the series. I covered a lot of ground in the blogs, showing you the end result. I also shared the items I considered during the design.

With that, how do we actually architect an SDDC? What’s the SDDC Architecture Methodology? A good example is VMware vCAT, which I highly recommend you review.

A picture is worth a thousand words. I can’t find a better way to explain it than how the great Sidney Harris does it. This cartoon basically summarizes how we architect a large-scale SDDC. It’s classic! Please visit this site for more examples of his creative work. [Acknowledgement: the picture in the link comes from this blog post by Don Boudreaux].

To me, architecting a Private Cloud is not a sequential process. It does not follow a ‘waterfall’ methodology. I use the following diagram to depict it.

Architecture

Why are there so many components? Because Enterprise IT is complex. I’m not here to paint it as simple. Chuck Hollis explains it very well here, so I won’t repeat it.

There are 8 components in the SDDC Architecture. They all serve the Application. Yes, to me the Application drives the IaaS Architecture. An Infrastructure is always created for the Application. Cloud-native applications will logically be best served by a different kind of infrastructure.

The above 8 components are inter-linked, like a mesh. I did not draw all the connecting lines, as that would make for a complex diagram. Plus, I want to emphasize the Application. I have encountered many cases where the Application Team is not considered when architecting an IaaS.

Each component can impact the other components. You can see in my sample architecture how they impact one another. This is why I recommend the architecture be done jointly.

Even the Bigger Picture is not sequential. Sometimes, you may even have to leave Design and go back to Requirements or Budgeting. So be prepared to be flexible.

Once you have passed the high-level architecture, you are ready to architect the vSphere component. This serves as the building block, so make sure you get it right. The hypervisor layer is critical and far from being a commodity.

Architecture methodology 1

The above approach certainly depends on vSphere’s capabilities. vSphere continues to evolve. Also, the integration and dependency of NSX/VSAN on vSphere increase over time, as they form a unified SDDC.

Let’s go over each component, starting from the Physical Data Center.

Define how many physical data centers are required. DR requirements normally dictate 2 physical data centers. You have 3 options here:

  1. Both on-prem
  2. Hybrid of on-prem and off-prem. The off-prem does not have to be pure cloud. It can be a managed vSphere environment, where you are not paying for the VMware licenses. In fact, I see 3 sub-models here for VMware off-prem.
    1. Pure cloud. Multi-tenant, shared environment. You pay per VM per month.
    2. Managed vSphere (shared). You have a dedicated vSphere cluster, but you don’t manage vSphere. You do not dictate the version of vCenter. You may or may not have your own LUN. You pay per cluster per month, and it gives you unlimited VMs.
    3. Managed vSphere (dedicated). You have a dedicated vCenter. You may or may not manage it. You can dictate your vCenter version, which can be useful as some features need the latest vCenter.
  3. Both off-prem.
    1. You have choices here. You do not have to choose from the same provider. You can have 2 VMware vCAN partners.
    2. You can also choose the same vCAN partner for both, because you want to leverage their private network. I know of customers who do not want their data going over a public network, even though it’s encrypted.

As you can see, even at that level you are already presented with choices and complexity. Your decision matters, as there are consequences downstream.

For each Physical DC, define how many vCenter Servers are required. Here are the guidelines I use:

  • Minimize the number of vCenter Servers.
    • I know 3 customers (all banks, as I serve mostly FSI) who have performed global vCenter consolidation. In general, I see a tendency for customers to create a separate vCenter where there is really no need to.
    • It is okay to have a large farm managed by a single vCenter.
    • In general, it is okay to have remote ESXi Hosts managed by a central vCenter.
  • Desktop and Server should be separated by vCenter.
    • View comes with vSphere bundled, so there is no license impact.
    • Ease of management, especially if the desktop team and the server team are different. I view VDI as an application that has tight infrastructure requirements. There is a dependency from Horizon Composer and Horizon Connection Server to vCenter. I’d prefer to give the whole vSphere stack to the Desktop Team to manage, as I treat it as a single stack.
    • An exception here is a small environment, where the same IT team is running everything.

For each vCenter, define how many virtual data centers are required.

  • A Virtual Data Center serves as a name boundary. A distributed switch does not span vCenter Data Centers. Think about that before you create a new data center object.
  • A Virtual Data Center does not have to be bound to a physical data center. Think of the name boundary above.
  • A Virtual Data Center is a good way to separate IT (Provider) and Business (Consumer). In a large environment, you do want to make the separation clearer. Physically, they can and will be in the same building or even the same rack. Virtually, you can make the distinction.

For each vCenter DC object, define how many Clusters are required.

  • In a large setup, there will be multiple clusters for each Tier.
  • In general, I see a tendency for customers to create clusters where there is no need to. A popular example is a DMZ cluster.
  • The Management VMs should be placed in a separate cluster. It is a common practice, and we normally just call it the Management Cluster. How do you know what kind of VM should be placed here? For example, where do you place your MS AD and Print Servers, if they are also managed by IT? To me, a good guideline is the network. Does the VM live in the Management Network, or in the Production Network? If it lives in the Production Network, then it should not be in the Management Cluster. The Management Cluster does not have VXLAN, and its network is not virtualized. The reason it is not virtualized is that NSX Manager lives here.

For each Cluster, define how many ESXi are required.

  • Preferably 4 – 12. To me, 2 is too small, so I’d try to avoid that (see the rough arithmetic after this list). I’d go with single-socket ESXi hosts if cost is an issue.
  • Standardise the host spec across clusters. While each cluster can have its own host type, this adds complexity. I would not use the host type to distinguish the service tier. As you can see in this example, I’m using the same host, and using different attributes to separate the Gold cluster from the Bronze cluster.
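
On the "2 is too small" point above: one way to see it is the HA overhead. A rough sketch, assuming you reserve one host’s worth of capacity (N+1) for HA admission control:

```python
# HA overhead as a fraction of the cluster, assuming N+1 admission control.
for hosts in (2, 4, 8, 12):
    print(f"{hosts:>2} hosts -> {100 / hosts:.1f}% of capacity reserved for HA")
# 2 hosts -> 50.0%, 12 hosts -> 8.3%: one reason 4 - 12 hosts is a comfortable range
```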