VMworld 2016 Session Vote

This year I decided to make my first submission to the VMworld Call for Papers. I have been wanting to do this for some time, but my time and commitments never aligned. It's been a learning experience, so we will see how it goes! If you are interested in learning more about migrating workloads to VVols and all the details that go with it, then please vote for this session. 

Migrating Workloads to VMware VVols [9059]
Careful planning is needed for a successful workload migration to VVol-based storage. Depending on the scenario and the datacenter environment, it's important to understand what is expected and required when migrating virtual machines in a mixed environment of VMFS and VVols. We will look at the steps and options available in a virtualized heterogeneous storage infrastructure, in addition to available VMware partner solutions.

Storage Features Responses

To wrap up my Storage Features Survey, I have collected the community's responses on how they view primary storage from a virtualization, enterprise-worthy perspective. 

Most Popular Enterprise Storage Feature Responses (rounded):

Multiple Controllers - 70% of Respondents

Thin Provisioning - 65% of Respondents

Replication Capabilities - 60% of Respondents

Snapshot Capabilities - 60% of Respondents

High Availability - 60% of Respondents

Non-Disruptive Upgrades - 60% of Respondents

VAAI Support - 55% of Respondents

RAID Support - 50% of Respondents

Flash Technology - 50% of Respondents

Capacity/Shelf Expansion - 50% of Respondents

There weren't any big surprises, but there were a couple of interesting responses. One of the higher results came in for "Replication Capabilities". I knew this would be popular; however, I did not envision it being this high on the list, since there are many other technologies in the market that can tackle replication from a different level in the stack (e.g., Zerto, Veeam). The other interesting response was "Flash Technology" at the array level. I know flash is very popular right now, but as you might know, it's not something I agree with 100% of the time. PernixData can use flash at the host level for storage performance and thus can negate the use of flash at the array. I'm not saying this is the case for all environments, but it's enough to change the storage landscape. 

The biggest response was for multiple controller support! This isn't surprising, since high availability and non-disruptive upgrades were also high on the list of responses. 

Did any of the responses surprise you or do you think this encompasses the most common requested enterprise features for primary storage? 

Thanks for participating in the Storage Features Survey! 

Poll: Storage Features

I was recently talking with some peers about what storage features fit into the category of enterprise worthy and are considered "must-haves" in terms of importance.

This got me thinking about market conditions today and how we are flooded with so many new storage systems and features. This can not only confuse consumers but, I believe, can also change perceptions of what is really important! It's my intent in this post to study what truly makes up an enterprise storage system and what the must-have features of today are. 

The first part of this study is to publish a poll on what you believe to be enterprise must-have features in the storage market. Please select only the must-haves and/or what is important to you and your company!

I will publish the anonymous results once everyone has had a chance to vote.

Keep in mind that these features are only intended to apply at the primary storage level for your array in a 100% VMware environment!  

This post is not sponsored nor endorsed by my employer! Personal Passion Only! 


The vSphere Pocketbook 2.0 Blog Edition

The vSphere Pocketbook 2.0 Blog Edition has been released!

If you were unfortunate enough not to receive your copy at VMworld, you can always order one on Amazon. 

I have also decided to post my contribution to the pocketbook here for all to read as well! Enjoy!  


Systems Thinking Impacting Design Decisions

As with most design decisions, it becomes imperative that all steps are taken to prove out a design that meets and exceeds all expected goals. In the world of virtualization, the choices we make are the new foundations of the enterprise. This is why it's more important than ever to get the process of making a good design decision right. It's much easier to correct the implementation than to go back and start the entire process over. It is with this mantra that I want to explore one idea on how to implement a Systems Thinking approach for good design decisions in virtualization.

As technologists we thrive on new products and services that tickle our inner ego. This is where it becomes imperative to implement a process that incorporates all inputs that can help drive a successful design decision. The old military adage of “7 P’s” (Proper Planning and Preparation Prevents Piss Poor Performance) can even be relevant in a virtualization type of project design. This preparation and planning can be realized in the collection of inputs to a project design, where they can be broken down into smaller pieces for ample analysis. This is called the Feedback loop – a causal path that leads from the discovery of a design gap to the subsequent modifications of the design gap.

It’s this reciprocal nature of Systems Thinking that provides a constant feedback loop to the design. The ultimate goal is a design that changes based on external parameters to meet a new set of goals or challenges. If you can get to a design like this, then you can become more agile and not have to implement patch solutions to accomplish a new forced or unforeseen change.

To illustrate how such a process can impact design decisions, let's first look at a common problem in many environments. Many organizations are growing their virtualization environments at a rapid pace; therefore, there is constant pressure to provide enough storage capacity and performance as the VMware environment grows. 


As you can see this is a simple, yet applicable example of a feedback loop that can help you break apart the pieces of a design to come up with an effective solution. Let’s now go through the pieces to understand the relationships and the effect they can have on the design.

As the user base grows or more applications are virtualized, added pressure builds to increase the number of ESXi hosts to support the compute requirements. As the number of ESXi hosts increases, so do the demand and contention on the storage systems to keep up with the I/O load. To keep up with capacity growth and the I/O performance load, the admin is pushed to add more spindles to the storage environment. More spindles allow for more I/O processing, which in turn can increase the demand for a faster storage fabric. This loop finally decreases the demand on the storage system in response to the growth, but as you can see, it's only temporary, since there is still added pressure from new user and virtualization growth. The only way to turn the tide on the storage pressure is to introduce a balancing (negative feedback) loop. An "I/O offload" solution can help by decreasing the demand on the storage system and thus provide better consolidation back onto the ESXi hosts.
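For those who like to tinker, the loop above can be reduced to a toy simulation. Everything here is an illustrative assumption (the growth rate, the per-spindle throughput, the 50% offload factor), not a measurement:

```python
# Toy model of the storage feedback loop described above.
# All numbers are illustrative assumptions, not real-world measurements.

def simulate(quarters, io_offload=False):
    """Return the spindle count needed after `quarters` of growth."""
    spindles = 24          # starting spindle count
    demand = 100.0         # I/O demand units from the virtualized workloads
    for _ in range(quarters):
        demand *= 1.15                                    # reinforcing loop: growth
        serviced = demand * (0.5 if io_offload else 1.0)  # host-side I/O offload
        while serviced > spindles * 5:                    # 5 I/O units per spindle
            spindles += 8                                 # admin adds a shelf
    return spindles

print(simulate(8))                    # spindles needed without offload
print(simulate(8, io_offload=True))   # fewer spindles with I/O offload
```

Running both cases shows the balancing loop at work: the offload case ends up needing half the spindles for the same workload growth.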

What this illustrates is how a Systems Thinking approach can help overcome some of the complexity in a design decision. This is only a small subset of the possibilities, so my intention is to provide more examples on my blog. If you want to learn more about Systems Thinking, check out this short overview for the larger context: http://www.thinking.net/Systems_Thinking/OverviewSTarticle.pdf

Chattanooga VMUG Restarted

I'm happy to report that the Chattanooga VMUG has a planned event on August 19th from 5:30 to 8:00 p.m. It's been around two years since this group last held an event, so I was excited to work with Jim Wrenn and Curtis Gunderson to organize a new event for the Chattanooga chapter. 

PernixData and Veristor have agreed to co-sponsor the event!  

No registration is required, so if you are in the area, come help support the Chattanooga VMUG. Food and drink will be provided. 

Speaker: Todd Mace

Title: Server-Side Storage Intelligence


Unum Conference Room (downtown Chattanooga)
1 Fountain Square, (500 Walnut St.) East Building Presentation Room 151
Chattanooga, TN 37402



Breaking News: PernixData FVP Wins Big

That's right, how 'bout them apples! PernixData FVP just won TechTarget's Modern Infrastructure Bright Idea Impact Award! This is a new annual award based on readership voting for the best, brightest, and most impactful new product! This is not an analyst award or paid advertisement; it's an award given by you, the community. Thank you!

PernixData won over these other companies that were also contenders: 

HP Moonshot

Infinio Accelerator

Neverfail IT Continuity Architect

Red Hat Cloud Infrastructure

SwiftTest Workload Insight Manager

Unisys Forward

VMware NSX

We now have a growing list of awards to display proudly! 


ESXi Runs in Memory - Boot Options

I hope the title of this post doesn't surprise you! This is a sometimes-forgotten aspect of ESXi's design when choosing your boot options. I have increasingly been talking with VMware admins who are deciding to mirror their local drives for ESXi. This seems to be a common design on blade architectures as well, where the two open drive bays are used for mirrored ESXi boot images.

The question to ask is: why do this? If ESXi runs entirely in memory, what benefit do you get from mirroring two drives? Yes, you do have another copy of the image in case of corruption, but wouldn't it be easier and less wasteful to just store a copy of the image on removable media, or use Image Builder, for that resiliency?

Most server architectures now include internal SD cards or USB flash slots to install ESXi on, and there is of course VMware's Auto Deploy! Using one of these methods for ESXi boot will not only save resources but will open up more opportunities to adopt new technology.

There are many examples of converged storage architectures that would require you to use all available drive bays to maximize capacity usage. Then there is also the use of server-side flash technologies, like PernixData FVP. Having multiple options for local flash will provide more possibilities when you want to create tiers of flash for your differing workloads.

The point of this post is to illustrate that you don't have to mirror ESXi boot drives for fault tolerance. There are many other ways to protect your image, so why waste resources on something that could hinder the growth of your virtualized datacenter?
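As one example of the "keep a copy of the image" approach, ESXi lets you back up its configuration from the shell; the host name and path below are placeholders:

```shell
# Back up the running ESXi configuration (run from the ESXi shell or via SSH).
# This prints a URL where the configuration bundle (.tgz) can be downloaded.
vim-cmd hostsvc/firmware/backup_config

# The same can be done remotely with the vCLI (host name is a placeholder):
# vicfg-cfgbackup --server esxi01.example.com -s /tmp/esxi01-backup.tgz
```

With a config backup plus the stock installer media (or an Image Builder profile), a failed boot device becomes a reinstall-and-restore, not an outage.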

For added reading pleasure, here is a link to some entertaining conversations about installing ESXi on local disk or local USB.


Carolina VMware User Summit 2013

This coming Thursday the 2013 Carolina VMUG begins. This year will include a ton of great sessions and speakers. There will be an Expert Panel talking about the future of virtualization; this panel will include William Lam, Scott Lowe, Chris Colotti, and Chad Sakac. Check out the schedule! 

There are also two sessions that I'm partial to that I want to highlight.

Education Session

Time: 10:15-11:00 am
Title: "The New Scale-Out Data Tier - A Storage Platform Paradigm Shift" (PernixData)
Speaker: Todd Mace 
Room: 217 A 

eGroup Lab Session

Time: 1:00-1:45 pm
Title: Free Style Session- Flashtastic – PernixData FVP Demo
Speakers: John Flisher & Todd Mace
Room: 211 AB 

It's never too late to register and come see some awesome IOP presentations!! 

A Few of My Favorite Things - Currently (Part 3)

Disclaimer: I'm not part of, nor do I receive any compensation or rewards from, any of the organizations that I write about. My blog is solely a personal passion and nothing more. 

I currently have the privilege of being part of the Vidyo Advisory Council, which allows me to give feedback and listen to the great plans and successes within Vidyo. This means a lot when you’re a user and a customer!

Vidyo is a disrupter within the video conferencing space. They tackle Cisco, Polycom, and Lifesize when it comes to a clean, robust, and cost-effective software-based video solution. The differentiator is their use of H.264 Scalable Video Coding (SVC)-based compression technology and their patented Adaptive Video Layering. This allows Vidyo to scale very easily while introducing less video jitter and packet loss during a conference. Here is a video demonstrating the difference between traditional H.264 systems and Vidyo's H.264 with SVC. 

The shift to software allows Vidyo to bring video conferencing to different platforms faster and more efficiently. This speed has led to huge adoption rates and happy customers. You may actually already be a customer and not even know it: Vidyo is the technology Google uses for Google Hangouts, and Nintendo uses it for the Wii platform.

In my opinion, what really brings Vidyo into this blog's theme is their use of virtualization and cloud technology. Vidyo is a strong VMware partner and has recently shown some awesomeness at VMworld and PEX. In the next few months they are also introducing an all-Vidyo hosted cloud solution for those that want to get their feet wet and try out Vidyo's technology.


If you are looking for ways to cut down on travel and cost and implement a video conferencing solution within your organization, I highly recommend you look at Vidyo. 

CloudCred - Knowledge, Recognition, Access

VMware just recently launched a really cool community platform called CloudCred. The premise is to expose the community to more VMware/cloud knowledge while building a personal portfolio of accomplishments. The current plan is to reward those that complete assigned personal or team tasks that demonstrate their accomplishments and knowledge. 

I give kudos to VMware for developing a platform that is fun and uses team collaboration to enrich the community and expand VMware, Partner and Cloud technology. 

Charlie Gautreaux (@chuckgman) has created a PernixData team on CloudCred. I joined this team because I wanted to be part of a team that will revolutionize the next cloud technology! 

Check out my profile: https://www.cloudcredibility.com/profile/788

Video Overview: 


VMware Support Assistant Appliance

The new VMware "Support Assistant" open beta became available not long ago. This new OVA appliance seeks to simplify the support request (SR) process with VMware. I decided to write up my experience with the setup process, as there was very little information on this new appliance. 

The VMware vCenter Support Assistant is deployed as a virtual appliance and integrates with VMware vCenter Server as a plug-in for either the new Web Client or the classic vSphere Client. In addition to creating and modifying support requests, an authorized admin can upload diagnostic/performance information and system support logs from vCenter and vSphere hosts to VMware support. 

After installing the appliance, there were a couple of items I ran into when setting everything up. In order to upload system logs, make sure to use the FQDN of the vCenter server when using the vSphere Client. If you normally just use the host name, you will notice that the Support Assistant doesn't fully pass the diagnostic tests. I also found that the Support Assistant loads more consistently in the Web Client with Internet Explorer than with Google Chrome. This product is still in beta, but it works well and can be used in a production environment with no worries. I highly recommend giving this new product a try; it will make the support process go much smoother. 


Clustered VAAI?

In reading the recent updated VMware vSphere Storage API Array Integration (VAAI) White Paper, I noticed a statement that caught my eye.

"VMware does not support VAAI primitives on VMFS with multiple LUNs/extents if they all are on different arrays."

I understand the difficulty in doing this, but it makes me wonder if the coming VMware vVols will be the technology that gives the vAdmin the capability of crossing array boundaries on a single LUN that supports VAAI primitives. 

If anybody has thoughts or insight to this, please tweet or comment.




The Cloud Storage System - vVols

At VMworld 2012, VMware announced their tech preview of vVols. As one thinks of cloud systems, one can see that vVols could be the transformational architecture that takes the disparity of traditional storage systems to a true cloud storage system for the future.

The NIST definition of Cloud Computing: 

"Cloud computing is a model for enabling convenient....that can be rapidly provisioned and released with minimal management effort or service provider interaction." 

VMware's vVols - One doesn't have to worry about whether to use NFS or block. No more creating LUNs in the traditional sense and setting up storage policies on the array and then again on the hypervisor. vVols also provide redundancy and multi-tenant access to logical volumes of objects at scale. True to the cloud definition, vVols will make it easy to provision and provide multi-tenant access while removing the complexity of the different software layers. 

In summation, the point of this post is to illustrate that vVols bring the storage layer and the hypervisor layer to a truly cloud-like state for management and provisioning. Hopefully vVols will impact the industry in ways that help cloud admins everywhere. 

Microsoft's V-Tax

Yes, that's right: after calling VMware's licensing a V-Tax, Microsoft has decided to follow VMware and license Windows Server 2012 for Hyper-V in a similar way. Now both VMware and Microsoft license their hypervisors per processor. To be completely fair, Microsoft doesn't have a memory entitlement like VMware does, but most users will never reach the memory limits per processor that VMware has outlined.

Basically, if you want to run more than 10 VMs on Hyper-V, you will need the new Datacenter edition, which is licensed per processor and gives you unlimited VMs. If you are running under 10 VMs, then it's probably cheaper to purchase the Standard edition, which gives you 2 VMs per processor. This means most enterprise customers will need to purchase the Datacenter edition, which retails for a whopping $4,809.00, plus you need to purchase any needed CALs.

This is really smart on Microsoft's part, because if you are a current VMware customer, it now costs a lot more to run VMware if you are going to use Microsoft Server 2012 VMs. On the other end of the spectrum, it's not really smart for Microsoft when considering their customers, because it now costs more to run Windows on vSphere.

So, in summary: if you have a dual-processor physical server and you purchase VMware vSphere Enterprise Plus at $4,543.50 x 2, and then want to run Microsoft Server 2012 with at least two VMs, you will need to fork out $4,809.00 x 2 for the Datacenter edition to be properly licensed.

Total Retail cost: $27,720.00

As you can see this can get very expensive and really isn't good for Microsoft or VMware in the long run, because the customer is the one caught in the middle of this battle.

What say you...???

Download Microsoft's Server 2012 License Datasheet

Where is the VAAI Support?

Over the past year I have been fascinated by the lack of economical #block-level storage that utilizes industry APIs like VMware's #VAAI. There are some examples where #NFS storage vendors have jumped into the pool of possibilities, but nothing to talk about when it comes to block-level storage. Promises have been made by all the big boys out there, but there's nothing to show as of yet. #VMworld 2011 was littered with storage vendors that all said they couldn't do VAAI until the end of 2011 or the beginning of 2012. It's now the middle of March 2012, and not even #EMC yet supports their #VNXe arrays with VMware's VAAI APIs.

@Sakacc noted in September, 2011 - http://virtualgeek.typepad.com/virtual_geek/2011/09/vnx-and-vnxe-updates-and-vaai-hotfixes.html that we would have VNXe VAAI support by Q4 2011. 

It's not that there aren't vendors that support VAAI on their storage; it's economical storage for the SMB market that is lacking. 

So after many hours of searching, I found a light at the end of the tunnel. A small startup that can give us storage with VAAI support for #Free!!!!! Stay Tuned!!!!!

VAAI Primer

Before I unleash the posts on my VAAI array project, I thought it best to make sure readers are somewhat familiar with VAAI.

VAAI (vStorage APIs for Array Integration) is a feature first introduced in ESX/ESXi 4.1 and later expanded in ESXi 5.0. It is a set of APIs developed to enhance performance in the vSphere infrastructure by offloading several tasks to compliant storage arrays. *More on the compatible storage arrays later*

VAAI Benefits:

  • Atomic Test & Set (ATS): atomically modifies sectors on a disk without having to use SCSI reservations. This means that LUN access from other hosts won't be locked, which can increase performance manyfold, depending on how many hosts access the same LUN. 

  • Clone Blocks/Full Copy/XCOPY: copies data directly on the supported array without resorting to the ESX software data mover and moving data to/from the hosts. With VAAI enabled, copying/cloning of data moves at the speed of the array hardware.

  • Block Zeroing/Write Same: zeros out a large number of blocks on the array during provisioning. This frees vSphere to speed up provisioning and handle other tasks. 

  • Thin Disk Space Reclaim: this API uses SCSI UNMAP instead of SCSI writes and requires VMFS-5. It basically tells the array which blocks are no longer in use after a deletion, so the space can be reclaimed on thin-provisioned LUNs. 
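If you want to verify which of these primitives a given device actually supports, ESXi 5.x exposes this through esxcli (the device identifier below is a placeholder for your own LUN's NAA ID):

```shell
# Show per-primitive VAAI support (ATS, Clone, Zero, Delete) for one device.
# The naa.* identifier is a placeholder; list yours with:
#   esxcli storage core device list
esxcli storage core device vaai status get -d naa.60a98000572d54724a34655733506751

# Confirm the host-side VAAI settings are enabled (1 = on):
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
```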

Keep in mind there are several caveats when using VAAI, so I want to list two of them that I think most people will need to be aware of. 


  1. If the source and destination VMFS volumes have different block sizes, ESXi falls back to the default software data mover. Suppose you used an 8MB block size on ESXi 4.1, then upgraded to ESXi 5 and, as a result, upgraded your file system to VMFS-5. You might think the block size would change or wouldn't matter, but an in-place upgrade does not change the block size to the ESXi 5.0 default of 1MB. So if you add a VAAI-enabled array to the mix, hardware-assisted offload won't work until you recreate the datastores with the same default block size. 
  2. If the source VMDK type is eagerzeroedthick and the destination VMDK type is thin, VAAI offload won't work. 
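You can check the block size of caveat #1 from the ESXi shell; the datastore name below is a placeholder:

```shell
# Print a datastore's VMFS version and block size (datastore name is a placeholder).
vmkfstools -Ph /vmfs/volumes/datastore1
# Look for a line like: "VMFS-5.54 file system ... blocksize 8 MB"
# An upgraded-in-place VMFS-5 keeps its old VMFS-3 block size (e.g. 8MB),
# while a freshly created VMFS-5 datastore uses the 1MB default.
```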

This post was meant to be a short summary of VAAI. Feel free to comment on anything I missed or items that you think will benefit the readers. 



As you can see, this is my first post on CloudJock. So I'll begin my journey by talking about a current project that I'm working on. This will begin a new series surrounding VMware's VAAI APIs. So stay tuned!!!