FVP Linked Clone Optimizations Part 2

In part 1 of this series, I talked about the replica disk optimizations that FVP provides for your linked clone environment. In part 2 the focus is on the different use cases for persistent and non-persistent disks and how they relate to the acceleration FVP can provide to your VDI environment.

I often hear confusing remarks about what some call a persistent desktop versus a non-persistent desktop. I have found that this terminology is sometimes rooted in confusion between a linked clone and a full clone. It also depends on the criteria one uses to define a persistent or non-persistent desktop. For example, if you look at linked clones, you will notice a mix of persistent and non-persistent disks, depending on your design decisions. If one looks only at a dedicated linked clone with Windows profile persistence, then some may describe that linked clone as a persistent desktop.

The interesting thing is that Horizon View doesn't refer to a linked clone in these terms. The only time Horizon View refers to a persistent or non-persistent desktop is in the context of refreshing a cloned desktop. In other words, simply using linked clones does not by itself make yours a non-persistent (or persistent) VDI environment.

I also think some of the confusion revolves around the use of dedicated vs. floating assignment of linked clones. The dedicated configuration assigns each user a dedicated desktop, so if the user has multiple sessions, they will always reconnect to the same desktop by default. In a floating configuration the user is assigned to a pool of desktops, which means they could log in to a different desktop with each new session. The only way to keep Windows profile persistence in the floating scenario is to use a persona management solution outside the default configuration of View Composer.
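The difference between the two assignment models can be sketched in a few lines of Python. This is purely an illustrative model of the behavior described above — the class and method names are my own, not any Horizon View API:

```python
# Toy model (not a Horizon View API): how dedicated vs. floating
# assignment decides which desktop a user connects to.

class Pool:
    def __init__(self, desktops, dedicated=True):
        self.desktops = list(desktops)   # available desktop names
        self.dedicated = dedicated
        self.assignments = {}            # user -> desktop (dedicated only)

    def connect(self, user):
        if self.dedicated:
            # First login claims a desktop; every later session reuses it.
            if user not in self.assignments:
                self.assignments[user] = self.desktops.pop(0)
            return self.assignments[user]
        # Floating: hand out any free desktop. Profile state is not tied
        # to the user unless an external persona-management tool keeps it.
        return self.desktops.pop(0)

    def disconnect(self, user, desktop):
        if not self.dedicated:
            self.desktops.append(desktop)  # desktop returns to the pool
```

With a dedicated pool, repeated `connect` calls for the same user return the same desktop; with a floating pool, each new session may land anywhere.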

So, when an admin decides to use a dedicated linked clone, View Composer gives the option to redirect the Windows profile to a persistent disk. This provides user personalization persistence across refresh, recompose, and rebalance operations. This is an optional setting, as seen in the screenshot. The default disk size is 2GB.

When one chooses a floating assignment for linked clones, View Composer does not offer the persistent disk option, which means no user personalization will be retained after a refresh, recompose, or rebalance operation. If you choose not to redirect the Windows profile, the data is stored on the non-persistent delta disk. In either case, both read and write I/O will be accelerated by FVP. However, there will be a longer warm-up time for read acceleration when user profiles live on the non-persistent delta disk, since that warm-up repeats as often as the refresh, recompose, and rebalance cycles run.

Whether you select floating or dedicated assignments, and whether or not you choose some level of Windows profile persistence, FVP will automatically accelerate reads and writes for all disks that are part of the desktop VM. In the past, deciding when to schedule a refresh, recompose, or rebalance operation was a weighty choice. Now, with FVP offloading I/O from the storage array, these operations have some breathing room to finish without impacting the production environment.

Delta Disk:
The delta disk is probably where most desktop I/O from a linked clone will be seen. The delta disk becomes active as soon as the desktop is booted from the replica disk. Any desktop changes are stored on the delta disk, so depending on the user and the desktop use case, the I/O profile can vary drastically. This will not impact FVP negatively, as FVP keeps context on which disk is most active and thus provides the resource intelligence for acceleration no matter the use case.
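The idea of "keeping context on which disk is most active" can be illustrated with a minimal sketch. This is an assumption about the general technique, not FVP's actual internals:

```python
# Toy model (assumption, not FVP code): per-disk I/O accounting that a
# caching layer could use to rank disks by activity.
from collections import Counter

class DiskActivityTracker:
    def __init__(self):
        self.reads = Counter()    # disk id -> read count
        self.writes = Counter()   # disk id -> write count

    def record_read(self, disk_id):
        self.reads[disk_id] += 1

    def record_write(self, disk_id):
        self.writes[disk_id] += 1

    def hottest_disk(self):
        # Total I/O per disk; the busiest one (typically the delta disk
        # during normal desktop use) is the prime acceleration candidate.
        totals = self.reads + self.writes
        disk, _ = totals.most_common(1)[0]
        return disk
```

A desktop generating heavy change traffic would quickly surface its delta disk as the hottest, while the internal disk barely registers.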

Disposable Disk:
A default configuration will have a separate non-persistent disposable disk, 4GB in size. Having this as a separate disk is recommended since it slows the growth of the delta disk between refresh, rebalance, and power-off operations. This disk contains temp files and the paging file, so FVP can help normalize OS operations by accelerating the reads and writes associated with the disposable disk. If you choose not to redirect, this data will reside on the delta disk instead. Neither option impacts FVP negatively; however, separating out the disposable disk is a best practice, as it helps alleviate bloated delta disks between refreshes.
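A toy model makes the delta-growth argument concrete. The class and sizes here are illustrative assumptions, not View Composer behavior pulled from code:

```python
# Toy model (assumption): why redirecting temp/pagefile writes to a
# separate disposable disk slows delta-disk growth between refreshes.

class CloneDisks:
    def __init__(self, redirect_temp):
        self.redirect_temp = redirect_temp
        self.delta_mb = 0        # grows until the next refresh/recompose
        self.disposable_mb = 0   # non-persistent; wiped at power-off

    def write(self, mb, is_temp=False):
        # Temp writes land on the disposable disk only when redirection is
        # configured; otherwise they accumulate on the delta disk.
        if is_temp and self.redirect_temp:
            self.disposable_mb += mb
        else:
            self.delta_mb += mb

    def power_off(self):
        self.disposable_mb = 0   # temp data discarded; delta persists
```

With redirection enabled, a power-off discards the temp churn; without it, the same writes stay on the delta disk until the next refresh.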

Internal Disk:
An internal disk is created with each cloned desktop. This disk is Thick Provision Lazy Zeroed, with a default size of 20MB. It stores Sysprep, QuickPrep, and AD account information, so very little I/O will be seen from this disk. Keep in mind that this disk is not visible in Windows, but it still has a SCSI address, so FVP will recognize the disk and accelerate any I/O that comes from it. This is another advantage of being a kernel module: FVP will recognize disks not mounted to the Windows OS and still do its magic of acceleration.

As you can see, no matter the configuration, FVP will automatically capture all I/O from all disks that are part of a given desktop clone. Depending on the configuration, a desktop clone can have several disks, and knowing when or which disks are active or in need of resources at any given point is not an easy task. This is exactly why PernixData developed FVP: a solution that takes the guesswork out of each disk's I/O profile. The only decision left to you is whether to accelerate the desktop or not! Talk about seamless and transparent, it doesn't get any better than that!!

FVP Linked Clone Optimizations Part 1

PernixData has some of the best minds in the industry working to provide a seamless experience for all types of workloads. Operational simplicity in a platform such as ours doesn't mean there is a lack of complex functionality. Au contraire, FVP is truly a multiplex system that can dynamically adjust and enhance numerous workload attributes. One popular example of this is the acceleration and optimization of linked clone technology in Horizon View.

The beauty of our linked clone optimizations is that they are completely seamless. This means you will not have to make any configuration changes to your existing VDI environment, nor modify any FVP settings. No changes are needed to the Horizon View Manager or other Horizon View products (e.g. ThinApp, Persona Management, Composer, or the client). It also doesn't matter whether you are using a persistent or non-persistent model for your linked clones; FVP will accelerate and optimize your entire virtual desktop environment.

It's common to see a virtual desktop with many disks, which may include an operating system disk, a disk for user/profile data, and a disk for temp data. No matter how many disks are attached to a virtual desktop, it doesn't affect how FVP accelerates I/O from the virtual machine. FVP's intelligence automatically determines where I/O is coming from and which disk is in need of additional resources for acceleration. The admin only decides which desktops (linked or full clones) are part of the FVP cluster. These can comprise a mix of persistent and non-persistent disks, as seen in the diagram. FVP will automatically accelerate all I/O coming from any persistent or non-persistent disk that is part of the desktop clone.

As seen in the above diagram, a linked clone environment can comprise several disks depending on the configuration. This by far is the most confusing part, IMHO. When you create a linked clone for the first time, you realize you have all these different disks attached to your clone, and you may have no idea what they are for or why their capacities differ. Why are some persistent and some non-persistent, and which configuration works best for FVP acceleration? I will save these topics and more for part 2. The replica (base) disk, however, is what I'm going to focus on in this post.

In a Horizon View environment a full clone and a linked clone both have an OS disk, except that a linked clone uses a non-persistent delta disk for desktop changes and a replica (base) disk, created from the parent snapshot, for image persistence. The delta disk is active during any desktop operation, which makes it a prime candidate for acceleration.
In addition to accelerating reads and writes for cloned desktops, FVP will automatically identify the replica disks in a linked clone pool and apply optimizations that leverage a single copy of data across all clones mapped to a given replica disk.

Note: Citrix XenDesktop works essentially the same way with FVP; instead of a replica disk, the shared image is called a master image.

As seen in the screenshot below, FVP will automatically place the replica disk (the linked clone base disk) on an accelerated resource when using linked clones. In addition, FVP puts only the active blocks of the desktop clone on the accelerated resource, which lowers the capacity the replica disk requires there. After the first desktops in the pool boot, all ensuing clones take advantage of reading the common blocks from the replica disk on the accelerated resource. If any blocks are requested that are not yet part of the cached replica disk, FVP will again fetch only the requested blocks and add them to the already created copy. The same is true for any newly created linked clone pools: a new replica disk will be added to the acceleration resource for each new pool. This is visible under FVP's usage tab as a combined total of all active replica disks for a given acceleration resource. As you can imagine, caching only the active blocks of the replica disk provides a huge advantage when using memory as an acceleration resource. Windows 7 boot times of 5-7 seconds are not uncommon in this scenario.
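The "active blocks only" behavior is essentially a read-through cache at block granularity. The following sketch is my own illustrative model of that technique, under stated assumptions — it is not FVP source code:

```python
# Toy model (assumption, not FVP internals): a read-through block cache
# for a shared replica disk. Only blocks that clones actually read are
# copied to the acceleration resource, so capacity use tracks the active
# working set rather than the full replica size.

class ReplicaBlockCache:
    def __init__(self, backing_blocks):
        self.backing = backing_blocks   # block number -> data on the datastore
        self.cache = {}                 # only actively read blocks land here
        self.datastore_reads = 0

    def read(self, block_no):
        if block_no not in self.cache:
            # The first reader of a block fetches it from the datastore once...
            self.datastore_reads += 1
            self.cache[block_no] = self.backing[block_no]
        # ...and every later clone reads that block from fast media.
        return self.cache[block_no]
```

After the first clone boots, subsequent boots of the same working set hit the cache entirely; blocks nobody reads never consume accelerated capacity.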

FVP maintains these optimizations during clustered hypervisor operations such as vMotion. This means that if desktop clones are migrated from one host to another, they can leverage FVP's Remote Access clustering to read blocks from the replica disk copy on the host they migrated from. This happens only on a temporary basis, as FVP will automatically create a new active-block replica disk on the new host's acceleration resource. Through FVP's Remote Access clustering, and through any desktop clone reboots, a replica disk for the desktop clones on the new host is created and kept up to date for local read-access efficiency.
If the desktop clones are in Write Back mode, write acceleration continues automatically once the desktop clones migrate to the new host, irrespective of the replica disk optimizations.

The diagram below outlines the process whereby a replica disk is first created on a primary host and the first desktop clones then migrate to a new host. Creating a new replica disk on the new host happens only once per linked clone pool; all subsequent cloned desktops mapped to that replica disk benefit from it during any future migrations.

When the desktop clones are booted, (1) the clones request blocks from the assigned replica disk in the pool. FVP intercepts the requested blocks, which are (2) copied into the acceleration resource assigned to the FVP cluster. All future desktop clone boot processes read these blocks from the acceleration resource instead of traversing to the datastore where the full replica disk resides. If any changes are made to the original replica disk through a recompose or rebalance operation, this process starts over for the linked clones. (3) When the desktop clones are migrated to a new host through DRS, HA, or a manual vMotion operation, (4) FVP sends read requests back to the original host on behalf of the migrated clones. (5) The blocks are copied into the new host's acceleration resource, (6) so that any future requests are acknowledged from the new local host. A reboot of any linked clone during this time will also copy all common blocks into the new local host's acceleration resource.
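The numbered steps above can be condensed into one read path. Again, this is an illustrative sketch under my own assumptions about the mechanism, not FVP's implementation:

```python
# Toy walk-through (assumption, not FVP internals) of the numbered steps:
# after a clone migrates, the first read of a replica block is served
# remotely from the original host, then copied into the new host's cache.

def read_block(local, original, datastore, block_no):
    """Serve a replica-disk read for a clone that just migrated hosts."""
    if block_no in local:
        return local[block_no]            # (6) local hit on the new host
    if block_no in original:
        data = original[block_no]         # (4) remote read from the old host
    else:
        data = datastore[block_no]        # cold block: fall back to datastore
    local[block_no] = data                # (5) copy to the new host's cache
    return data
```

The remote-read path is transient by design: each block is fetched remotely at most once, after which the new host serves it locally.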

As you can see, from a VDI perspective FVP can truly make a difference in your datacenter. Gone are the days when one had to architect a separate environment to run virtual desktops. FVP breaks down the technology silos and allows the virtual admin to truly scale performance on demand without worrying about storage performance constraints.
More Information on using PernixData FVP with your VDI Infrastructure.