FVP Color Blindness Accessibility

Our PernixData engineers care about every facet of the customer experience. Something that may seem small to others can be a big deal to some, which is why PernixData approaches every feature in a holistic manner that all can appreciate. One such feature is providing visual accessibility to those with color blindness. With 1 in 12 men and 1 in 200 women having some form of color blindness, it is important that the FVP UI is readable and understandable regardless of the impairment.

In FVP 2.0 we modified the colors in our UI to account for the most common forms of color blindness: Deuteranopia (~5% of males), Protanopia (~2.5% of males), and Tritanopia (~0.3% of males and females). For example, the “Network Acceleration” line graph was changed to lime green. In addition, all colors were tested with “Color Oracle,” an application that simulates different forms of color blindness.

In addition, we made each line on a chart uniquely identifiable by providing the ability to toggle lines on and off. For example, if you aren't sure which line refers to the datastore, just toggle the others off, or toggle the datastore selection off and on, and the datastore line will stand out clearly.

When designing the FVP interface we also made sure that color is used as a secondary source of information that reinforces, rather than replaces, the primary source. For example, in the host/flash device visualization, the color of the tiles (red, green, yellow) indicates the state of the relevant object. If there is a problem, however, alarms and warnings also show an exclamation point on the tile in addition to the tile's color.

 

FVP Management Database Design Decisions

When deciding which database model to use for FVP, it’s important to understand your goals in using FVP and the growth potential of the platform. Upon installation, the FVP management service builds and connects to a “prnx” SQL database instance. This database is responsible for receiving, storing, and presenting performance data. All time-series data for the performance charts displayed in the FVP UI is stored in this database, along with management metadata related to configuration. Keep in mind, however, that neither the management server nor the FVP database needs to be operational for read/write acceleration to continue during downtime.

The PernixData management server is also responsible for managing fault domain configurations and the host peer selection process for Write Back fault tolerance. This information is kept current in the “prnx” database so that any host or cluster changes are reflected accurately in FVP policy changes. This is why it’s imperative that FVP maintain a connection with the vCenter server, so that inventory information can be collected and maintained.

It was decided early in the FVP design phase not to reinvent the wheel, but to take advantage of already robust operations in SQL Server. One of these decisions was to put SQL rollup jobs into practice for FVP. The SQL rollup job is responsible for keeping only the current, valuable data while providing an average for historical reference. Using the SQL rollup process lowers the latency and overhead of FVP having to implement the averaging operations itself. It also means data stored in SQL is never moved or massaged outside the context of SQL, which provides security and performance benefits to FVP as an acceleration platform.

Since part of the SQL Server’s responsibility is to store FVP performance data, it’s important to only store as much data as is relevant and useful. Currently the FVP management server requests 20-second performance samples for all FVP-clustered VMs on each enabled host. This is run using multiple threads so that multiple CPU cores can be utilized for efficiency. Over a 24-hour period a large amount of data can accumulate, so FVP has a purging schedule that runs every hour to purge all 20-second samples older than 24 hours. This only happens after a SQL rollup has completed, averaging the 20-second samples into minute and hour intervals.

Every minute, the three 20-second samples are averaged. At the one-hour mark a SQL rollup job runs, and at completion FVP purges all 20-second samples older than 24 hours. To view the 20-second samples before the rollup, look at performance statistics for time ranges of one hour or less in the FVP performance UI. After the one-hour interval, the 20-second samples are rolled up by the first SQL rollup and then permanently removed by the purging operation 24 hours later.
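To make this sampling and retention cycle concrete, here is a minimal Python sketch of the logic described above; the data structures and function names are purely illustrative and are not FVP’s actual schema or implementation:

```python
from datetime import datetime, timedelta

def rollup_minute(samples):
    """Average the three 20-second samples that fall within each whole minute."""
    buckets = {}
    for ts, vm, value in samples:
        key = (vm, ts.replace(second=0, microsecond=0))
        buckets.setdefault(key, []).append(value)
    return [(minute, vm, sum(vals) / len(vals))
            for (vm, minute), vals in buckets.items()]

def purge_raw_samples(samples, now):
    """Hourly purge: drop raw 20-second samples older than 24 hours."""
    cutoff = now - timedelta(hours=24)
    return [(ts, vm, value) for ts, vm, value in samples if ts >= cutoff]

# Example: three 20-second samples for one VM get averaged per minute bucket.
now = datetime(2014, 6, 1, 12, 0, 0)
samples = [(now - timedelta(seconds=20 * i), "vm-01", 100 + i) for i in range(3)]
print(rollup_minute(samples))   # one averaged value per (vm, minute) bucket
```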

To determine proper SQL capacity for this amount of data, you need to know how many VMs you plan to accelerate with FVP and the potential for continued expansion. Currently over 80% of the “prnx” database is used to store performance-related metrics, and this 80% also makes up the majority of data churn within the platform. This means sizing for that 80% will provide ample room for FVP’s operations.

The PernixData Management Server inserts one row (record) into the database table every 20 seconds for each VM. This works out to approximately 1.6 KB of data per VM every 20 seconds, a figure that also accounts for the index size for each VM referenced.


If you are considering SQL Express with its 10 GB limitation, knowing the effective data added each day becomes an important piece of information, as this design decision could hamper long-term storage or the acceleration of a large number of VMs. Whether SQL Express is chosen or not, it’s a best practice to either choose the “Simple” recovery model or have regularly scheduled SQL backups so that log truncation can limit the continued growth of the SQL log.

Knowing the approximate data added to the database each day for a given number of VMs tells you when you would reach the 10 GB capacity of SQL Express. For example, with 100 VMs accelerated with FVP it will take about 400 days, but with 1,000 VMs the limit can be reached in as little as 40 days!
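As a rough illustration, the sketch below extrapolates from the figures quoted above (about 1.6 KB per VM per 20-second sample, and roughly 400 days to the 10 GB SQL Express limit at 100 VMs); treat it as a back-of-the-envelope estimate, not an official sizing tool:

```python
SAMPLES_PER_DAY = 3 * 60 * 24                      # one sample every 20 seconds
RAW_KB_PER_VM_PER_DAY = 1.6 * SAMPLES_PER_DAY      # raw ingest before rollup/purge

def days_until_express_limit(vm_count, reference_vms=100, reference_days=400):
    """Extrapolate linearly from the '100 VMs ~ 400 days' data point above."""
    return reference_days * reference_vms / vm_count

print(round(RAW_KB_PER_VM_PER_DAY / 1024, 1))      # ~6.8 MB of raw samples per VM per day
print(round(days_until_express_limit(100)))        # ~400 days
print(round(days_until_express_limit(1000)))       # ~40 days
```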

To understand how our UI displays averages based on the samples and the purging process, below is a chart that illustrates the number of samples taken and the average shown for each time range displayed. Keep in mind that whether you choose a custom time range or one of the predefined time ranges in the FVP UI, the samples and averages are the same as indicated in the chart below.

As you can see, it’s important not only to understand the metrics you are referencing, but also to design appropriately for database sizing and retention, taking into account PernixData FVP’s growth within your virtual environment.

FVP Linked Clone Optimizations Part 2

In part 1 of this series, I talked about the replica disk optimizations that FVP provides for your linked clone environment. In part 2 the focus will be on the different use cases for persistent and non-persistent disks and how they relate to the acceleration that FVP can provide to your VDI environment.

I often hear confusing remarks about what some may call a persistent desktop and a non-persistent desktop. I have found that this terminology is sometimes based on confusion between a linked clone and a full clone. It also makes a difference which criteria one bases their understanding of a non-persistent or persistent desktop on. For example, if you look at linked clones, you will notice that several disks are non-persistent or persistent depending on your design decisions. If one looks only at a dedicated linked clone with Windows profile persistence, some may describe this linked clone as a persistent desktop.

The interesting thing is that Horizon View doesn’t refer to a linked clone in this context. The only time Horizon View refers to a persistent or non-persistent desktop is in the context of refreshing a cloned desktop. In other words, simply having a linked clone doesn’t make yours a non-persistent or even persistent VDI environment.

I also think some of the confusion revolves around the use of dedicated vs. floating assignment of linked clones. The dedicated configuration assigns each user a dedicated desktop, so if the user has multiple sessions, they will always reconnect to the same desktop by default. In a floating configuration the user is assigned to a pool of desktops, which means they could log in to a different desktop with each new session. The only way to keep Windows profile persistence in the floating scenario is to use a persona management solution outside the default configuration of View Composer.

So when an admin decides to use a dedicated linked clone, View Composer gives the option to redirect the Windows profile to a persistent disk. This provides user personalization persistence during refresh, recompose, and rebalance operations. This is an optional setting, as seen in the screenshot; the default disk size is 2 GB.


When you choose a floating assignment for linked clones, View Composer does not provide an option for a persistent disk, which means no user personalization will be retained after a refresh, recompose, or rebalance operation. If you choose not to redirect the Windows profile, the data will be stored on the non-persistent delta disk. In either case, both read and write I/O will be accelerated with FVP. However, there will be a longer warm-up time for read acceleration when using the non-persistent delta disk for user profiles, depending on how frequent the refresh, recompose, and rebalance cycles are.

Whether you select floating or dedicated assignments, and whether or not you choose some level of Windows profile persistence, FVP will automatically accelerate reads and writes for all disks that are part of the desktop VM. In the past, choosing when to schedule a recompose or rebalance operation carried varied importance. Now, with FVP offloading I/O from the storage array, a refresh, recompose, or rebalance operation has some breathing room to finish without impacting the production environment.

Delta Disk:
The delta disk is probably where most desktop I/O will be seen from a linked clone. The delta disk becomes active as soon as the desktop is booted from the replica disk. Any desktop changes are stored on the delta disk, so depending on the user and the desktop use case, the I/O profile could vary drastically. This does not impact FVP negatively, as FVP keeps context on which disk is more active and thus provides the resource intelligence for acceleration no matter the use case.

Disposable Disk:
A default configuration will have a separate non-persistent disposable disk, 4 GB in size. Having this as a separate disk is recommended since it slows the growth of the delta disk between refresh, rebalance, and power-off tasks. This disk contains temp files and the paging file, so FVP can help normalize OS operations by accelerating reads and writes associated with the disposable disk. If you choose not to redirect, this data will reside on the delta disk instead. Neither option impacts FVP negatively; however, it’s a best practice to control the growth of the delta disk between refreshes, and separating out the non-persistent disk helps alleviate bloated delta disks.

Internal Disk:
An internal disk is also created with each cloned desktop. This disk is Thick Provision Lazy Zeroed, with a default size of 20 MB. It stores Sysprep, QuickPrep, and AD account information, so very little I/O will come from this disk. Keep in mind that this disk is not visible in Windows, but it still has a SCSI address, so FVP will still recognize the disk and accelerate any I/O that comes from it. This is another advantage of being a kernel module: FVP recognizes disks not mounted to the Windows OS and still does its magic of acceleration.

As you can see, no matter the configuration, FVP will automatically capture all I/O from all disks that are part of a given desktop clone. Depending on the configuration, a desktop clone can have several disks, and knowing when or which disks are active or in need of resources at any given point is not an easy task. This is exactly why PernixData developed FVP: a solution that takes the guesswork out of each disk’s I/O profile. The only thing you are tasked with is deciding whether to accelerate the desktop or not! Talk about seamless and transparent; it doesn’t get any better than that!

FVP Linked Clone Optimizations Part 1

PernixData has some of the best minds in the industry working to provide a seamless experience for all types of workloads. Operational simplicity in a platform such as ours doesn’t mean there is a lack of complex functionality. Au contraire, FVP is truly a multiplex system that can dynamically adjust and enhance numerous workload attributes. One popular example of this is the acceleration and optimization of Horizon View’s Linked Clone technology.

The beauty of our linked clone optimizations is that they are completely seamless. This means you will not have to make any configuration changes to your existing VDI environment nor modify any FVP settings. No changes are needed to the Horizon View Manager or other Horizon View products (e.g. ThinApp, Persona Management, Composer, or client). It also doesn’t matter whether you are using a persistent or non-persistent model for your linked clones; FVP will accelerate and optimize your entire virtual desktop environment.

It’s common to see a virtual desktop with many disks, which may include an operating system disk, a disk for user/profile data, and a disk for temp data. No matter how many disks are attached to a virtual desktop, it doesn’t affect how FVP accelerates I/O from the virtual machine. FVP’s intelligence automatically determines where I/O is coming from and which disk needs additional resources for acceleration. The admin only decides which desktops (Linked or Full Clones) are part of the FVP cluster. This could comprise a mix of persistent and non-persistent disks, as seen in the diagram. FVP will automatically accelerate all I/O coming from any persistent or non-persistent disk that is part of the desktop clone.

As seen in the above diagram, a linked clone environment can comprise several disks depending on the configuration. This, IMHO, is by far the most confusing part. When you create a linked clone for the first time, you realize you have all these different disks attached to your clone and may have no idea what they are for or why their capacities differ. Why are some persistent and some non-persistent, and which configuration works best for FVP acceleration? I will save these topics and more for part 2. The replica (base) disk, however, is what I’m going to focus on in this post.

In a Horizon View environment a full clone and a linked clone both have an OS disk, except that a linked clone will use a non-persistent delta disk for desktop changes and a replica (base) disk from the parent snapshot for image persistence. This delta disk will be active with any desktop operations, which makes it a prime candidate for acceleration. 
In addition to accelerating reads and writes for cloned desktops, FVP will automatically identify the replica disks in a linked clone pool and apply optimizations to leverage a single copy of data across all clones mapped to said replica disk. 

Note: Citrix's XenDesktop technology works essentially the same way with FVP; instead of a replica disk, it's called a master image.

As seen in the screenshot below, FVP will automatically place the replica disk (linked clone base disk) on an accelerated resource when using linked clones. In addition, FVP only puts the active blocks of the desktop clone on the accelerated resource, which lowers the capacity required for the replica disk on the accelerated resource. After the first desktops in the pool boot, all ensuing clones take advantage of reading the common blocks from the replica disk on the accelerated resource. If any blocks are requested that are not part of the cached replica disk, FVP fetches only the requested block and adds it to the already created replica disk. The same is true for any newly created linked clone pools: a new replica disk is added to the acceleration resource for the new pool. This is visible under FVP’s usage tab as a combined total of all active replica disks for a given acceleration resource. As you can imagine, adding only the active blocks of the replica disk provides a huge advantage when using memory as an acceleration resource; Windows 7 boot times of 5-7 seconds are not uncommon in this scenario.

FVP maintains these optimizations during clustered hypervisor operations such as vMotion. This means that if desktop clones are migrated from one host to another, they are able to leverage FVP’s Remote Access clustering to read blocks from the replica disk on the host they migrated from. This only happens on a temporary basis, as FVP will automatically create a new active-block replica disk on the new primary host’s acceleration resource. Through FVP’s Remote Access clustering and any desktop clone reboots, a new replica disk for the desktop clones on the new host is created and kept up to date for local read access efficiency.
If the desktop clones are in Write Back mode, write acceleration continues automatically once the desktop clones migrate successfully to a new host, irrespective of the replica disk optimizations.

The diagram below outlines the process where a replica disk is first created on a primary host and then the first desktop clones migrate to a new host. This process of creating a new replica disk on the new host happens only one time per linked clone pool; all subsequent cloned desktops matched to the designated replica disk gain the benefit during any future migrations.

When the desktop clones are booted, (1) the clones request blocks from the assigned replica disk in the pool. FVP intercepts the requested blocks, which are (2) copied into the acceleration resource that has been assigned to the FVP cluster. All future desktop clone boot processes read those blocks from the acceleration resource instead of traversing to the assigned datastore where the full replica disk resides. If any changes are made to the original replica disk through a recompose or rebalance operation, this process starts over again for the linked clones. (3) When the desktop clones are migrated to a new host through DRS, HA, or a manual vMotion operation, (4) FVP sends read requests to the host the desktop clones migrated from. (5) The blocks are copied back to the new host’s acceleration resource, (6) so that any future requests are acknowledged from the new local host. A reboot of any linked clone during this time will also copy all common blocks into the new local host’s acceleration resource.
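For readers who like to see the idea in code, here is a heavily simplified Python sketch of the read path described above; the class and names are hypothetical and only illustrate the “fetch on first read, serve locally afterwards” behavior, not FVP’s internals:

```python
class ReplicaBlockCache:
    def __init__(self, datastore):
        self.datastore = datastore   # full replica disk on the backing datastore
        self.flash = {}              # active blocks promoted to the acceleration resource

    def read(self, block_id):
        if block_id in self.flash:               # common block already promoted: local hit
            return self.flash[block_id]
        data = self.datastore[block_id]          # miss: fetch from the full replica disk
        self.flash[block_id] = data              # keep only the requested (active) block
        return data

datastore = {block: f"data-{block}" for block in range(8)}
cache = ReplicaBlockCache(datastore)
cache.read(3)   # first clone boot: fetched from the datastore, then promoted to flash
cache.read(3)   # subsequent reads of the same block: served from the acceleration resource
```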

As you can see from a VDI perspective FVP can truly make a difference in your datacenter. Gone are the days when one had to architect a separate environment to run virtual desktops. FVP can break down the technology silos and allow the virtual admin to truly scale performance on demand without the worry of storage performance constraints.
 
More Information on using PernixData FVP with your VDI Infrastructure. 
 

 

FVP Tip: Change Storage Device Display Name

As you might know, you have the ability to change a storage device display name on a particular ESXi host. This can be useful when you have several different devices installed on a given host and/or different RAID controllers backing the devices.

When you want to test several different flash device models with different controllers and configurations with PernixData FVP, it can become difficult to remember which identifier belongs to which device.

My recommendation is to add the name of the controller as an extension to a friendlier device name. This way you can monitor performance by SSD and its assigned controller. An example could be “Intel 520 – H310”, where the SSD model is represented and the controller is identified as an H310 on a Dell host.

 

 

vSphere Web Client Steps:

  1. Browse to the host in the vSphere Web Client navigator. Click the Manage tab and click Storage.
  2. Select the device to rename and click Rename. 
  3. Change the device name to a name that reflects your needs.

 

Now that you have renamed your flash device, you will see the changed device names show up in the FVP plugin UI.

PernixData FVP Hit Rate Explained

I assume most of you know that PernixData FVP provides a clustered solution to accelerate read and write I/O. In light of this I have received several questions about what the “Hit Rate” signifies in our UI. Since we commit every write to server-side flash, you would obviously have a 100% write hit rate. This is one reason why I refrain from calling our software a write caching solution!

However, the hit rate graph in PernixData FVP, as seen below, references only the read hit rate. In other words, every time we can serve a block of data from the server-side flash device, it’s deemed a hit. If a read request cannot be acknowledged from the local flash device, it needs to be retrieved from the storage array, and that request is not registered in the hit rate graph. We do, however, copy that block into flash, so the next time it is requested it is seen as a hit.
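A trivial way to picture the metric (illustrative only, not how FVP computes it internally): only reads served from the local flash device count as hits, and writes are not part of this graph at all.

```python
def read_hit_rate(read_hits, read_misses):
    """Percentage of read requests acknowledged from the local flash device."""
    total_reads = read_hits + read_misses
    return 100.0 * read_hits / total_reads if total_reads else 0.0

print(read_hit_rate(read_hits=750, read_misses=250))  # 75.0 -> 75% of reads came from flash
```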

Keep in mind that a low hit rate doesn’t necessarily mean you are not getting a performance increase. For example, if you have a workload in “Write Back” mode and a low hit rate, this could mean the workload has a heavy write I/O profile. So even with a low hit rate, all writes are still being accelerated because they are served from the local flash device.

Give Me Back My Capacity

Last week I was preaching the PernixData message in Tampa, Florida! While there I received a question about a benefit of PernixData that I believe is often overlooked in virtualized environments.

The question related to how PernixData FVP can add storage capacity to your already deployed storage infrastructure. There are actually several ways FVP can give you more capacity for your workloads, but today I will focus on two examples. To understand how FVP makes this possible, it’s important to understand how writes are accelerated. FVP intercepts all writes from a given workload and commits each write to local server-side flash for fast acknowledgement. This takes a gigantic load off the storage array, since all write I/O is committed first to server-side flash. It’s this new performance design that allows you to regain some of the storage capacity you have lost to I/O performance architectures that are just too far from compute!

If you are “short stroking” your drives, there is no longer a need to waste that space; use FVP to get even better performance without the huge costs associated with short stroking. Another example is when you have chosen RAID 10 (also known as RAID 1+0) to increase performance through block striping and redundancy through block mirroring. Why not get up to 50% of your capacity back by moving to RAID 6 or RAID 5 for redundancy and using FVP for the performance tier? As you can see, this opens up a lot of possibilities and allows you to save money on disk and gain additional capacity for future growth.
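A quick back-of-the-envelope comparison, assuming equal-sized drives and ignoring hot spares and formatting overhead, shows where that capacity comes back from:

```python
def usable_tb(drive_count, drive_tb, raid_level):
    """Approximate usable capacity for a few common RAID levels."""
    if raid_level == "raid10":
        return drive_count / 2 * drive_tb     # mirrored pairs: 50% usable
    if raid_level == "raid5":
        return (drive_count - 1) * drive_tb   # one drive's worth of parity
    if raid_level == "raid6":
        return (drive_count - 2) * drive_tb   # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {raid_level}")

drives, size_tb = 12, 2                        # e.g. twelve 2 TB drives
print(usable_tb(drives, size_tb, "raid10"))    # 12.0 TB usable
print(usable_tb(drives, size_tb, "raid6"))     # 20 TB usable
```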

Try this RAID calculator and see how much capacity you can get back when using an alternate RAID option with FVP! 

PernixData FVP & StorMagic SvSAN Use Case

In continuing to look at alternate ways to provide a good ROI capacity layer with PernixData FVP, Frank Denneman and I will be doing a couple posts on some unique designs with FVP. As I demonstrated in a previous post, FVP accelerates the reads and writes for virtual workloads, while a virtual storage appliance (VSA) can be a great technology to provide the primary storage and data services for virtual workloads.

With this post, I will focus on StorMagic and their iSCSI-based VSA product named SvSAN. A couple of interesting notes about SvSAN might actually surprise you! StorMagic claims to have one of the largest deployments of any VSA in the market; in 2013 alone they saw over 800 percent growth. They are also currently the only VSA that can start with two nodes without needing a local third host for a quorum response during host isolation situations. (More on this later.)

A few interesting features:

- vCenter plugin to manage all VSAs from a central point
- Multi-Site Support (ROBO/Edge: remote office / branch office / enterprise edge)
- Active/Active Mirroring
- Unlimited Storage & Nodes per Cluster

 

I think SvSAN and FVP combined can provide a great ROI for many environments. In order to demonstrate this, we need to go a little deeper to where each of these technologies fit into the virtualized stack.

Architecture:

SvSAN is deployed on a per-host basis as a VSA. PernixData FVP, however, is deployed as a kernel module extension to ESXi on each host, which means the two architectures do not conflict from an I/O path standpoint. The FVP module extension is installed on every host in the vSphere cluster, while SvSAN only needs to be installed on the hosts that have local storage. Hosts that don’t have local storage can still participate in FVP’s acceleration tier and also access SvSAN’s shared local storage presented from the other hosts via iSCSI.

Once both products have been fully deployed in the environment it’s important to understand how the I/O is passed from FVP to SvSAN. I have drawn a simple diagram to illustrate this process. 

You will notice that the only real difference from a traditional storage array design with FVP is that you are now able to use local disks on the host. SvSAN presents itself as iSCSI, so the I/O passes through the local VSA to reach the local disk. Since virtual appliances have some overhead in processing I/O, it becomes advantageous in such a design to include PernixData FVP as the acceleration tier. This means only unreferenced blocks need to be retrieved from SvSAN storage, and all other active blocks are acknowledged from FVP’s local flash device. This takes a huge I/O load off of SvSAN and also provides lower latency to the application.

Fault Tolerance:

When any product is in the data path it becomes very important to provide fault tolerance and high availability for given workloads. SvSAN provides the data fault tolerance and high availability through its creation of a datastore mirror between two SvSAN VSA hosts.

This means that if a host goes down or the local storage fails, a VM can still continue operating, because SvSAN will automatically switch the local iSCSI connection to the mirrored host where there is consistent duplicated data.

The mirroring is done synchronously and guarantees data acknowledgement on both sides of the mirror. I think the really cool part is that SvSAN can access either side of the mirror at any time without disrupting operations, even during FVP performance acceleration! The fault tolerance built into FVP is designed to protect writes that have been committed and acknowledged on local/remote flash but haven’t yet been destaged to the SvSAN layer. Once FVP has destaged the required writes to SvSAN, SvSAN’s mirrored datastore protection becomes relevant to the design.

Centralized Management in an Edge Environment:

As noted before, SvSAN only requires two hosts for a quorum during host isolation situations where hosts or local storage are lost. This is accomplished through a separate service (NSH, the Neutral Storage Host) that can be installed in a central location on either a physical or virtual machine. It’s this centralization of the quorum service that can alleviate additional localized costs and management overhead. As with FVP, SvSAN can be managed from a vCenter plugin for centralized management. This means one can manage hundreds of enterprise edge sites for primary storage, while also providing centralized FVP management for each performance cluster using SvSAN. This is illustrated in the diagram below.

Low acquisition costs and simple management are why VSA usage has been popular in ROBO-type environments. This can be great for primary storage at the enterprise edge, but maybe not so great for those applications needing increased localized performance. The options for achieving a high-performing, cost-effective storage solution for a virtualized remote environment have been limited in the past. Not until PernixData FVP was there a solution where you can use inexpensive primary storage, like a VSA, and also have a read/write performance tier that provides extremely low latency to applications. The amazing part is that all of this is accomplished through software, not another physical box.

This post was just meant to be an introduction and high-level look at using StorMagic’s VSA technology alongside PernixData FVP. I hope to go much deeper technically into how these technologies work together in future posts.

This is a simple diagram showing centralized management with FVP and SvSAN in a single 2-host edge site. 

Server-Side Flash Presentation

At VMworld 2013 in San Francisco, I recorded a session at the vBrownBag Tech Talks. There were some technical difficulties during the process and so I thought I would re-record the same talk so that it would be easier to hear and see the presentation. 

This presentation is intended to illustrate why the storage fabric cannot be overlooked when designing for storage performance, and why server-side flash with PernixData completely solves I/O bottlenecks within the virtualized datacenter.

I welcome your questions or feedback. 

 

 

Features of an Enterprise SSD

When looking for a flash device to use for PernixData FVP or other enterprise use cases, performance and reliability are important aspects to factor in. Just because a drive is spec’d with high IOPS and low latency numbers doesn’t mean it will keep up at that rate over time with enterprise workloads.

I would guess that most of you would prefer a consistently performing, reliable flash device over higher IOPS or lower latency. This is one reason why I like the Intel S3700 SSD. This drive delivers repeatable results and withstands heavy workloads over time. I’m not saying this drive or others like it are slow; they are still very fast, but they favor consistency and reliability by design.

  

A little over a year ago Intel introduced a technology that enhances the reliability of MLC flash, which it calls HET (High Endurance Technology). This is basically a combination of firmware, controller, and high-cycling NAND enhancements for endurance and performance, with optimizations in error avoidance techniques and write amplification reduction algorithms. The result is new enterprise SSDs that are inexpensive and deliver good performance with predictable behavior. Keep in mind, though, that not all Intel drives have HET; this is what separates consumer from enterprise-class drives.

This is one reason why Intel can claim “10 full drive writes per day over the 5-year life of the drive.” You will also notice that other manufacturers/vendors OEM and incorporate Intel’s 25nm MLC HET NAND into their products. The incorporation of HET sets Intel apart from the rest, but this doesn’t mean there are no others to choose from. It’s the combination of price, reliability, performance, and customer satisfaction that currently leads many to the S3700.

The other important aspect to consider when looking for an enterprise SSD is read/write performance consistency. Some drives are architected just for read performance consistency. So if you have workloads that are balanced between reads and writes, or are write heavy, you want a drive that provides consistency for both reads and writes.

As an example, the Intel S3500 gives better read performance consistency while the Intel S3700 gives consistency for both read and write. (Keep in mind that the Intel S3500 doesn't use HET)

 

Intel S3500 

 

Intel S3700

 

I recommend taking a look at Frank Denneman's current blog series, which goes into some other aspects of flash performance with FVP.

 


The First Flash Hypervisor

It's now official: the world has its first Flash Hypervisor. PernixData has created a transformative technology that will have a resounding effect on the future datacenter.

PernixData FVP 1.0 ships today, the first release of what will become omnipresent in the virtualized world. The growth of virtualization has created a need to accelerate today's applications and allow businesses to continue to take advantage of virtualization. There is only one complete solution on the market that addresses this need and takes your datacenter to the next level!

It's the Flash Hypervisor layer in the virtualization stack that will become ubiquitous, because of its ability to scale, accelerate, and manage the world's modern workloads. Check out the details in our latest datasheet.

So, join the revolution and download the 60-day trial today!

Capacity & Performance = VSA + FVP

A couple weeks ago Frank Denneman did a great post on why virtual appliances used for data path acceleration are not to be desired if you are trying to achieve low latency in your environment. Frank outlined why the use of a hypervisor kernel module provides a preferred way to accelerate I/O. I highly recommend you read his post before you go any further.

Even though virtual appliances are not the best at performance, there are still many reasons why you might want to deploy a VSA (Virtual Storage Appliance). For one, it typically costs less and is easier to manage, which is why you most likely see VSAs in smaller or test/dev environments. The ability to aggregate local storage into a shared pool is another reason to use a VSA.

I recently did some testing with a well-known Virtual Storage Appliance along with PernixData’s Flash Virtualization Platform (FVP). I was amazed to find that this integration was truly a great way to implement storage capacity and performance. The VSA did what it does best; aggregate local storage into a capacity pool that can be easily managed, while FVP provided the performance required for the workloads.

Here is a simple diagram showing this use case… 

 

 

This use case provides several options to accelerate I/O. For example, if you choose a “Write Through” policy, all writes from a given workload are acknowledged from the VSA storage pool, while FVP accelerates the read I/O. If you choose a “Write Back” policy, writes are accelerated from the local flash devices in the cluster and then destaged appropriately to the VSA storage pool. In addition, the workloads you choose to accelerate could be VMs located on the VSA, or even the VSA itself! As for what to choose for your environment, I will have a separate post outlining which scenarios work best for a given FVP design choice.
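To illustrate the difference between the two policies, here is a simplified Python sketch; the class and method names are hypothetical and only model the acknowledgement and destaging order described above, not FVP’s actual implementation:

```python
import queue

class AcceleratedDatastore:
    def __init__(self, policy):
        self.policy = policy            # "write-through" or "write-back"
        self.flash = {}                 # local server-side acceleration resource
        self.pool = {}                  # VSA-backed storage pool
        self.destage_q = queue.Queue()  # writes waiting to be destaged

    def write(self, block, data):
        self.flash[block] = data                  # future reads of this block hit flash
        if self.policy == "write-through":
            self.pool[block] = data               # commit to the pool before acknowledging
        else:
            self.destage_q.put((block, data))     # acknowledge now, destage later
        return "ack"

    def destage(self):
        while not self.destage_q.empty():         # background destaging to the VSA pool
            block, data = self.destage_q.get()
            self.pool[block] = data

ds = AcceleratedDatastore("write-back")
ds.write("blk-1", b"payload")   # acknowledged from local flash
ds.destage()                    # later destaged to the VSA storage pool
```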

This use case provides low latency and increased IOPS not typically seen with just a virtual appliance. So depending on your objective and environment, this could be the winning ticket for storage capacity and performance. Stay tuned for more ways to take advantage of FVP!