The Server-Side Storage Intelligent System Revealed!

PernixData yesterday introduced a revolutionary step forward in storage performance with the release of PernixData FVP 2.0. Several innovative features were revealed, and a technology first was dropped on the industry. Frank Denneman has already started a great series on some of the new features. So as not to let him have all the fun, I will also be covering some aspects of this new version!

The first big reveal was FVP transforming itself into an all-encompassing platform for storage optimization. Adding NFS and DAS to the already supported iSCSI, FC, and FCoE list completes all available connectivity options for VMware environments.

NFS support is obviously a welcome treat for many. It’s the support of local disk that might actually surprise some. I think optimizing DAS environments will provide some unique use cases for customers. (Future post coming.) However, keep in mind that supporting DAS doesn’t void the use cases for VSA (virtual storage appliance) software. PernixData only accelerates the reads and writes, so if you require data services, you may still need to look at a VSA-type solution for your underlying local data-at-rest tier.

The biggest news in my opinion, and what really dropped the mic on the industry, was the reveal of the first-ever distributed fault-tolerant solution utilizing server memory for read/write I/O acceleration. Yep, you heard it right: accelerating those very important writes without the potential of data loss on volatile server memory is a gigantic leap forward. Look for more details around DFTM (Distributed Fault Tolerant Memory) in the coming weeks!

I’m excited for the future and look forward to telling you more about these new advancements!


PernixData FVP & StorMagic SvSAN Use Case

In continuing to look at alternative ways to provide a good-ROI capacity layer with PernixData FVP, Frank Denneman and I will be doing a couple of posts on some unique designs with FVP. As I demonstrated in a previous post, FVP accelerates the reads and writes for virtual workloads, while a virtual storage appliance (VSA) can be a great technology to provide the primary storage and data services for those workloads.

With this post, I will focus on StorMagic and their iSCSI-based VSA product, SvSAN. A couple of notes about SvSAN might actually surprise you. StorMagic claims to have one of the largest deployments of any VSA in the market; in 2013 alone they saw over 800 percent growth. They are also currently the only VSA that can start with two nodes without needing a local third host for a quorum response during host isolation situations. (More on this later.)

A few interesting features:

- vCenter plugin to manage all VSAs from a central point
- Multi-site support (ROBO/edge: remote office, branch office, enterprise edge)
- Active/active mirroring
- Unlimited storage and nodes per cluster

I think SvSAN and FVP combined can provide a great ROI for many environments. In order to demonstrate this, we need to go a little deeper into where each of these technologies fits into the virtualized stack.

Architecture:

SvSAN is deployed on a per-host basis as a VSA. PernixData FVP, however, is deployed as a kernel module extension to ESXi on each host. This means the two architectures do not conflict from an I/O path standpoint. The FVP module extension is installed on every host in the vSphere cluster, while SvSAN only needs to be installed on the hosts that have local storage. Hosts that don’t have local storage can still participate in FVP’s acceleration tier and also access SvSAN’s shared local storage presented from the other hosts via iSCSI.
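To make that layout concrete, here is a tiny Python sketch of a hypothetical mixed cluster. The hostnames and fields are made up purely for illustration; this is not how either product is actually configured.

```python
# Sketch of the deployment layout described above, assuming a hypothetical
# mixed cluster: the FVP host extension goes on every host, while an SvSAN
# VSA runs only on the hosts that contribute local disk. Hostnames and
# fields are invented for illustration only.

hosts = {
    "esx01": {"has_local_disks": True,  "fvp_module": True},
    "esx02": {"has_local_disks": True,  "fvp_module": True},
    "esx03": {"has_local_disks": False, "fvp_module": True},  # diskless host
}

# Every host with the FVP module joins the acceleration tier; diskless hosts
# simply consume the SvSAN datastores presented over iSCSI by the VSA hosts.
acceleration_tier = [name for name, h in hosts.items() if h["fvp_module"]]
svsan_vsa_hosts = [name for name, h in hosts.items() if h["has_local_disks"]]

print("FVP acceleration tier:", acceleration_tier)
print("SvSAN VSA (iSCSI target) hosts:", svsan_vsa_hosts)
```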

Once both products have been fully deployed in the environment, it’s important to understand how the I/O is passed from FVP to SvSAN. I have drawn a simple diagram to illustrate this process.

You will notice that really the only difference from a traditional storage array design with FVP is that you are now able to use local disks on the host. SvSAN presents itself via iSCSI, so the I/O passes through the local VSA to reach the local disk. Since virtual appliances have some overhead in processing I/O, it becomes advantageous in such a design to include PernixData FVP as the acceleration tier. This means that only unreferenced blocks need to be retrieved from the SvSAN storage, while all other active blocks are served from FVP’s local flash device. This takes a huge I/O load off of SvSAN and also provides lower latency to the application.
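As a rough illustration of that hit/miss flow, here is a minimal Python sketch of the read path under the assumptions above. The class and method names are mine, not PernixData or StorMagic code.

```python
# Conceptual model of the read path described above: active (cached) blocks
# are served from the host-local flash tier, while unreferenced blocks fall
# through to the SvSAN-presented iSCSI datastore. All names are illustrative.

class SvSANDatastore:
    """Stand-in for the VSA-presented iSCSI datastore (capacity tier)."""

    def __init__(self):
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba, b"\x00" * 4096)


class FlashReadTier:
    """Stand-in for the host-local flash acceleration tier."""

    def __init__(self, backing):
        self.backing = backing
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, lba):
        if lba in self.cache:        # active block: served from local flash
            self.hits += 1
            return self.cache[lba]
        self.misses += 1             # unreferenced block: fetched from SvSAN
        data = self.backing.read(lba)
        self.cache[lba] = data       # promote into the flash tier
        return data


if __name__ == "__main__":
    tier = FlashReadTier(SvSANDatastore())
    for lba in [1, 2, 1, 1, 3, 2]:   # repeated reads never touch the VSA again
        tier.read(lba)
    print(f"flash hits={tier.hits}, reads sent to SvSAN={tier.misses}")
```

In this toy run only three of the six reads ever reach the VSA, which is the load-reduction effect described above.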

Fault Tolerance:

When any product sits in the data path, it becomes very important to provide fault tolerance and high availability for the given workloads. SvSAN provides data fault tolerance and high availability by creating a datastore mirror between two SvSAN VSA hosts.

This means that if a host goes down or the local storage fails, a VM can still continue operations, because SvSAN will automatically switch the local iSCSI connection to the mirrored host, where there is a consistent duplicate of the data.

The mirroring is done synchronously and guarantees data acknowledgement on both sides of the mirror. I think the really cool part is that SvSAN can access either side of the mirror at any time without disrupting operations, even during FVP performance acceleration! The fault tolerance built into FVP is designed to protect writes that have been committed and acknowledged on local/remote flash but haven’t yet been destaged to the SvSAN layer. Once FVP has destaged the required writes to SvSAN, SvSAN’s mirrored datastore protection becomes relevant to the design.
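To show how the two protection layers hand off, here is a simplified Python model assuming one FVP write-back replica on a peer host’s flash and a two-way SvSAN mirror. Everything here is illustrative and not vendor code.

```python
# Simplified model of the two protection layers described above, assuming one
# FVP write-back replica on a peer host's flash and a two-way SvSAN mirror.

class MirroredDatastore:
    """SvSAN-style synchronous mirror: a write completes on both sides."""

    def __init__(self):
        self.side_a = {}
        self.side_b = {}

    def write(self, lba, data):
        self.side_a[lba] = data
        self.side_b[lba] = data      # acknowledged only once both sides have it


class WriteBackTier:
    """FVP-style write back: ack after local + peer flash, destage later."""

    def __init__(self, datastore):
        self.datastore = datastore
        self.local_flash = {}
        self.peer_flash = {}         # replica on another host's flash device
        self.pending = []            # committed writes not yet destaged

    def write(self, lba, data):
        self.local_flash[lba] = data
        self.peer_flash[lba] = data  # FVP's fault tolerance covers this window
        self.pending.append((lba, data))
        return "ack"                 # the VM sees a low-latency acknowledgement

    def destage(self):
        while self.pending:
            lba, data = self.pending.pop(0)
            self.datastore.write(lba, data)   # SvSAN's mirror takes over here


if __name__ == "__main__":
    tier = WriteBackTier(MirroredDatastore())
    tier.write(10, b"payload")
    print("writes awaiting destage:", len(tier.pending))
    tier.destage()
    print("writes awaiting destage:", len(tier.pending))
```

The design point is the handoff: until `destage()` runs, the replica on peer flash is what protects the write; afterwards, the mirrored datastore does.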

Centralized Management in an Edge Environment:

As noted before, SvSAN requires only two local hosts to maintain quorum during host isolation situations, where hosts or local storage are lost. This is accomplished through a separate service (NSH, the Neutral Storage Host) that can be installed in a central location on either a physical or a virtual machine. It’s this centralization of a quorum service that can alleviate additional localized costs and management overhead. As with FVP, SvSAN can be managed from a vCenter plugin for centralized management. This means one can manage hundreds of enterprise edge sites for primary storage, while also providing centralized FVP management for each performance cluster using SvSAN. This is illustrated in the diagram below.
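Before getting to the diagram, here is a stripped-down majority-vote sketch in Python that shows why a remote witness is enough for a two-node mirror, with the NSH standing in as the witness. This is only a conceptual model, not StorMagic’s actual quorum protocol.

```python
# Stripped-down majority-vote model of a two-node mirror plus a remote witness,
# with the NSH standing in as the witness. This only shows why a third,
# centrally located vote avoids split-brain; it is not StorMagic's protocol.

VOTES = {"node-a", "node-b", "nsh-witness"}

def can_keep_serving(node, reachable):
    """A node keeps serving the mirror only if it can see a strict majority."""
    visible = ({node} | set(reachable)) & VOTES
    return len(visible) > len(VOTES) / 2

if __name__ == "__main__":
    # Node A is cut off from its peer but can still reach the central witness:
    print(can_keep_serving("node-a", ["nsh-witness"]))  # True  -> keeps serving
    # Node B is isolated from both its peer and the witness:
    print(can_keep_serving("node-b", []))               # False -> stands down
```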

Low acquisition costs and simple management are why VSA usage has been popular in ROBO-type environments. This can be great for primary storage at the enterprise edge, but maybe not so great for those applications needing increased localized performance. The options for achieving a high-performing, cost-effective storage solution for a virtualized remote environment have been limited in the past. Not until PernixData FVP was there a solution where you can use inexpensive primary storage, like a VSA, and also have a read/write performance tier that provides extremely low latency to the applications. The amazing part is that all of this is accomplished through software and not another physical box.

This post was meant to be just an introduction and a high-level look at using StorMagic’s VSA technology alongside PernixData FVP. I hope to go much deeper technically into how each of these technologies works together in future posts.

This is a simple diagram showing centralized management with FVP and SvSAN in a single 2-host edge site. 

Capacity & Performance = VSA + FVP

A couple of weeks ago Frank Denneman did a great post on why virtual appliances used for data path acceleration are not desirable if you are trying to achieve low latency in your environment. Frank outlined why a hypervisor kernel module is the preferred way to accelerate I/O. I highly recommend you read his post before you go any further.

Even though virtual appliances are not the best performers, there are still many reasons why you might want to deploy a VSA (virtual storage appliance). For one, a VSA typically costs less and is easier to manage, which is why you most likely see VSAs in smaller or test/dev environments. The ability to aggregate local storage into a shared pool is another desirable aspect of using a VSA.

I recently did some testing with a well-known virtual storage appliance along with PernixData’s Flash Virtualization Platform (FVP). I was amazed to find that this combination was truly a great way to implement storage capacity and performance. The VSA did what it does best, aggregating local storage into a capacity pool that can be easily managed, while FVP provided the performance required for the workloads.

Here is a simple diagram showing this use case… 

This use case provides several options to accelerate I/O. As an example, if you choose a Write Through policy, then all writes from a given workload will be acknowledged from the VSA storage pool, while FVP accelerates the read I/O. However, if you choose a Write Back policy, then writes will be accelerated by the local flash devices in the cluster and then destaged appropriately to the VSA storage pool. In addition, the workload that you choose to accelerate could be VMs located on the VSA, or even the VSA itself! As for what to choose for your environment, I will have a separate post outlining what types of scenarios work best for a given FVP design choice.
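As a quick conceptual contrast of the two policies, here is a short Python sketch under the assumptions above. The names and structure are made up for illustration and do not reflect FVP’s implementation.

```python
# Conceptual contrast of the two policies described above: with Write Through
# the acknowledgement comes from the VSA storage pool, with Write Back it comes
# from the flash tier and the data is destaged later. Illustrative only.

class VSAPool:
    """Stand-in for the VSA-backed capacity pool."""

    def __init__(self):
        self.data = {}

    def write(self, lba, payload):
        self.data[lba] = payload


def write_through(pool, flash, lba, payload):
    flash[lba] = payload        # flash is populated so later reads are accelerated
    pool.write(lba, payload)    # but the ack waits for the VSA storage pool
    return "ack from VSA pool"


def write_back(pool, flash, pending, lba, payload):
    flash[lba] = payload            # ack comes from flash (replicas not shown here)
    pending.append((lba, payload))  # destaged to the VSA pool later
    return "ack from flash; destage happens asynchronously"


if __name__ == "__main__":
    pool, flash, pending = VSAPool(), {}, []
    print(write_through(pool, flash, 1, b"wt-block"))
    print(write_back(pool, flash, pending, 2, b"wb-block"))
```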

This use case provides low latency and increased IOPS not typically seen with just a virtual appliance. So, depending on your objective and environment, this could be the winning ticket for storage capacity and performance. Stay tuned for more ways to take advantage of FVP!