PernixData FVP 3.0 - What's New

I’m pleased to announce that PernixData FVP 3.0 has been released to the masses! This milestone is the result of many long hours from our engineering team and staff.

Some of the highlighted features in this release are the result of a seasoned approach to solving storage performance problems while keeping a keen eye on what the future holds! In this post I will cover some of the new features at a high level, but look for more detailed posts coming soon.

Support for vSphere 6.0
We now have support for vSphere 6.0 with FVP 3.0! If you are running a previous version of FVP, you will need to upgrade to this release in order to gain full vSphere 6 support. If you are in the process of migrating to vSphere 6, FVP also supports a migration path from previous versions of ESXi running FVP. For example, FVP will support mixed environments of vCenter 6.0 with hosts running ESXi 5.1 or newer. However, keep in mind that FVP 3.0 no longer supports vSphere 5.0 as a platform.

New HTML5 based User Interface
FVP 3.0 offers a completely new user experience, introducing a brand new standalone webclient where you can configure and monitor all of your FVP clusters. In addition, the new standalone webclient gives you visibility into other FVP clusters that may reside in a different vCenter or vSphere cluster!!

This doesn’t mean you won’t have visibility in the vSphere webclient; we still have a plugin available that will give you the basic FVP analytics. However, all configurations and detailed analytics will only be available in the new standalone webclient.


Some may ask why we built our own webclient, which I think is a valid question. The truth is that in order to control the full user experience for FVP, we had to build our own while still supporting the vSphere webclient for those quick look-sees. I think you will be pleasantly surprised by how robust and extensible the new standalone webclient is.

New Audit Log

In addition to providing FVP actions and alarms through vCenter tasks/events, FVP 3.0 now has a separate audit log. This is where you can easily see all FVP related actions and alarms for a given FVP cluster. The part I like is the ease of just doing a quick review of what’s changed without having to visit each host in vCenter.

 

Redesigned License Activation Process

The license activation process has been streamlined for greater simplicity and ease of use. You can now activate and manage all of your licensing online through the new PernixData UI. All you need is a license key; the new FVP activation process does the rest. You can also see more details in the new UI about what is licensed and what isn’t.

As you can see, a lot of innovation has gone into this new release. In fact, there is so much to reveal that I'm going to do a series of posts over the next few weeks. To learn more and download the FVP 3.0 release, please visit: http://www.pernixdata.com/products or start a trial at: https://get.pernixdata.com/FVPTrial

FVP Upgrades Using VUM

Starting with FVP version 2.5, a new upgrade process was introduced. As always, vSphere Update Manager (VUM) can be used to deploy the FVP host extension to the respective vSphere cluster of hosts. However, prior to 2.5 the FVP upgrade process had to be performed using the host CLI, which required removing the old host extension before the new host extension could be installed. Now there is a supported method where VUM can be used to deploy a new FVP host extension and also upgrade an existing one without manually removing the old host extension first!

Before you begin the FVP upgrade process, make sure you have downloaded the appropriate VIB from the PernixData support portal. These VIBs are signed and intended only for FVP upgrades using VUM.

The upgrade also includes putting the host in maintenance mode, as required for certified extension-level installs. This becomes much more seamless since VUM handles the transition in and out of maintenance mode. Additionally, VUM needs to fully satisfy the compliance of the upgrade, which means a reboot is required for FVP upgrades when using vSphere Update Manager.

Using VUM for upgrades is different from the simple uninstall-and-install method at a CLI prompt. Essentially, VUM installations cannot run /tmp/prnxuninstall.sh to uninstall a previous host extension version, as there are no API or scripting capabilities built into the VUM product.

This is why there is a dedicated VIB strictly for upgrading FVP. There is currently no way to perform a live installation on a running ESX boot partition. This means a reboot is required, since the backup boot partition (/altbootbank) is used to update the host extension. After the host reboots, the new host extension is installed to the primary boot partition (/bootbank), leaving a compliant running ESX host.

Once the host extension has been uploaded into the patch repository, it can then be added to a custom VUM baseline. Make sure the baseline type “Host Extension” is selected, since any other selection will prevent the upgrade from completing.


Once VUM has finished scanning and staging against the custom “Host Extension” baseline (I called mine PRNX), remediation can take place for each host labeled with an X as “non-compliant”. After the reboot finishes, the remediation process checks for host extension compliance; this ensures the new host extension has been fully deployed, and if so, VUM reports back a check mark for compliance.
As you can see, using VUM for not only new installations but also upgrades makes it that much more seamless for FVP to start transforming your environment into an accelerated platform.

Why I Decided Not To Put Flash In The Array

My story starts about three years ago, when I was the Information Systems Director for a large non-profit in Atlanta, GA. One of the initiatives at the time was to become 100% virtualized in six months, and there were obviously many tasks that needed to be accomplished before reaching that milestone. The first task was to upgrade the storage platform, as our current workloads had already outgrown its performance. As with any project, we looked at all the major players in the market, ran trials, talked to other customers, and did our due diligence. Being a non-profit, it was important not only to be mindful of costs but also to be good stewards in everything we did.

The storage system we were looking to upgrade was a couple of 7.2K RPM, 24 TB chassis. We had plenty of storage for our needs, but latency was in the 50 ms range at only about 3,000 IOPS. Obviously not the best platform to run a virtualized environment on! We looked at the early all-flash arrays that were just coming out, and we also looked at the hybrid arrays, all of them promising increased IOPS and lower latency. The problem was that none of them were an inexpensive proposition. So the dilemma of being good stewards while needing single-digit latency and more than 50K IOPS was a challenge, to say the least.

About the same time, I met a gentleman who told me some magical stories that sounded almost too good to be true! That man was Satyam Vaghani, the PernixData CTO and creator of VVOLs, VAAI, and VMFS. Soon after meeting Satyam, I was given the privilege of getting my hands on an alpha build of PernixData FVP. I ran and tested the product through the alpha and beta stages, then immediately purchased it and became PernixData’s first paying customer. I had never purchased a product in beta before, but I felt this product was out of the ordinary. The value and the promise were proven even in beta: I didn’t have to buy new storage just for performance reasons, which saved the organization over $100,000. This wasn’t a localized problem; it was an architecture problem that no array or collection of storage systems could solve. If I were in that position today, I’m sure the calculation over three years would be close to $500,000 in savings, due to the scale-out nature of the FVP solution. As the environment grew and became 100% virtualized, I would no longer have had to think about storage performance, or the storage fabric connections, in the same way. Talk about a good feeling: not only being a good steward but also astonishing the CFO with what was achieved.

This, to me, validated the waste and inefficiencies that occur when flash is used at the storage layer. Disk is cheap when used for capacity, so it has never made sense to me to cripple flash performance by putting it behind a network in a monolithic box that has its own constraints and bottlenecks.

Fast forward to today, where flash is much more prominent in the industry. The story is even stronger now: how can anyone in good conscience spend over $100K on a single array that can only achieve 90,000 IOPS at single-digit-millisecond latency? When someone can buy a single enterprise flash drive for $500 that does over 50K IOPS with microsecond latency, the question that must be asked is: can you defend your decision to the CFO or CIO and feel good about it?
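To put rough numbers on that comparison, here is a back-of-envelope cost-per-IOPS calculation using the figures above (illustrative list prices and ratings, not vendor quotes or benchmarks):

# Back-of-envelope cost-per-IOPS comparison using the rough numbers from
# this post. Illustrative only; not vendor quotes or benchmarks.

array_cost = 100_000        # single array, USD
array_iops = 90_000         # rated IOPS at single-digit millisecond latency

drive_cost = 500            # single enterprise flash drive, USD
drive_iops = 50_000         # rated IOPS at microsecond latency

print(f"Array:       ${array_cost / array_iops:.2f} per IOPS")
print(f"Flash drive: ${drive_cost / drive_iops:.3f} per IOPS")
# Array:       $1.11 per IOPS
# Flash drive: $0.010 per IOPS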

Don’t get me wrong; I’m not saying FVP replaces storage capacity. If you need capacity, then go and purchase a new storage array. However, that doesn’t mean you have to buy an AFA for capacity reasons. There are many cost-effective options out there that make more economic sense, no matter what dedupe or compression ratios are promised!

My personal advice to everyone is to be a conscientious objector when deciding whether to put flash in the array. It didn’t make sense for me three years ago, and it still doesn’t make sense today.

My Role Transition

On May 1st, I started a new role in my career path at PernixData. It is with great delight that I am now a Product Manager within the Product Management group. I look forward to working closely with Bala Narasimhan, VP of Product Management. He and I have worked together on many projects since the early alpha stages of FVP, so in some ways this transition will feel like the early days!

With that said, I will be running many different projects in my new role. Renewed energy around blogging and staying close to the community is of utmost importance to me. In addition, I will be helping with product collateral, documentation, roadmap items, and much more. It’s with this that I thank the wonderful teams and individuals I have had the privilege to work with! Going from first customer to Product Manager is something I could only have dreamed of.

Feel free to reach out and connect with me anytime, as part of my focus is to stay connected and relevant in the field, which can only help any product-focused endeavor!

On a side note, I thought it was quite a coincidence that on May 1st my father officially retired. He has been a minister and a marriage and family therapist for many years. The timing couldn’t have been predicted, so I humbly congratulate him on his promotion to retirement and thank him for the many years of taking care of his family!

 

The Server-Side Storage Intelligent System Revealed!

Yesterday PernixData introduced a revolutionary step forward in storage performance with the release of PernixData FVP 2.0. Several innovative features were revealed, and a technology first was dropped on the industry. Frank Denneman has already started a great series on some of the new features. So as not to let him have all the fun, I will also be covering some aspects of this new version!

The first big reveal was FVP transforming itself into an all-encompassing platform for storage optimization. Adding NFS and DAS to the already supported iSCSI, FC, and FCoE list completes all available connectivity options for VMware environments.

NFS support is obviously a welcome treat for many; it’s the support for local disk that might actually surprise some. Optimizing DAS environments will, I think, provide some unique use cases for customers (future post coming). However, keep in mind that supporting DAS doesn’t void the use cases for VSA (Virtual Storage Appliance) software. PernixData only accelerates reads and writes, so if you require data services, you may still need a VSA-type solution for your underlying local data-at-rest tier.

The biggest news, in my opinion, and the reveal that really dropped the mic on the industry, was the first-ever distributed fault-tolerant solution utilizing server memory for read/write I/O acceleration. Yep, you heard right: accelerating those very important writes without the risk of data loss on volatile server memory is a gigantic leap forward. Look for more details around DFTM (Distributed Fault Tolerant Memory) in the coming weeks!!

I’m excited for the future and look forward to telling you more about these new advancements!

 

 

 

 

PernixData FVP Hit Rate Explained

I assume most of you know that PernixData FVP provides a clustered solution to accelerate read and write I/O. In light of this, I have received several questions about what the “Hit Rate” signifies in our UI. Since we commit every write to server-side flash, a write “hit rate” would always be 100%. This is one reason why I refrain from calling our software a write caching solution!

However, the hit rate graph in PernixData FVP, as seen below, references only the read hit rate. In other words, every time we can serve a block of data from the server-side flash device, it’s deemed a hit. If a read request cannot be acknowledged from the local flash device, it has to be retrieved from the storage array, and a block retrieved from storage is not registered in the hit rate graph. We do, however, copy that block into flash, so the next time it is requested it will be counted as a hit.
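As a rough mental model (an illustrative sketch, not FVP's actual implementation), the read path and hit-rate accounting look something like this:

# Minimal sketch of how a read hit rate like the one FVP graphs can be
# tallied. Illustrative model only; not PernixData's implementation.

flash_cache = {}            # block address -> data cached on server-side flash
hits = 0
reads = 0

def read_block(addr, read_from_array):
    """Serve a read from local flash if possible, else fetch and populate."""
    global hits, reads
    reads += 1
    if addr in flash_cache:
        hits += 1                       # counted in the hit rate graph
        return flash_cache[addr]
    data = read_from_array(addr)        # miss: go to the storage array
    flash_cache[addr] = data            # populate flash so the next read hits
    return data

def hit_rate():
    return hits / reads if reads else 0.0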

Keep in mind that a low hit rate doesn’t necessarily mean you are not getting a performance increase. For example, if a workload in “Write Back” mode has a low hit rate, it may simply have a write-heavy I/O profile. Even with a low read hit rate, all writes are still being accelerated because they are acknowledged from the local flash device.

Give Me Back My Capacity

Last week I was preaching the PernixData message in Tampa, Florida! While there, I received a question about a benefit of PernixData that I believe is often overlooked in virtualized environments.

The question related to how PernixData FVP can give back storage capacity in your already deployed storage infrastructure. There are actually several ways that FVP can give you more capacity for your workloads, but today I will focus on two examples. In order to understand how FVP makes this possible, it’s important to understand how writes are accelerated. FVP intercepts all writes from a given workload and commits them to local server-side flash for fast acknowledgement. This takes a gigantic load off the storage array, since all write I/O is committed first to server-side flash. It’s this new performance design that allows you to regain some of the storage capacity you have lost to I/O performance architectures that are simply too far from compute!

If you are “short stroking” your drives, there is no longer a need to waste that space; use FVP to get even better performance without the huge cost associated with short stroking. Another example is when you have chosen RAID 10 (also known as RAID 1+0) to increase performance through block striping and provide redundancy through block mirroring. Why not get up to 50% of your capacity back by moving to RAID 6 or RAID 5 for redundancy and using FVP as the performance tier? As you can see, this opens up a lot of possibilities and lets you save money on disk while gaining additional capacity for future growth.
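To put rough numbers on the RAID example, here is a quick usable-capacity comparison for a hypothetical shelf of 24 x 2 TB drives (the drive count, drive size, and single parity group are assumptions for illustration only):

# Usable-capacity comparison for a hypothetical 24 x 2 TB shelf.
# RAID 10 mirrors everything; RAID 6 loses two drives' worth to parity;
# RAID 5 loses one. Numbers are illustrative only.

drives = 24
drive_tb = 2

raid10 = drives * drive_tb / 2            # half the raw capacity
raid6  = (drives - 2) * drive_tb          # two parity drives, single group assumed
raid5  = (drives - 1) * drive_tb          # one parity drive, single group assumed

print(f"RAID 10 usable: {raid10:.0f} TB")   # 24 TB
print(f"RAID 6  usable: {raid6:.0f} TB")    # 44 TB
print(f"RAID 5  usable: {raid5:.0f} TB")    # 46 TB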

Try this RAID calculator and see how much capacity you can get back when using an alternate RAID option with FVP! 

Where are you measuring your storage latency?

I often hear from vendors and from virtual and storage admins about where they see storage latency in a particular virtualized environment. The interesting part is that there is a wide disparity between what is communicated and what is actually realized.

If storage latency is an important part of how you measure performance in your environment, then where you measure latency really matters. If you think about it, VM latency is the end result of the realized storage latency. The problem is that everyone has a different tool or place where they measure latency. If you look at latency at the storage array, you are really only seeing latency at the controller and array level. This doesn’t always include the latency experienced on the network or in the virtualized stack.

What you really need is visibility into the entire I/O path to see the effective latency of the VM. It’s the realized latency at the VM level that is the end result and what the user or admin actually experiences. It can be dangerous to focus your attention on only one part of the stack and then base decisions on that as if it were the latency the application sees.

To solve this problem, PernixData provides visibility into what the VM is observing, and since FVP is a read/write acceleration tier, you can also see a breakdown of latency for read and write acknowledgements.

As an example, using the new zoom function in the FVP 1.5 release, I can see the latency breakdown for a particular Write Back enabled SQL VM.

 

 

As you can see in this graph, the “Datastore” on the array had a latency spike of 7.45 milliseconds, while the “Local Flash” on the host is at 0.25 ms (250 microseconds). The “VM Observed” latency is what the VM actually sees, and thus you have a realized latency of 0.30 ms (300 microseconds)!! The small difference between Local Flash latency and VM Observed latency can be due to system operations such as flash device population, as well as whether write redundancy is enabled.
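One simplified way to reason about the “VM Observed” number is as a blend of where each read was actually served. Here is an idealized read-latency model using the flash and datastore latencies from the graph above and some made-up hit rates (FVP's real metric also reflects flash population, write redundancy, and queuing, so treat this only as a sketch):

# Simplified model of VM-observed read latency as a weighted blend of
# where I/O is served. Illustrative only.

flash_latency_ms = 0.25      # local flash, from the graph above
array_latency_ms = 7.45      # datastore, from the graph above

def observed_read_latency(hit_rate):
    return hit_rate * flash_latency_ms + (1 - hit_rate) * array_latency_ms

for hr in (0.90, 0.99, 0.995):
    print(f"hit rate {hr:.1%}: ~{observed_read_latency(hr):.2f} ms")
# hit rate 90.0%: ~0.97 ms
# hit rate 99.0%: ~0.32 ms
# hit rate 99.5%: ~0.29 ms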

To see this from a read/write perspective, you can also go to the "Custom Breakdown" menu and choose "Read" and "Write" to see the "VM Observed" latency broken down into reads and writes. 

 

As you can see, the latency for this application came from writes, not reads, and since this VM is in Write Back mode we are seeing a realized 0.44 ms (440 microseconds) committed acknowledgement back to the application!!

This is obviously not the only way to determine the actual latency for your application, but what is unique is that PernixData is not building yet another latency silo. In other words, there are plenty of storage products on the market that give a great view into their own perfect world of latency, but it’s isolated and not the full picture of what is observed where it matters in your virtualized datacenter.

 

PernixData FVP & StorMagic SvSAN Use Case

Continuing to look at alternative ways to provide a good-ROI capacity layer with PernixData FVP, Frank Denneman and I will be doing a couple of posts on some unique designs with FVP. As I demonstrated in a previous post, FVP accelerates reads and writes for virtual workloads, while a virtual storage appliance (VSA) can be a great technology to provide the primary storage and data services for those workloads.

With this post, I will focus on StorMagic and their iSCSI-based VSA product, SvSAN. A couple of interesting notes about SvSAN might actually surprise you! StorMagic claims one of the largest deployment bases of any VSA on the market; in 2013 alone they grew over 800 percent! They are also currently the only VSA that can start with two nodes without needing a local third host for a quorum response during host isolation situations (more on this later).

A few interesting features:

-       vCenter plugin to manage all VSAs from a central point

-       Multi-Site Support (ROBO/Edge: remote office, branch office, enterprise edge)

-       Active/Active Mirroring

-       Unlimited Storage & Nodes per Cluster

 

I think SvSAN and FVP combined can provide a great ROI for many environments. In order to demonstrate this, we need to go a little deeper into where each of these technologies fits in the virtualized stack.

Architecture:

SvSAN is deployed on a per-host basis as a VSA. PernixData FVP, however, is deployed as a kernel module extension to ESXi on each host. This means the two architectures do not conflict from an I/O path standpoint. The FVP module extension is installed on every host in the vSphere cluster, while SvSAN only needs to be installed on the hosts that have local storage. Hosts without local storage can still participate in FVP’s acceleration tier and also access SvSAN’s shared local storage presented from the other hosts via iSCSI.

Once both products have been fully deployed in the environment it’s important to understand how the I/O is passed from FVP to SvSAN. I have drawn a simple diagram to illustrate this process. 

You will notice that the only real difference from a traditional storage array design with FVP is that you are now able to use local disks on the host. SvSAN presents itself as iSCSI, so I/O passes through the local VSA to reach the local disk. Since virtual appliances have some overhead in processing I/O, it becomes advantageous in such a design to include PernixData FVP as the acceleration tier. This means only unreferenced blocks need to be retrieved from SvSAN storage; all other active blocks are acknowledged from FVP’s local flash device. This takes a huge I/O load off SvSAN and also provides lower latency to the application.

Fault Tolerance:

When any product is in the data path, it becomes very important to provide fault tolerance and high availability for the given workloads. SvSAN provides data fault tolerance and high availability by creating a datastore mirror between two SvSAN VSA hosts.

This means that if a host goes down or the local storage fails, a VM can continue operating because SvSAN will automatically switch the local iSCSI connection to the mirrored host, where there is consistent, duplicated data.

The mirroring is done synchronously and guarantees data acknowledgement on both sides of the mirror. I think the really cool part is that SvSAN can access either side of the mirror at any time without disrupting operations, even during FVP performance acceleration! The fault tolerance built into FVP is designed to protect writes that have been committed and acknowledged on local/remote flash but have not yet been destaged to the SvSAN layer. Once FVP has destaged the required writes to SvSAN, SvSAN’s mirrored datastore protection takes over.
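To make that division of responsibilities concrete, here is a small conceptual sketch (not vendor code; the names and data structures are illustrative) of a write-back write landing in FVP's redundant flash layer and later being destaged into SvSAN's synchronous mirror:

# Illustrative model of how write protection layers between FVP and SvSAN
# in this design. Conceptual only; not PernixData or StorMagic code.

fvp_flash = {"local": {}, "peer": {}}        # FVP write-back with write redundancy
svsan_mirror = {"node_a": {}, "node_b": {}}  # SvSAN synchronous datastore mirror

def vm_write(addr, data):
    """Ack once FVP has the write on local and peer flash (FVP's fault tolerance)."""
    fvp_flash["local"][addr] = data
    fvp_flash["peer"][addr] = data           # redundant copy until destaged
    return "ack"                             # VM sees flash latency, not VSA latency

def destage(addr):
    """Later, FVP destages to SvSAN, which mirrors synchronously to both nodes."""
    data = fvp_flash["local"][addr]
    svsan_mirror["node_a"][addr] = data      # SvSAN acks only when both sides
    svsan_mirror["node_b"][addr] = data      # of the mirror have the write
    # From this point on, SvSAN's mirrored datastore protection is what matters.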

Centralized Management in an Edge Environment:

As noted before, SvSAN only requires two hosts for a quorum during host isolation situations where hosts or local storage are lost. This is accomplished through a separate service (NSH, the Neutral Storage Host) that can be installed in a central location on either a physical or virtual machine. It’s this centralization of the quorum service that alleviates additional localized costs and management overhead. As with FVP, SvSAN can be managed from a vCenter plugin for centralized management. This means one can manage hundreds of enterprise edge sites for primary storage, while also providing centralized FVP management for each performance cluster using SvSAN. This is illustrated in the diagram below.

With their low acquisition costs and simple management, VSAs have been popular in ROBO-type environments. That can be great for primary storage at the enterprise edge, but maybe not so great for applications needing more localized performance. The options for achieving a high-performing, cost-effective storage solution for a virtualized remote environment have been limited in the past. Not until PernixData FVP has there been a solution where you can use inexpensive primary storage, like a VSA, and also have a read/write performance tier that provides extremely low latency to the applications. The amazing part is that all of this is accomplished through software, not another physical box.

This post was just meant to be an introduction and a high-level look at using StorMagic’s VSA technology alongside PernixData FVP. I hope to go much deeper into how each of these technologies works together in future posts.

This is a simple diagram showing centralized management with FVP and SvSAN in a single 2-host edge site. 

A Look Back On 2013

It's been a very transformative year for me. On April 29th, I started working as a Systems Engineer for PernixData. The same day, my friend Frank Denneman also made his announcement! Since then I have had the privilege of working with a very talented team and have made some great friends!

Certifications:

During 2013 I was awarded the VMware vExpert award, and I also completed my VCP5-DCV certification, adding it to my list.

Community:

I also had the opportunity to contribute to the vSphere Design Pocketbook! 

 

2013 was also a very busy year for PernixData, and I was happy to play a part in its development. I'm proud that my suggestion, "PernixPro", was chosen as the name for PernixData's elite group of IT professionals!

 

Instead of naming all the many achievements that PernixData accomplished in 2013, here is a great link to our 2013 newsletter

This Cloudjock blog was nominated for the 2013 favorite new blog on vSphere-land.com 

I was able to speak at the Charlotte VMUG, Cincinnati VMUG, and Southern California VMUG.

At VMworld 2013 I presented at the vBrownBag Tech Talks and met the great crew behind the scenes!

A Look Ahead-2014:

I can tell you for sure that 2014 is going to be an exciting year for PernixData. Some big stuff is going to happen that I think will bring the industry to its knees!! :)

I'm not going to make any predictions this year like I did last year, but let's look at how those turned out. PernixData had a big year coming out of stealth, and Nimble just completed a big year with their IPO. The other two predictions gained more traction in the market, but I believe the clear winners are PernixData and Nimble.

In 2014 I have plans to achieve at least two more certifications along the way and become even more active within the community as well! 

Happy New Year!!!

 

 

 

Breaking News: PernixData FVP Wins Big

That's right, how about them apples! PernixData FVP just won TechTarget's Modern Infrastructure Bright Idea Impact Award! This is a new annual award in which readers vote for the best, brightest, and most impactful new product. This is not an analyst award or a paid advertisement; it's an award given by you, the community. Thank you!

PernixData won over these other companies that were also contenders: 

HP Moonshot

Infinio Accelerator

Neverfail IT Continuity Architect

Red Hat Cloud Infrastructure

SwiftTest Workload Insight Manager

Unisys Forward

VMware NSX

We now have a growing list of awards to display proudly! 


Server-Side Flash Presentation

At VMworld 2013 in San Francisco, I recorded a session at the vBrownBag Tech Talks. There were some technical difficulties during the process and so I thought I would re-record the same talk so that it would be easier to hear and see the presentation. 

This presentation is intended to illustrate why the storage fabric cannot be overlooked when designing for storage performance, and why server-side flash with PernixData completely solves I/O bottlenecks within the virtualized datacenter.

I welcome your questions or feedback. 

 

 

Asking the Right Questions - New Storage Design

In the era of Software Defined Storage (SDS), the right questions need to be asked when additional storage performance is required. The growth of virtualization in the enterprise has had a tremendous impact on storage performance and finances; arguably, the biggest line item in most IT budgets is storage purchased for performance reasons. In my opinion this has been a huge driver of the SDS movement. The problem is that there are so many answers to what SDS is or isn’t that it only confuses the market and delays the true benefits of such a solution. It is with this notion that I believe we are on the cusp of something transformative.

There are several facets to SDS that could be discussed but the focus of this post surrounds the decoupling of storage performance from capacity.

It’s not uncommon to hear that storage performance is the biggest bottleneck for virtualization. This is one reason why we pay big money for new arrays that promise a fix to this common problem. The reality is that it’s just a patch on the underlying design and architecture. In fact, it has gotten so bad that some consumers have become blasé and a new normal has emerged. I often hear from virtual admins that they don’t have storage performance problems; it’s not until I dig deeper that their read/write latency comes as something of a surprise. An average of 15-20 milliseconds of latency with spikes of more than 50 milliseconds is the reality!! How in the world did we get to this new normal? I personally believe it’s because doing anything different has been cost-prohibitive for many, and until recently there hasn’t been a completely new architecture on the market that answers storage performance problems once and for all.

One analogy could be the smartphone phenomenon. Have you ever noticed how slow a previous smartphone generation seems when you pick it up after using your latest smartphone? It’s very easy to become accustomed to something and have no idea what you’re missing. It’s with this in mind that we need to recognize what the new normal should be (microsecond latency) and understand what is possible!

Let’s break down the three areas that make up the storage framework through which we consume storage today with regard to virtualization.

 

 

Characterized Attributes:

Performance = Read/Write Acceleration for I/O

Data Services = Replication, Dedupe, Snapshots, Tiering, Automation, Management, etc.

Capacity = Data-at-Rest, Storage Pool, Disk Size, Disk Type

 

Looking at each of these three areas that make up a typical storage array, where do you spend the most money today? What if you could separate these from each other? What possibilities could emerge?

As you know, it’s the decoupling of performance from capacity that brings the biggest ROI, and not surprisingly it is the most difficult separation to achieve. It’s this separation that allows us to move disk I/O performance to the compute cluster, close to the application. Write acknowledgements happen very quickly, and low latency can be achieved by leveraging local flash devices in the hosts as a new clustered data-in-motion tier, a.k.a. PernixData FVP.

 

 

This new design eliminates the need to purchase storage just for performance reasons, and it opens up a lot of other vendors and possibilities to choose from. Do you really need to purchase expensive storage now? Does the promise of SDS and commodity storage now become a reality? Do you really need to purchase a hybrid or all-flash array? Doesn’t this mean that cheap rotating SATA is all I need for capacity’s sake? If the array provides the needed data services on top of the required capacity, what else do I need to accomplish a true scale-out architecture in this new design? These are all important questions to ask in this new era of storage disruption.

If all performance needs can now be met from the host cluster, then I have the ability to achieve the 90-100% virtualized datacenter. This is a realization that often happens once this new design has had time to sink in. So, I challenge each of you to investigate not only how this can make you a hero in your environment, but also how radically it can cut the time you spend working on storage performance problems!

 

- Disclaimer: This post is not sponsored or affiliated with PernixData or any other vendor -

 

The First Flash Hypervisor

It's now official: the world has its first Flash Hypervisor. PernixData has created a transformative technology that will have a resounding effect on the future datacenter.

PernixData FVP 1.0 ships today, the first release of what will become omnipresent in the virtualized world. The growth of virtualization has created a need to accelerate today's applications and allow businesses to keep taking advantage of virtualization. There is only one complete solution on the market that addresses this need and takes your datacenter to the next level!

It's the Flash Hypervisor layer in the virtualization stack that will become ubiquitous, because of its ability to scale, accelerate, and manage the world's modern workloads. Check out the details in our latest datasheet.

So, join the revolution and download the 60-day trial today!!

Capacity & Performance = VSA + FVP

A couple of weeks ago Frank Denneman did a great post on why virtual appliances used for data path acceleration are undesirable if you are trying to achieve low latency in your environment. Frank outlined why a hypervisor kernel module is the preferred way to accelerate I/O. I highly recommend you read his post before you go any further.

Even though virtual appliances are not the best performers, there are still many reasons why you might want to deploy a VSA (Virtual Storage Appliance). For one, a VSA typically costs less and is easier to manage, which is why you most often see VSAs in smaller or test/dev environments. The ability to aggregate local storage into a shared pool is another reason to use a VSA.

I recently did some testing with a well-known Virtual Storage Appliance along with PernixData’s Flash Virtualization Platform (FVP). I was amazed to find that this combination is truly a great way to deliver both storage capacity and performance. The VSA did what it does best, aggregating local storage into an easily managed capacity pool, while FVP provided the performance required for the workloads.

Here is a simple diagram showing this use case… 

 

 

This use case provides several options for accelerating I/O. As an example, if you choose a “Write Through” policy, all writes from a given workload are acknowledged from the VSA storage pool while FVP accelerates the read I/O. If you choose a “Write Back” policy, however, writes are accelerated by the local flash devices in the cluster and then de-staged appropriately to the VSA storage pool. In addition, the workloads you choose to accelerate could be VMs located on the VSA or even the VSA itself! As for what to choose for your environment, I will have a separate post outlining which scenarios work best for a given FVP design choice.
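Here is a minimal conceptual sketch of the difference between the two policies (illustrative logic only, not FVP source; replication and error handling are omitted):

# Conceptual sketch of Write Through vs. Write Back acceleration policies.
# Not FVP source code; an assumption-laden illustration only.

import collections

flash = {}                                  # server-side flash cache
destage_queue = collections.deque()         # writes waiting to go to the VSA pool

def write_through(addr, data, write_to_backend):
    """Ack only after the backend (VSA storage pool) has the write."""
    write_to_backend(addr, data)            # synchronous: backend latency applies
    flash[addr] = data                      # keep a copy so later reads hit flash
    return "ack"

def write_back(addr, data):
    """Ack as soon as local flash has the write; destage to the backend later."""
    flash[addr] = data
    destage_queue.append((addr, data))      # de-staged asynchronously to the VSA pool
    return "ack"                            # flash latency, not backend latency

def destage(write_to_backend):
    """Background step: flush acknowledged writes to the backend."""
    while destage_queue:
        addr, data = destage_queue.popleft()
        write_to_backend(addr, data)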

This use case provides low latency and increased IOPS not typically seen with just a virtual appliance. So, depending on your objectives and environment, this could be the winning ticket for storage capacity and performance. Stay tuned for more ways to take advantage of FVP!!

It's true... I joined PernixData

If you haven’t already guessed, I now proudly work for PernixData. For the past several months I have been an alpha and beta tester for PernixData’s Flash Virtualization Platform. I fell in love with the product early in my testing and soon realized that the co-founders had solved a critical storage pain point that many virtualization admins have struggled with. This inspired me to contribute my part to this grand endeavor! There is of course always a great story behind changes, but I will leave that for another day!

I truly believe that PernixData represents a paradigm shift in virtualized datacenter design. Having a new “data-in-motion” storage tier that scales is nothing short of revolutionary. I believe the market now has a complete, scalable virtualization solution for CPU, memory, and now storage performance.

If you would like to learn more about PernixData, feel free to contact me. I’m currently residing in Atlanta, GA, as a Systems Engineer for PernixData.

If you also feel the passion burn for PernixData and its vision to transform the storage market, check out our growing list of open positions! 

Carolina VMware User Summit 2013

This coming Thursday the 2013 Carolina VMUG begins. This year will include a ton of great sessions and speakers. There will be an Expert Panel discussing the future of virtualization, including William Lam, Scott Lowe, Chris Colotti, and Chad Sakac. Check out the schedule!

There are also two sessions that I'm partial to and want to highlight.

Education Session

Time: 10:15-11:00 am
Title: "The New Scale-Out Data Tier - A Storage Platform Paradigm Shift" (PernixData)
Speaker: Todd Mace 
Room: 217 A 

eGroup Lab Session

Time: 1:00-1:45 pm
Title: Free Style Session- Flashtastic – PernixData FVP Demo
Speakers: John Flisher & Todd Mace
Room: 211 AB 

It's never too late to register and come see some awesome IOPS presentations!!

PernixData - Validation

The past few weeks have been exciting for the prized new startup PernixData. A little over two weeks ago, Frank Denneman decided to make the transition from VMware to PernixData! Then today, PernixData was endorsed with another round of financing, this time from Kleiner Perkins Caufield & Byers.

This all validates that PernixData is doing something different and is about to disrupt the storage market in a major way. PernixData is the only company that truly decouples storage performance from capacity. This decoupling innovation lives in FVP (Flash Virtualization Platform), a downloadable software product. The ROI is realized immediately after a short five-minute install, with no disruption to the current infrastructure. If you are not already a beta customer, I highly recommend trying out this revolutionary new product.

If you have further questions and would like to meet some of the visionaries, check out their new "Meet Us" page, listing events where they will be showcasing!

Stay tuned as there will be more exciting announcements forthcoming!! 

 

    

PernixData - Opportunities

If you value Family, Transparency, Teamwork, Challenges, and Opportunities to make a difference, then PernixData is for you!! PernixData is a fast-growing company seeking individuals who want to make an impact and have a love for virtualization! There are several openings around the country.

Join an awesome team!!!

Contact for current openings: jobs@pernixdata.com

Marketing

Technical Support

Technical Staff

Account Executives

System Engineers

Product Snapshot: 

"PernixData Flash Virtualization Platform™ (FVP™) - an enterprise-class, high-speed, software-only data tier for application acceleration - enables a strategic shift in customers’ vision for virtualized data centers. PernixData FVP is created by virtualizing server-side flash via a scale out architecture. Virtualized applications transparently leverage FVP for unprecedented performance while requiring no changes to either the application or the underlying storage infrastructure. Clustered hypervisor features such as live migrations and distributed resource management continue to operate seamlessly with FVP. By virtualizing flash in servers, PernixData is picking up where hypervisors left off after virtualizing CPU and memory."