FVP Freedom Edition Launch

As you know, we shipped PernixData FVP 3.0 yesterday, but what you might not know is that we also shipped PernixData FVP Freedom Edition. This, in my opinion, is an exciting addition to the product family, and based on the feedback we have already received, it's taking off in a major way! Keep in mind this is totally free software with no time limits.


For those unfamiliar with the Freedom Edition, I have outlined the supported features that come with this release.

Supported Features
•    vSphere 5.1, 5.5 and 6.0
•    Maximum 128GB of Memory (DFTM) per FVP Cluster
•    Unlimited VMs and Hosts
•    Write Through Configuration

If you want DFTM-Z (Memory Compression) or the ability to configure Write Back for your virtual machines then you can easily upgrade to our standard and enterprise licensing options. 

Freedom Community Forum
Alongside the Freedom Edition we are launching a brand new community forum, to provide support and collaboration among Freedom users. As you might guess, we are planning to add a lot of content over the next few weeks, so the more questions and interaction you bring to the forum, the more useful it becomes for the Freedom community. To access the forum, visit https://community.pernixdata.com and click Sign In. We have enabled SSO support, so you can use your existing PernixData download portal account and you will be redirected into the community forum.



If you haven’t already requested the Freedom Edition, you can request access here. Once registered, you will automatically receive an email with instructions on how to gain access to the software and portal. This is a totally automated process, so you will get your Freedom license key the same day you request it!

PernixData FVP 3.0 - What's New

I’m pleased to announce that PernixData FVP 3.0 has been released to the masses! Reaching this milestone took many long hours from our engineering team and staff.

Some of the highlighted features in this release are the result of a seasoned approach to solving storage performance problems while keeping a keen eye on what the future holds! In this post I will cover some of the new features at a high level, but look for more detailed posts coming soon.

Support for vSphere 6.0
We now have support for vSphere 6.0 with FVP 3.0! If you are running a previous version of FVP, you will need to upgrade to this release in order to gain full vSphere 6 support. If you are in the process of migrating to vSphere 6, FVP now supports a migration path from previous versions of ESXi running FVP. For example, FVP will support mixed environments of vCenter 6.0 with hosts running ESXi 5.1 or newer. However, keep in mind that FVP 3.0 no longer supports vSphere 5.0 as a platform.

New HTML5 based User Interface
FVP 3.0 offers a completely new user experience. It introduces a brand new standalone web client where you can configure and monitor all of your FVP clusters. In addition, the new standalone web client gives you visibility into other FVP clusters that may reside in a different vCenter or vSphere cluster!

This doesn’t mean you won’t have visibility in the vSphere Web Client; we still have a plugin available that gives you the basic FVP analytics. However, all configuration and detailed analytics will only be available in the new standalone web client.


Some may ask why we built our own web client, which I think is a valid question. The truth is that in order to control the full user experience for FVP we had to build our own, while still supporting the vSphere client for those quick looks. I think you will be pleasantly surprised at how robust and extensible the new standalone web client is.

New Audit Log

In addition to providing FVP actions and alarms through vCenter tasks/events, FVP 3.0 now has a separate audit log, where you can easily see all FVP-related actions and alarms for a given FVP cluster. The part I like is the ease of doing a quick review of what’s changed without having to visit each host in vCenter.

 

Redesigned License Activation Process

The license activation process has been streamlined to offer greater simplicity and ease of use. You can now activate and manage all of your licensing online through the new PernixData UI. All you need is a license key, and the new FVP license activation process will do the rest. The new UI also shows more detail about what is and isn’t licensed.

As you can see, a lot of innovation has gone into this new release. In fact, there is so much to reveal that I'm going to do a series of posts over the next few weeks. To learn more and download the FVP 3.0 release, please visit http://www.pernixdata.com/products or start a trial at https://get.pernixdata.com/FVPTrial

FVP Color Blindness Accessibility

Our PernixData engineers care about every facet of the customer experience. Something that may seem small to others can be a big deal to some, and it’s with this in mind that PernixData designs every feature in a holistic manner that all can appreciate. One such feature is providing visual accessibility to those with color blindness. With 1 in 12 men and 1 in 200 women having some form of color blindness, it is important that the FVP UI is readable and understandable regardless of the impairment.

It was in FVP 2.0 that we made modifications to the colors in our UI to deal with the most common forms of color blindness: Deuteranopia (~5% of males), Protanopia (~2.5% of males), and Tritanopia (~0.3% of males and females). For example, the “Network Acceleration” line graph was changed to lime green. In addition, all colors were tested with Color Oracle, an application that simulates different forms of color blindness.

In addition, we made each line on a chart uniquely identifiable by providing the ability to toggle lines on and off. For example, if you aren't sure which line refers to the datastore, just toggle the others off, or toggle the datastore selection off and on, and the datastore line will be clearly shown.

When designing the FVP interface, we also made sure color is used as a secondary source of information that adds insight to the primary source. For example, in the host/flash device visualization, the color of the tiles (red, green, yellow) indicates the state of the relevant object. If there is a problem, however, alarms and warnings also show an exclamation point on the tile in addition to its color.

 

FVP Upgrades Using VUM

Starting with FVP version 2.5, a new upgrade process was introduced. As always, vSphere Update Manager (VUM) can be used to deploy the FVP host extension to the respective vSphere cluster of hosts. Prior to 2.5, however, the FVP upgrade process had to be performed from the host CLI, which required removing the old host extension before the new host extension could be installed. Now there is a supported method where VUM can deploy a new FVP host extension and also upgrade an existing one, without manually removing the old host extension first!

Before you begin the FVP upgrade process, make sure you have downloaded the appropriate VIB from the PernixData support portal. These VIBs are signed and designed specifically for FVP upgrades using VUM.

The upgrade also involves putting the host into maintenance mode, as required for certified extension-level installs. This becomes much more seamless since VUM handles the transition in and out of maintenance mode. Additionally, VUM needs to completely satisfy the compliance of the upgrade, which means a reboot is required for FVP upgrades when using vSphere Update Manager.

Using VUM for upgrades is different from using the simple uninstall-and-install method at a CLI prompt. Essentially, VUM installations cannot run /tmp/prnxuninstall.sh to uninstall a previous host extension version, since there are no API or scripting capabilities built into the VUM product.

This is why there is a dedicated VIB strictly for FVP upgrades. There is currently no way to perform a live installation on a running ESXi boot partition. This means a reboot is required, since the backup boot partition (/altbootbank) is used to stage the updated host extension. After the host reboots, the new host extension is installed to the primary boot partition (/bootbank), leaving a compliant running ESXi host.

Once the host extension has been uploaded into the patch repository, it can then be added to a custom VUM baseline. Make sure the “Host Extension” baseline type is selected, since any other selection will prevent the upgrade from completing.


Once VUM has finished scanning and staging against the custom “Host Extension” baseline (I called mine PRNX), remediation of the hosts can take place, based on each host labeled with an X as “non-compliant”. Once the reboot has finished, the remediation process checks for host extension compliance to ensure the new host extension has been fully deployed; if that is the case, VUM reports back a check mark for compliance.
As you can see, the new method of using VUM for not only new installations but also upgrades makes it that much more seamless for FVP to start transforming your environment into an accelerated platform.

Why I Decided Not To Put Flash In The Array

My story starts about 3 years ago, when I was the Information Systems director for a large non-profit in Atlanta, GA. One of the initiatives at the time was to become 100% virtualized in 6 months, and there were obviously many tasks that needed to be accomplished before reaching that milestone. The first task was to upgrade the storage platform, as we had already outgrown its performance for our current workloads. As with any project, we looked at all the major players in the market, ran trials, talked to other customers, and did our due diligence. As a non-profit, it was important for us not only to be mindful of costs but also to be good stewards in everything we did.

The storage system we were looking to upgrade was a couple of 7.2K RPM, 24 TB chassis. We had plenty of storage for our needs, but latency was in the 50 ms range at only about 3,000 IOPS. Obviously not the best platform to run a virtualized environment on! We looked at the early all-flash arrays that were just coming out, and we also looked at hybrid arrays, all of them promising increased IOPS and lower latency. The problem was that they were not an inexpensive proposition. So the dilemma of being good stewards while needing single-digit latency and more than 50K IOPS was a challenge, to say the least.

About the same time I met a gentleman who told me some magical stories that sounded almost too good to be true! This man’s name is Satyam Vaghani, the PernixData CTO and creator of VVols, VAAI, and VMFS. Soon after meeting Satyam, I was given the privilege of getting my hands on an alpha build of PernixData FVP. I ran and tested the product through the alpha and beta stages, then immediately purchased it and became PernixData’s first paying customer. I had never purchased a product in beta before, but I felt this product was out of the ordinary. The value and the promise were proven even in beta: I didn’t have to buy new storage just for performance reasons, which saved the organization over $100,000. This wasn’t a localized problem; it was an architecture problem that no array or combination of storage systems could solve. If I were in that position today, I’m sure the calculation over 3 years would be close to $500,000 in savings, due to the scale-out nature of the FVP solution. As the environment grew and became 100% virtualized, I would no longer have had to think about storage performance in the same way, nor about the storage fabric connections. Talk about a good feeling of not only being a good steward but also astonishing the CFO with what was achieved.

This, to me, validated the waste and inefficiencies that occur when flash is used at the storage layer. Disk is cheap when used for capacity, so it has never made sense to me to cripple flash performance by putting it behind a network in a monolithic box that can have its own constraints and bottlenecks.

Fast forward to today, where flash is much more prominent in the industry. The story is even stronger now: how can anyone not be conscientious about spending over $100K on a single array that can only achieve 90,000 IOPS at single-digit millisecond latency? When you can buy a single enterprise flash drive for $500 that does over 50K IOPS with microsecond latency, the question that must be asked is: can you defend your decision to the CFO or CIO and feel good about it?

Don’t get me wrong; I’m not saying FVP replaces storage capacity. If you need storage capacity, then go and purchase a new storage array. However, this doesn’t mean you have to buy an AFA for capacity reasons. There are many cost-effective options out there that make more economic sense, no matter what dedupe or compression ratios are promised!

My personal advice to everyone is to be a conscientious objector when it comes to putting flash in the array. It didn’t make sense for me 3 years ago, and it still doesn’t make sense today.

How Can Database Users Benefit - PernixData FVP

I'm pleased to introduce you to Bala Narasimhan, VP of Products at PernixData. He has a wealth of knowledge around databases, and has authored 2 patents for memory management in relational databases. It's my pleasure to have him featured in today's post. He is officially my first guest blogger! Enjoy! 

Databases are a critical application for the enterprise and usually have demanding storage performance requirements. In this blog post I will describe how to understand the storage performance requirements of a database at the query level using database tools. I’ll then explain why PernixData FVP helps not only to solve the database storage performance problem but also the database manageability problem that manifests itself when storage performance becomes a bottleneck. Throughout the discussion I will use SQL Server as an example database although the principles apply across the board.

Query Execution Plans

When writing code in a language such as C++ one describes the algorithm one wants to execute. For example, implementing a sorting algorithm in C++ means describing the control flow involved in that particular implementation of sorting. This will be different in a bubble sort implementation versus a merge sort implementation and the onus is on the programmer to implement the control flow for each sort algorithm correctly.

In contrast, SQL is a declarative language. SQL statements simply describe what the end user wants to do. The control flow is something the database decides. For example, when joining two tables the database decides whether to execute a hash join, a merge join or a nested loop join. The user doesn’t decide this. The user simply executes a SQL statement that performs a join of two tables without any mention of the actual join algorithm to use. 
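
As a quick illustration, using the same TPC-H-style tables that appear later in this post, the statement below asks for customers and their orders but says nothing about whether a hash, merge, or nested loop join should be used; that choice is left entirely to the optimizer. This is a generic sketch, not taken from the original post:

-- No join algorithm is specified; the optimizer picks one
SELECT c.c_name, o.o_orderdate
FROM customer AS c
JOIN orders AS o
  ON o.o_custkey = c.c_custkey;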

The component within the database that comes up with the plan for how to execute the SQL statement is usually called the query optimizer. The query optimizer searches the entire space of possible execution plans for a given SQL statement and tries to pick the optimal one. As you can imagine, picking the optimal plan out of all possible plans can be computationally intensive.

SQL’s declarative nature can be sub-optimal for query performance because the query optimizer might not always pick the best possible query plan. This is usually because it doesn’t have full information about a number of critical factors, such as the kind of infrastructure in place, the load on the system when the SQL statement is run, or the properties of the data. One example of where this can manifest is join ordering. Suppose you run a SQL query that joins three tables T1, T2, and T3. What order will these tables be joined in? Will T1 and T2 be joined first, or T1 and T3? Maybe T2 and T3 should be joined first instead. Picking the wrong order can be hugely detrimental to query performance. This means that database users and DBAs usually end up tuning databases extensively, which in turn adds both an operational and a cost overhead.

Query Optimization in Action

Let’s take a concrete example to better understand query optimization. Below is a SQL statement from a TPC-H-like benchmark.

select top 20
    c_custkey, c_name,
    sum(l_extendedprice * (1 - l_discount)) as revenue,
    c_acctbal, n_name, c_address, c_phone, c_comment
from customer, orders, lineitem, nation
where c_custkey = o_custkey
  and l_orderkey = o_orderkey
  and o_orderdate >= ':1'
  and o_orderdate < dateadd(mm, 3, cast(':1' as datetime))
  and l_returnflag = 'R'
  and c_nationkey = n_nationkey
group by c_custkey, c_name, c_acctbal, c_phone, n_name, c_address, c_comment
order by revenue;

The SQL statement finds the top 20 customers, in terms of their effect on lost revenue for a given quarter, who have returned parts they bought. 

Before you run this query against your database, you can find out what query plan the optimizer is going to choose and how much it is going to cost you. Figure 1 depicts the query plan for this SQL statement from SQL Server 2014. [You can learn how to generate a query plan for any SQL statement in SQL Server at https://msdn.microsoft.com/en-us/library/ms191194.aspx]
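
For reference, one generic way to get the estimated plan from a plain T-SQL query window (independent of the graphical option described in the article linked above) is SET SHOWPLAN_XML; this is standard SQL Server behavior, not anything FVP-specific:

SET SHOWPLAN_XML ON;
GO
-- Statements in this batch are not executed; SQL Server returns
-- the estimated execution plan as XML instead, e.g.:
SELECT COUNT(*) FROM lineitem;
GO
SET SHOWPLAN_XML OFF;
GO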

Figure 1
You should read the query plan from right to left. The direction of the arrows depicts the flow of control as the query executes. Each node in the plan is an operation that the database will perform in order to execute the query. You’ll notice how this query starts off with two scans. These are I/O operations against the tables involved in the query; they are I/O intensive and usually throughput bound. In data warehousing environments, block sizes can be pretty large as well.

A SAN will have serious performance problems with these scans. If the data is not laid out properly on disk, you may end up with a large amount of random I/O. You will also get inconsistent performance depending on what else is going on in the SAN when these scans are happening, and the controller will also limit overall performance.

The query begins by performing scans on the lineitem table and the orders table. Note that the database tells you what percentage of time it thinks it will spend in each operation within the statement. In our example, the database thinks it will spend about 84% of the total execution time on the Clustered Index Scan on lineitem and 5% on the other scan. In other words, 89% of the execution time of this SQL statement is spent in I/O operations! It is no wonder, then, that users are wary of virtualizing databases such as these.

You can get even more granular information from the query optimizer. In SQL Server Management Studio, if you hover your mouse over a particular operation, a yellow pop-up box appears showing very interesting statistics. Below is an example of the data I got from SQL Server 2014 when I hovered over the Clustered Index Scan on the lineitem table that is highlighted in Figure 1.

Notice how the Estimated I/O Cost dominates the Estimated CPU Cost. This again is an indication of how I/O bound this SQL statement is. You can learn more about the fields in the figure above here.
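
If you want to see actual rather than estimated I/O numbers, SET STATISTICS IO reports logical and physical reads per table when the statement really runs. Again, this is generic T-SQL rather than anything from FVP:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
GO
-- Any statement run now reports per-table logical/physical reads plus
-- CPU and elapsed time in the Messages tab, e.g.:
SELECT COUNT(*) FROM lineitem;
GO
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
GO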

An Operational Overhead

There is a lot one can learn about one’s infrastructure needs by understanding the query execution plans that a database generates. A typical next step after understanding the query execution plans is to tune the query or database for better performance.  For example, one may build new indexes or completely rewrite a query for better performance. One may decide that certain tables are frequently hit and should be stored on faster storage or pinned in RAM. Or, one may decide to simply do a complete infrastructure redo.

All of these result in operational overhead for the enterprise. For starters, this model assumes someone is constantly evaluating queries, tuning the database, and making sure performance isn’t impacted. Secondly, this model assumes a static environment: it assumes that the database schema is fixed, that all the queries that will be run are known beforehand, and that someone is always on hand to study the query and tune the database. That’s a lot of rigidity in this day and age, where flexibility and agility are key requirements for the business to stay ahead.

A Solution to Database Performance Needs Without the Operational Overhead

What if we could build a storage performance platform that satisfies the performance requirements of the database irrespective of whether the query plans are optimal, whether the schema design is appropriate, or whether queries are ad hoc? Such a storage performance platform would completely take away the sometimes excessive tuning required to achieve acceptable query performance. The result is an environment where SQL is executed as the business needs it, and the storage performance platform provides the performance required to meet the business SLA irrespective of query plans.

This is exactly what PernixData FVP is designed to do. PernixData FVP decouples storage performance from storage capacity by building a server-side performance tier using server-side flash or RAM. This means that all the active I/O coming from the database, both reads and writes, whether sequential or random, and irrespective of block size, is satisfied at the server layer by FVP, right next to the database. You are no longer limited by how data is laid out on the SAN, by the controller within the SAN, or by what else is running on the SAN when the SQL is executed.

This means that even if the query optimizer generates a sub optimal query plan resulting in excessive I/O we are still okay because all of that I/O will be served from server side RAM or flash instead of network attached storage. In a future blog post we will look at a query that generates large intermediate results and explain why a server side performance platform such as FVP can make a huge difference.

 

FVP Management Database Design Decisions

When deciding which database model to use for FVP, it’s important to understand what your goals are in using FVP and the growth potential of the platform. Upon installation, the FVP management service builds and connects to a SQL database named “prnx”. This database is responsible for receiving, storing, and presenting performance data. All time-series data for the performance charts displayed in the FVP UI is stored in this database, along with management metadata related to configuration. Keep in mind, however, that neither the management server nor the FVP database needs to be operational for read/write acceleration to continue during downtime.

The PernixData management server is also responsible for managing fault domain configurations and the host peer selection process for Write Back fault tolerance. This information is also kept current in the “prnx” database so that any host or cluster changes remain accurate for FVP policy changes. This is why it’s imperative that FVP maintain a connection with the vCenter Server, so that inventory information can be collected and maintained.

Early in the FVP design phase it was decided not to reinvent the wheel and instead take advantage of the already robust operations in SQL Server. One of these decisions was to put SQL rollup jobs into practice for FVP. The SQL rollup job is responsible for keeping only the currently valuable data while providing averages for historical reference. Using the SQL rollup process lowers the latency and overhead of FVP having to implement the averaging operations itself. It also means data stored in SQL is never moved or massaged outside the context of SQL, which provides security and performance benefits to FVP as an acceleration platform.

Since part of SQL Server’s responsibility is to store FVP performance data, it’s important to store only as much data as is relevant and useful. Currently the FVP management server requests 20-second performance samples for all FVP-clustered VMs on each enabled host. This runs on multiple threads so that multiple CPU cores can be utilized for efficiency. Over a 24-hour period a large amount of data can accumulate, so FVP has a purging schedule that runs every hour to purge all 20-second samples older than 24 hours. This only happens after a SQL rollup has completed, averaging the 20-second samples into per-minute and per-hour periods.

Every minute, three 20-second samples are averaged. At the one-hour mark a SQL rollup job runs, and on completion FVP purges all 20-second samples older than 24 hours. To view the 20-second samples before the rollup, look at performance statistics for time ranges of one hour or less in the FVP performance UI. After the one-hour interval, the 20-second samples are rolled up into averages by the first SQL rollup and then permanently removed by the purge operation 24 hours later.
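
To make the rollup idea concrete, here is a purely illustrative example of the kind of minute-level averaging a rollup job performs. The table and column names are hypothetical and are not the actual prnx schema:

-- Hypothetical example only; not the real prnx schema
SELECT vm_id,
       DATEADD(minute, DATEDIFF(minute, 0, sample_time), 0) AS minute_bucket,
       AVG(iops)       AS avg_iops,
       AVG(latency_us) AS avg_latency_us
FROM dbo.perf_samples_20s
GROUP BY vm_id, DATEADD(minute, DATEDIFF(minute, 0, sample_time), 0);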

To determine the proper SQL capacity for this amount of data, you need to know how many VMs you plan to accelerate with FVP and what the potential is for continued expansion. Currently over 80% of the “prnx” database is used to store performance-related metrics, and this 80% also makes up the majority of data churn within the platform. This means sizing for that 80% will provide ample room for FVP’s operations.
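
If you want to check how much of that space your own prnx database is actually consuming, a quick look from a SQL query window is:

USE prnx;
GO
-- Reports database size and unallocated space for the current database
EXEC sp_spaceused;
GO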

The PernixData Management Server inserts one row (record) into the database table every 20 seconds for each VM, which works out to approximately 1.6 KB of data per VM every 20 seconds. This figure also takes into account the index size for each referenced VM.


If you are considering SQL Express with its 10 GB database size limit, knowing how much data is effectively added each day becomes an important piece of information, since this design decision could hamper long-term retention or the acceleration of a large number of VMs. Whether SQL Express is chosen or not, it’s a best practice to either use the “Simple” recovery model or schedule regular SQL backups so that log truncation can limit the continued growth of the transaction log.
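
For example, switching the prnx database to the Simple recovery model, or keeping Full recovery and backing up the log on a schedule, are both one-liners (the backup path below is just a placeholder; substitute your own backup destination):

-- Option 1: Simple recovery model, so the log truncates on checkpoint
ALTER DATABASE prnx SET RECOVERY SIMPLE;
GO

-- Option 2: stay in Full recovery and back up the log regularly so it can truncate
BACKUP LOG prnx TO DISK = N'D:\Backups\prnx_log.trn';  -- placeholder path
GO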

Knowing the approximate data added to the database each day for a given number of VMs tells you when you would reach the 10 GB capacity limit of SQL Express. If, for example, you have 100 VMs accelerated with FVP, it will take about 400 days, but with 1,000 VMs the limit will be reached in as little as 40 days!
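
As a rough back-of-the-envelope check of those numbers: the retained (post-rollup) data implied above works out to roughly 0.25 MB per VM per day (10 GB spread over about 400 days for 100 VMs), which you can plug into a quick query:

-- Assumes ~0.25 MB retained per VM per day, derived from the figures above
SELECT vm_count,
       CAST(10 * 1024.0 / (0.25 * vm_count) AS int) AS approx_days_to_10GB
FROM (VALUES (100), (1000)) AS t(vm_count);
-- Returns roughly 409 days for 100 VMs and 40 days for 1,000 VMs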

To understand how our UI displays averages based on the sampling and purging process, below is a chart that illustrates the number of samples taken and the average shown for each displayed time range. Keep in mind that whether you choose a custom time range or use the predefined time ranges in the FVP UI, the result is the same samples and averages as indicated in the chart below.

As you can see, it’s important not only to understand the metrics you are referencing but also to design appropriately for database sizing and retention, taking into account PernixData FVP’s growth within your virtual environment.

Shuttle DS81 Review

This past VMworld I had the opportunity to learn about a new server platform that really piqued my interest. The Shuttle DS81 received a refresh last year that gave it some nice features that I personally think put it in a class all its own.

Shuttle was nice enough to send me a DS81 to demo. So over the past couple of months I put the unit through its paces and even did some FVP testing against it!

Here are some of the unique features that caught my eye for making this a great home lab machine.  

•    VMware Ready
•    Only 43 mm thick. 
•    4K Support (2 Display Ports & 1 HDMI Port) w/ dual monitor support
•    Certified to operate in harsh, hot environments up to 122 °F or 50 °C – Great for my hot Atlanta summers!! ☺
•    Dual Gigabit Ports – Nice for teaming and you still have the USB ports. 
•    Low Noise, with 90W power – I could hardly hear it run; it was very quiet.
•    2 x Mini PCIe Slots
•    1 bay for a 6Gbps SATA SSD drive – I used the built-in SD card slot for ESXi

The configuration I requested came with an Intel Haswell i7 3.2 GHz quad-core processor and 16GB of PC3-12800 1600 MHz DDR3 memory. There are a ton of different configuration options available. See the list.

When I first received the DS81 I installed ESXi 5.1 without issue. Then I decided to test ESXi 5.5 support, which is where I had to make some NIC driver adjustments. After some troubleshooting, I found that 5.5 didn’t include the correct NIC driver, a common problem with certain adapters. Since ESXi 5.5 at the time of testing didn’t include the Realtek 8111 LAN adapter driver, I went through the process outlined in this post on a new 5.5 build, and everything has run without a hiccup in the 3 months since.

The part I really wanted to test was how well a unit this size stood up against heavy I/O demands. Naturally I wanted to test the performance using FVP, so I decided to use 12GB of available memory as my acceleration resource. I could have added an SSD to the unit, but since I had memory available, it was the easiest to test against. To my amazement, a 16K, 50% read workload on the Shuttle DS81 kicked out some impressive results.

Over 115,000 IOPS at 20 microseconds is nothing short of spectacular. You may think that I have some fancy stuff on the storage backend. Well, if you call my $800 Synology box fancy, then yes! ☺ You can also see in the screenshot that the datastore latency isn’t impressive! Any workload that I haven’t pernix’d is feeling the pain!

I truly believe there are many use cases for the Shuttle DS81, and I hope to add some of these units to my lab as it grows. The power, cooling, and noise savings pay for themselves, at under $800 for the configuration chosen.

FVP Linked Clone Optimizations Part 2

In part 1 of this series, I talked about the replica disk optimizations that FVP provides for your linked clone environment. In part 2 the focus will be on the different use cases for persistent and non-persistent disks and how they relate to the acceleration FVP can provide to your VDI environment.

I often hear confusing remarks about what some call a persistent desktop versus a non-persistent desktop. I have found that this terminology is sometimes rooted in confusion between a linked clone and a full clone. It also depends on what criteria you base your definition of a non-persistent or persistent desktop on. For example, if you just look at linked clones, you will notice that several disks are either non-persistent or persistent, depending on your design decisions. If one looks only at a dedicated linked clone with Windows profile persistence, some may describe that linked clone as a persistent desktop.

The interesting thing is that Horizon View doesn’t refer to a linked clone in this way. The only time Horizon View refers to a persistent or non-persistent desktop is in the context of refreshing a cloned desktop. In other words, simply having linked clones doesn’t make yours a non-persistent, or even a persistent, VDI environment.

I also think some of the confusion revolves around the use of dedicated vs. floating assignment of linked clones. The dedicated configuration assigns each user a dedicated desktop, so if the user has multiple sessions, they will always reconnect to the same desktop by default. In a floating configuration, the user is assigned to a pool of desktops, which means they could log in to a different desktop with each new session. The only way to keep Windows profile persistence in the floating configuration scenario is to use a persona management solution outside the default configuration of View Composer.

So, when an admin decides to use a dedicated linked clone, View Composer gives the option to redirect the Windows profile to a persistent disk. This provides user personalization persistence during refresh, recompose, and rebalance operations. This is an optional setting, as seen in the screenshot; the default disk size is 2GB.


When you choose a floating assignment for linked clones, View Composer does not provide an option for a persistent disk, which means no user personalization will be retained after a refresh, recompose, or rebalance operation. If you choose not to redirect the Windows profile, the data is stored on the non-persistent delta disk. In either case, both read and write I/O will be accelerated by FVP. However, there will be a longer warm-up time for read acceleration when using the non-persistent delta disk for user profiles, depending on how frequent the refresh, recompose, and rebalance cycles are.

Whether you select floating or dedicated assignments, and whether or not you choose some level of Windows profile persistence, FVP will automatically accelerate reads and writes for all disks that are part of the desktop VM. In the past, the choice of when to schedule a recompose or rebalance operation carried varying importance. Now, with FVP offloading I/O from the storage array, a refresh, recompose, or rebalance operation has some breathing room to finish without impacting the production environment.

Delta Disk:
The delta disk is probably where most desktop I/O will be seen from a linked clone. The delta disk becomes active as soon as the desktop is booted from the replica disk. Any desktop changes are stored on the delta disk, so depending on the user and the desktop use case, the I/O profile can vary drastically. This does not impact FVP negatively, as FVP keeps context on which disk is more active and thus provides the resource intelligence for acceleration no matter the use case.

Disposable Disk:
A default configuration will have a separate non-persistent disposable disk, 4GB in size. Having this as a separate disk is recommended since it slows the growth of the delta disk between refresh, rebalance, and power-off tasks. This disk contains temp files and the paging file, so FVP can help normalize OS operations by accelerating the reads and writes associated with the disposable disk. If you choose not to redirect, this data will reside on the delta disk. There is no negative impact to FVP with either option. However, it’s a best practice to help control the growth of the delta disk between refreshes, and separating out the non-persistent disk helps alleviate bloated delta disks.

Internal Disk:
There is also an internal disk created with each cloned desktop. This disk is Thick Provision Lazy Zeroed, with a default size of 20MB. It stores Sysprep, QuickPrep, and AD account information, so very little I/O will come from this disk. Keep in mind that this disk is not visible in Windows, but it still has a SCSI address, so FVP will still recognize the disk and accelerate any I/O that comes from it. This is another advantage of being a kernel module: FVP recognizes disks not mounted in the Windows OS and still does its acceleration magic.

As you can see, no matter the configuration, FVP automatically captures all I/O from all disks that are part of a given desktop clone. Depending on the configuration, a desktop clone can have several disks, and knowing when or which disks are active or in need of resources at any given point is not an easy thing to determine. This is exactly why PernixData developed FVP: a solution that takes the guesswork out of each disk’s I/O profile. The only thing you are tasked with is deciding whether to accelerate the desktop or not! Talk about seamless and transparent; it doesn’t get any better than that!

The Server-Side Storage Intelligent System Revealed!

Yesterday PernixData introduced a revolutionary step forward in storage performance with the release of PernixData FVP 2.0. Several innovative features were revealed, and a technology first was dropped on the industry. Frank Denneman has already started a great series on some of the new features. So as not to let him have all the fun, I will also be covering some aspects of this new version!

The first big reveal was FVP transforming itself into an all-encompassing platform for storage optimization. Adding NFS and DAS to the already supported iSCSI, FC, and FCoE list completes all available connectivity options for VMware environments.

NFS support is obviously a welcome treat for many; it’s the support of local disk that might actually surprise some. I think optimizing DAS environments will provide some unique use cases for customers (future post coming). However, keep in mind that supporting DAS doesn’t void the use cases for VSA (Virtual Storage Appliance) software. PernixData only accelerates the reads and writes, so if you require data services, you may still need a VSA-type solution for your underlying local data-at-rest tier.

The biggest news, in my opinion, that really dropped the mic on the industry was the reveal of the first-ever distributed fault-tolerant solution utilizing server memory for read/write I/O acceleration. Yep, you heard it right: accelerating those very important writes, without the potential for data loss on volatile server memory, is a gigantic leap forward. Look for more details around DFTM (Distributed Fault Tolerant Memory) in the coming weeks!

I’m excited for the future and look forward to telling you more about these new advancements!

FVP Tip: Change Storage Device Display Name

As you might know, you have the ability to change a storage device’s display name on a particular ESXi host. This can be useful when you have several different devices installed on a given host and/or different RAID controllers backing the devices.

When you want to test several different flash device models with different controllers and configurations with PernixData FVP, it can become difficult to remember which identifier belongs to which device.

My recommendation is to add the name of the controller as an extension to a friendlier device name. This way you can monitor performance by SSD and its assigned controller. An example could be “Intel 520 – H310”, where the SSD model is represented and the controller is identified as an H310 on a Dell host.

 

 

vSphere Web Client Steps:

  1. Browse to the host in the vSphere Web Client navigator. Click the Manage tab and click Storage.
  2. Select the device to rename and click Rename. 
  3. Change the device name to a name that reflects your needs.

 

Now that you have renamed your flash device, you will see the changed device name show up in the FVP plugin UI.

Give Me Back My Capacity

Last week I was preaching the PernixData message in Tampa, Florida! While there, I received a question about a benefit of PernixData that I believe is often overlooked in virtualized environments.

The question related to how PernixData FVP can give you more storage capacity from your already deployed storage infrastructure. There are actually several ways FVP can give you more capacity for your workloads, but today I will focus on two examples. To understand how FVP makes this possible, it’s important to understand how writes are accelerated. FVP intercepts all writes from a given workload and commits each write to local server-side flash for fast acknowledgment. This obviously takes a gigantic load off the storage array, since all write I/O is committed first to server-side flash. It’s this new performance design that allows you to regain some of the storage capacity you have lost to I/O performance architectures that are just too far from compute!

If you are “short stroking” your drives, there is now no need to waste that space; use FVP to get even better performance without the huge costs associated with short stroking. Another example is when you have chosen RAID 10 (also known as RAID 1+0) to increase performance through striping and redundancy through mirroring. Why not get up to 50% of your capacity back, move to RAID 6 or RAID 5 for redundancy, and use FVP for the performance tier? As you can see, this opens up a lot of possibilities and allows you to save money on disk and gain additional capacity for future growth.

Try this RAID calculator and see how much capacity you can get back when using an alternate RAID option with FVP! 

Where are you measuring your storage latency?

I often hear from vendors, and from virtualization and storage admins, about where they see storage latency in a particular virtualized environment. The interesting part is that there is a wide disparity between what is communicated and what is realized.

If storage latency is an important part of how you measure performance in your environment, then where you measure latency really matters. If you think about it, the VM latency is the end result: the realized storage latency. The problem is that everyone has a different tool or place where they measure latency. If you look at latency at the storage array, you are only really seeing latency at the controller and array level. This doesn’t always include the latency experienced on the network or in the virtualization stack.

What you really need is visibility into the entire I/O path to see the effective latency of the VM. It’s the realized latency at the VM level that is the end result and what the user or admin experiences. It can be dangerous to focus your attention on only one part of the latency in the stack and then base decisions on it as if it were the latency the application sees.

To solve this problem, PernixData provides visibility into what the VM is observing, and since FVP is a read/write acceleration tier, you can also see a breakdown of latency in terms of read and write acknowledgements.

As an example, using the new zoom function in the FVP 1.5 release, I can see the latency breakdown for a particular Write Back enabled SQL VM.

 

 

As you can see in this graph, the “Datastore” latency on the array spiked to 7.45 milliseconds, while the “Local Flash” on the host is at 0.25 ms (250 microseconds). The “VM Observed” latency is what the VM actually sees, and thus you have a realized latency of 0.30 ms (300 microseconds)! The small difference between Local Flash latency and VM Observed latency can be due to system operations such as flash device population, as well as whether write redundancy is enabled.

To see this from a read/write perspective, you can also go to the "Custom Breakdown" menu and choose "Read" and "Write" to see the "VM Observed" latency broken down into reads and writes. 

 

As you can see, the latency for this application came from writes, not reads, and since this VM is in Write Back mode we are seeing a realized 0.44 ms (440 microseconds) committed acknowledgment back to the application!

This is obviously not the only way to determine the actual latency for your application, but what is unique is that PernixData is not building yet another latency silo. In other words, there are plenty of storage products on the market that give a great view into their own perfect world of latency, but it’s isolated and not the full picture of what is observed where it matters in your virtualized datacenter.

 

PernixData FVP & StorMagic SvSAN Use Case

In continuing to look at alternate ways to provide a good-ROI capacity layer with PernixData FVP, Frank Denneman and I will be doing a couple of posts on some unique designs with FVP. As I demonstrated in a previous post, FVP accelerates the reads and writes for virtual workloads, while a virtual storage appliance (VSA) can be a great technology to provide the primary storage and data services for virtual workloads.

With this post, I will focus on StorMagic and their iSCSI-based VSA product named SvSAN. Here are a couple of interesting notes about SvSAN that might actually surprise you! StorMagic claims they have one of the largest deployments of any VSA in the market; in 2013 alone they had over 800 percent growth. They are also currently the only VSA that can start with two nodes without needing a local third host for a quorum response during host isolation situations. (More on this later.)

A few interesting features:

-       vCenter plugin to manage all VSAs from a central point

-       Multi-Site Support (ROBO/Edge: remote office, branch office, enterprise edge)

-       Active/Active Mirroring

-       Unlimited Storage & Nodes per Cluster

 

I think SvSAN and FVP combined can provide a great ROI for many environments. In order to demonstrate this, we need to go a little deeper into where each of these technologies fits in the virtualized stack.

Architecture:

SvSAN is deployed on a per-host basis as a VSA. PernixData FVP, however, is deployed as a kernel module extension to ESXi on each host. This means the two architectures do not conflict from an I/O path standpoint. The FVP module extension is installed on every host in the vSphere cluster, while SvSAN only needs to be installed on the hosts that have local storage. Hosts that don’t have local storage can still participate in FVP’s acceleration tier and also access SvSAN’s shared local storage presented from the other hosts via iSCSI.

Once both products have been fully deployed in the environment it’s important to understand how the I/O is passed from FVP to SvSAN. I have drawn a simple diagram to illustrate this process. 

You will notice that really the only difference from a traditional storage array design with FVP is that you are now able to use the local disks in each host. SvSAN presents itself as iSCSI, so the I/O passes through the local VSA to reach the local disk. Since virtual appliances have some overhead in processing I/O, it becomes advantageous with such a design to include PernixData FVP as the acceleration tier. This means only unreferenced blocks need to be retrieved from SvSAN storage, and all other active blocks are acknowledged from FVP’s local flash device. This takes a huge I/O load off of SvSAN and also provides lower latency to the application.

Fault Tolerance:

When any product is in the data path it becomes very important to provide fault tolerance and high availability for given workloads. SvSAN provides the data fault tolerance and high availability through its creation of a datastore mirror between two SvSAN VSA hosts.

This means if a host goes down or if the local storage fails, a VM can still continue with operations because SvSAN will automatically switch the local iSCSi connection to the mirrored host where there is consistent duplicated data.

The mirroring is done synchronously and guarantees data acknowledgment on both sides of the mirror. I think the really cool part is that SvSAN can access either side of the mirror at any time without disrupting operations, even during FVP performance acceleration! The fault tolerance built into FVP is designed to protect writes that have been committed and acknowledged on local/remote flash but haven’t yet been destaged to the SvSAN layer. Once FVP has destaged the required writes to SvSAN, SvSAN’s mirrored datastore protection becomes relevant to the design.

Centralized Management in an Edge Environment:

As noted before, SvSAN only requires two hosts for quorum during host isolation situations, where hosts or local storage are lost. This is accomplished through a separate service (NSH – Neutral Storage Host) that can be installed in a central location on either a physical or virtual machine. It’s this centralization of the quorum service that can alleviate additional localized costs and management overhead. As with FVP, SvSAN can be managed from a vCenter plugin for centralized management. This means one can manage hundreds of enterprise edge sites for primary storage, while also providing centralized FVP management for each performance cluster using SvSAN. This is illustrated in the diagram below.

It’s the low acquisition costs and simple management that have made VSAs popular in ROBO-type environments. This can be great for primary storage at the enterprise edge, but maybe not so great for applications needing increased localized performance. The options for achieving a high-performing, cost-effective storage solution for a virtualized remote environment have been limited in the past. Not until PernixData FVP was there a solution where you can use inexpensive primary storage like a VSA and also have a read/write performance tier that provides extremely low latency to applications. The amazing part is that all of this is accomplished through software, not another physical box.

This post was just meant to be an introduction and a high-level look at using StorMagic’s VSA technology alongside PernixData FVP. I hope to go much deeper into how each of these technologies works together in future posts.

This is a simple diagram showing centralized management with FVP and SvSAN in a single 2-host edge site. 

The First Flash Hypervisor

It's now official: the world has its first Flash Hypervisor. PernixData has created a transformative technology that will have a resounding effect on the future datacenter.

PernixData FVP 1.0 ships today: the first release of what will become omnipresent in the virtualized world. The growth of virtualization has created a need to accelerate today's applications and allow businesses to continue taking advantage of virtualization. There is only one complete solution on the market that addresses this need and takes your datacenter to the next level!

It's the Flash Hypervisor layer in the virtualization stack that will become ubiquitous, because of its ability to scale, accelerate, and manage the world's modern workloads. Check out the details in our latest datasheet.

So, join the revolution and download the 60-day trial today!

Capacity & Performance = VSA + FVP

A couple of weeks ago Frank Denneman did a great post on why virtual appliances used for data path acceleration are not desirable if you are trying to achieve low latency in your environment. Frank outlined why a hypervisor kernel module is the preferred way to accelerate I/O. I highly recommend you read his post before you go any further.

Even though virtual appliances are not the best at performance, there are still many reasons why you might want to deploy a VSA (Virtual Storage Appliance). For one, it’s typically lower cost and easier to manage, which is why you most often see VSAs in smaller or test/dev environments. The ability to aggregate local storage into a shared pool is another desirable reason to use a VSA.

I recently did some testing with a well-known virtual storage appliance along with PernixData’s Flash Virtualization Platform (FVP). I was amazed to find that this combination was truly a great way to deliver both storage capacity and performance. The VSA did what it does best, aggregating local storage into a capacity pool that can be easily managed, while FVP provided the performance required for the workloads.

Here is a simple diagram showing this use case… 

 

 

This use case provides several options to accelerate I/O. For example, if you choose a “Write Through” policy, all writes from a given workload will be acknowledged from the VSA storage pool, while FVP accelerates the read I/O. However, if you choose a “Write Back” policy, writes will be accelerated by the local flash devices in the cluster and then destaged appropriately to the VSA storage pool. In addition, the workloads you choose to accelerate could be VMs located on the VSA, or even the VSA itself! As for what to choose for your environment, I will have a separate post outlining which scenarios work best for a given FVP design choice.

This use case provides low latency and increased IOPS not typically seen with just a virtual appliance. So, depending on your objective and environment, this could be the winning ticket for storage capacity and performance. Stay tuned for more ways to take advantage of FVP!

PernixData – 5 Points of Differentiation

Since PernixData recently came out of stealth with their Flash Virtualization Platform, I thought it would be good to do a short breakdown of what makes PernixData so special and different from anything else in the industry.

1)   NO VSA – The Flash Virtualization Platform (FVP) from PernixData does not need or rely on any virtual appliance. It’s truly a hypervisor-based product that doesn’t have to deal with the latency of an appliance.

2)   NO OS/Guest Agents – There is also no need to install any operating system or guest agent. PernixData is invisible to any workload. The operating system or application only sees increased performance and lower latency!

3)   Not Just Reads – PernixData is not like traditional caching solutions where the only performance gain is from read operations. FVP delivers performance gains for write operations as well. (Think tiering instead of caching.)

4)   No Proprietary Flash – PernixData does not need or require proprietary SSD devices or PCIe-based flash solutions. FVP can use any type of flash-based device that is available.

5)   No Single Point of Failure – PernixData is the first to build a truly scale-out platform that can transparently leverage existing clusters and use local or remote server-side flash devices. This architecture is designed for read and write acceleration on local or remote hosts.

As you can see, these five “No’s” make PernixData different and revolutionary. Organizations can now say “Yes” to a platform that addresses their respective performance issues without sacrificing features or redundancy.