New Book Released

I’m proud to announce that I am a co-author on a new book that has just been released. It’s available now for download and later via print edition. Frank Denneman announced the official release on his blog, as he was the main organizer and author for this project. This is my first published book and my first time as a main co-author, and it was exciting and challenging at the same time. I can now, at some level, appreciate those who have tackled such a feat, as it wasn’t the easiest thing I have ever done! :)

A 300-page book covering the many architectural decisions involved in designing for storage performance is not something for the faint of heart. The focus and desire was to articulate a deeper technical understanding of the software layers that intersect and interact with differing hardware designs.

I would like to thank Frank for giving me the opportunity to work with him on this project. It was truly a delight and a rewarding experience!

You can directly download the book here. Enjoy!

Time Scale: Latency References

In a world where speed is of the utmost importance, it has become apparent to me that there is a notion of relativity in latency. In other words, one needs a form of measurement - a time scale to understand how fast something really was and/or is.

With high-frequency, low-latency trading, as depicted in this video, milliseconds are the name of the game. A loss of 2 milliseconds, for example, can mean losing millions of dollars, or can be the difference in preventing a catastrophic financial event.

Using this example, how can one feel what 2 milliseconds feels like? Can one tell the difference between 2 and 3 milliseconds? I find it fascinating that we as humans sometimes base what is fast or slow on what it feels like. How do you measure a feeling, anyway? We usually need a comparison (a baseline) to determine whether something is really faster or slower. I would argue that it’s often the result or effect of latency that we measure against. In low-latency trading, the effect or result can be devastating, so there is a known threshold not to go past. However, that threshold is constantly being lowered and challenged by competitive pressure. This means it’s important to constantly have latency references to measure against in order to determine whether the combined effect will be positive or negative.

This is why testing synthetic workloads to determine performance can give an inaccurate picture of what is truly fast or slow. Testing only one workload doesn’t capture the combined effect of all the disparate workloads and their interactions as a whole. Another inaccurate way to measure is to base decisions solely on what end users feel is faster or slower. It can be interesting to hear what the end user thinks, but it’s not an accurate way to measure the whole system. The results of all the work done (measured against a project, for example) are a better way to gauge the effect. This can obviously complicate the process of measuring, but there are places to focus that will give a more accurate view of latency effects as a whole, if one follows what I call the time scale reference.

Contrasting what we have historically deemed fast with what is on the horizon is not just interesting; it’s important for baselines. Proper latency measurements become important milestones for feeling the full effect of speed and acceleration.

Let’s say, for example, you had to take a trip from Atlanta, GA to San Francisco, CA in a truck carrying peaches, and you had two routes to choose from: one would take 3 days and the other 6 months. Now, if you wanted to take the scenic route, and you had tons of time and no peaches, you might want to take the longer route. However, if you took 6 months, those peaches would smell quite bad by the time you got to San Francisco! Using real-world analogies like this, on a time scale we can decipher, is important in order to see the differences and the effect they may have.

Why did I choose 3 days vs. 6 months for this example? A typical solid state drive has an average latency of around 100 microseconds; compare that to a standard rotational hard drive at about 5 milliseconds. If I scale these to show how drastic the time difference is between the two, it’s 3 days for the SSD and 6 months for the standard hard drive. Now I can really see and feel the variance between these two mediums, and why a simple choice like this can make a gigantic difference in the outcome. Let’s now take it up another level. What if we could travel with our truckload of peaches to San Francisco in 6 minutes instead of 3 days, or better yet, in 40 seconds? Today 6 minutes is possible, as it applies to standard DRAM, and 40 seconds isn’t too far off, as that is representative of the Intel and Micron 3D XPoint announcement.
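
To make that scaling concrete, here is a minimal sketch that stretches each latency by the same factor used in the peach-truck analogy. The device latencies are the rough, illustrative figures from this post, the ~100 ns DRAM number is my own assumption, and the hard-drive result comes out to roughly five months, which the analogy rounds up to six:

```python
# Scale rough storage latencies up to a human time scale, anchoring the
# peach-truck analogy: ~100 microseconds (SSD) maps to ~3 days of driving.
# All latency figures are illustrative, not benchmarks of specific devices.

SECONDS_PER_DAY = 86_400

latencies_s = {
    "SSD (~100 microseconds)": 100e-6,
    "Rotational HDD (~5 ms)": 5e-3,
    "DRAM (~100 ns, assumed)": 100e-9,
}

# Anchor the scale: 100 microseconds of device latency == 3 days of truck time.
scale = (3 * SECONDS_PER_DAY) / 100e-6

for name, latency_s in latencies_s.items():
    scaled_seconds = latency_s * scale
    if scaled_seconds >= SECONDS_PER_DAY:
        print(f"{name}: ~{scaled_seconds / SECONDS_PER_DAY:.0f} days on the truck scale")
    else:
        print(f"{name}: ~{scaled_seconds / 60:.1f} minutes on the truck scale")
```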

If I take these latency numbers and plug them into my datacenter, I can start to see how simple choices can have a negative or positive impact. You may now be saying to yourself, “Well, if I go with SSDs today, then tomorrow I’ll basically need to rip and replace my entire stack to take advantage of the newer latency thresholds, like 3D XPoint, or whatever comes next!” The exciting part is that you don’t have to replace your entire stack to take advantage of the latest and greatest. Your truck carrying those peaches just needs a turbo boost applied to the engine; you don’t need to buy a new truck. This is why choosing the proper platform becomes very important. Choosing the right truck the first time doesn’t tie your hands with vendor lock-in when it comes to performance.

In conclusion, I hope it’s now clear why proper baselines need to be established and real-world measurements need to be referenced. It’s not just a feeling of what is faster; we are past the point of recognizing the true speed of something as a feeling. It’s the cause and effect, or the result, that determines the effective threshold. Tomorrow the threshold could drop again with new innovations, which is all the more reason to have a time scale of reference. Without a reference point, people become accustomed to the world around them, missing out on what it really is like to travel from Atlanta to San Francisco in 40 seconds or less. Don’t miss out on the latency innovations of today and tomorrow; choose your platform wisely.

My daughter was inspired to draw this for me based on this post! :)


Why I Decided Not To Put Flash In The Array

My story starts about 3 years ago, when I was the Information Systems director for a large non-profit in Atlanta, GA. One of our initiatives at the time was to become 100% virtualized in 6 months, and there were obviously many tasks that needed to be accomplished before reaching that milestone. The first task was to upgrade the storage platform, as we had already surpassed its performance capabilities for our current workloads. As with any project, we looked at all the major players in the market, ran trials, talked to other customers, and did our due diligence. Being a non-profit, it was important for us to be mindful of costs, but we also wanted to be good stewards in everything we did.

The storage system we were looking to upgrade was a couple of 7.2K RPM, 24 TB chassis. We had plenty of capacity for our needs, but latency was in the 50ms range and we were getting only about 3,000 IOPS. Obviously not the best platform to run a virtualized environment on! We looked at the early all-flash arrays that were just coming out, and we also looked at hybrid arrays, all of them promising increased IOPS and lower latency. The problem was that they were not an inexpensive proposition. So the dilemma of being good stewards while needing single-digit latency and more than 50K IOPS was a challenge, to say the least.

About the same time, I met a gentleman who told me some magical stories that sounded almost too good to be true! That man was Satyam Vaghani, the PernixData CTO and creator of VVOLs, VAAI, and VMFS. Soon after meeting Satyam, I was given the privilege of getting my hands on an alpha build of PernixData FVP. I ran and tested the product through the alpha and beta stages, then immediately purchased it and became PernixData’s first paying customer. I had never purchased a product in beta before, but I felt this product was out of the ordinary. The value and the promise were proven even in beta: I didn’t have to buy new storage just for performance reasons, which saved the organization over $100,000. This wasn’t a localized problem; it was an architecture problem that no array or combination of storage systems could solve. If I were in that position today, I’m sure the calculation over 3 years would be close to $500,000 in savings, due to the scale-out nature of the FVP solution. As the environment grew and became 100% virtualized, I no longer had to think about storage performance in the same way, nor about the storage fabric connections. Talk about a good feeling: not only being a good steward but also astonishing the CFO with what was achieved.

To me, this validated the waste and inefficiency that occur when flash is used at the storage array layer. Disk is cheap when used for capacity, so it has never made sense to me to cripple flash performance by putting it behind a network in a monolithic box that can have its own constraints and bottlenecks.

Fast forward to today, where flash is much more prominent in the industry. The story is even stronger now: how can anyone not be conscientious about spending over $100K on a single array that can only achieve 90,000 IOPS at single-digit-millisecond latency, when a single enterprise flash drive costing around $500 can do over 50K IOPS at microsecond latency? The question that must be asked is: can you defend that decision to the CFO or CIO and feel good about it?
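
As a rough back-of-the-envelope comparison using only the round numbers above (illustrative figures, not quotes from any vendor), the cost-per-IOPS gap looks something like this:

```python
# Back-of-the-envelope cost-per-IOPS comparison using the round numbers
# from this post; these are illustrative figures, not vendor pricing.

array_cost, array_iops = 100_000, 90_000   # single all-flash array
drive_cost, drive_iops = 500, 50_000       # single enterprise flash drive in the host

print(f"Array: ${array_cost / array_iops:.2f} per IOPS")
print(f"Host-side flash drive: ${drive_cost / drive_iops:.2f} per IOPS")
```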

Don’t get me wrong; I’m not saying FVP replaces storage capacity. If you need capacity, then go and purchase a new storage array. However, that doesn’t mean you have to buy an AFA for capacity reasons. There are many cost-effective options out there that make more economic sense, no matter what dedupe or compression rates are promised!

My personal advice to everyone is to be a conscientious objector when deciding whether to put flash in the array. It didn’t make sense for me 3 years ago, and it still doesn’t make sense today.

Perception vs. Reality in Storage Performance

I’m sure we have all heard the saying that perception is reality; however, this doesn’t mean it’s the correct reality. In other words, it can be a false reality; a perceived notion doesn’t in and of itself make something true.

 

A recent tweet response from Duncan Epping

I believe this assertion from Duncan is on the money. I talk to many customers who claim storage performance isn’t a problem. I then look at their storage latency and it averages around 20ms or more; even though most of us would likely agree that 20ms is slow, the customer can seem perfectly happy with the current results.

I find this dichotomy fascinating, and I think it also relates to the smartphone industry. For example, when I upgraded to the iPhone 6, I immediately noticed a performance increase; however, after several weeks of use I was no longer aware of it, as I had become accustomed to the new normal. It wasn’t until I picked up my daughter’s iPhone 4S and started using it that I realized again the app performance and usability I had gained by going with the newest device.

The old saying that you don’t know what you’re missing until you try it also applies when talking about storage and application latency.

Frank Denneman’s latest post on knowing your application I/O workload characteristics addresses what I believe is one of the most difficult challenges in today’s virtual datacenter. The true utopia for a virtual admin is to understand the full workload pattern over the application lifecycle, as his post delves into. This, IMHO, is an area where there is a false perception. Many believe they understand their workloads, at least until something goes awry and challenges them to think otherwise. There are a ton of reasons for this: no good tools, communication disconnects between app dev and/or stakeholders, etc. As an application owner, how do you know what the app is capable of delivering? Do you know the perception of your end users, or have they now accepted a false reality?

Whether it’s a false perception or a false reality, the end result is the same: the admin doesn’t know what they are missing, nor do they have a capable tool to help them determine the actual reality and thus correct or challenge their perception.

It’s my hope that in 2015 more of us will test and challenge our perceptions, so that we can have a more accurate reality. Who knows what you may be missing!!!


Storage Features Responses

To finish my Storage Features Survey, I have collected the responses from the community on how they view primary storage from a virtualization, enterprise-worthy perspective.

Most Popular Enterprise Storage Feature Responses (rounded):

Multiple Controllers - 70% of Respondents

Thin Provisioning - 65% of Respondents

Replication Capabilities - 60% of Respondents

Snapshot Capabilities - 60% of Respondents

High Availability - 60% of Respondents

Non-Disruptive Upgrades - 60% of Respondents

VAAI Support - 55% of Respondents

RAID Support - 50% of Respondents

Flash Technology - 50% of Respondents

Capacity/Shelf Expansion - 50% of Respondents

There weren't any big surprises, but there were a couple of interesting responses. One of the higher responses was for "Replication Capabilities". I knew this would be popular; however, I did not envision it being this high on the list, since there are many other technologies in the market that can tackle replication from a different level in the stack (e.g., Zerto, Veeam). The other interesting response was "Flash Technology" at the array level. I know flash is very popular right now, but as you might know, it's not something I agree with 100% of the time. PernixData can use flash at the host level for storage performance and thus can negate the use of flash at the array. I'm not saying this is the case for all environments, but it's enough to change the storage landscape.

The biggest response was for multiple controller support! This isn't surprising, since high availability and non-disruptive upgrades were also high on the list.

Did any of the responses surprise you, or do you think this encompasses the most commonly requested enterprise features for primary storage?

Thanks for participating in the Storage Features Survey! 

Poll: Storage Features

I was recently talking with some peers about which storage features fit into the category of enterprise-worthy and are considered "must-haves."

This got me thinking about market conditions today and how we are flooded with so many new storage systems and features. This can not only confuse consumers but, I believe, also change perceptions of what is really important or real. My intent in this post is to study what truly makes up an enterprise storage system and what the must-have features of today are.

The first part of this study is a poll on what you believe to be the must-have features in the enterprise storage market. Please select only the must-haves and/or what is important to you and your company!

I will publish the anonymous results once everyone has had a chance to vote.

Keep in mind that these features are only intended to apply at the primary storage level for your array in a 100% VMware environment!  

This post is not sponsored nor endorsed by my employer! Personal Passion Only! 

http://goo.gl/forms/9HDCkknbvO

Give Me Back My Capacity

Last week I was preaching the PernixData message in Tampa, Florida! While there, I received a question that touches on something I believe is often overlooked when weighing the benefits of PernixData in your virtualized environment.

The question related to how PernixData FVP can give you more storage capacity from your already deployed storage infrastructure. There are actually several ways FVP can give you more capacity for your workloads, but today I will focus on two examples. To understand how FVP makes this possible, it’s important to understand how writes are accelerated. FVP intercepts all writes from a given workload and commits each write to local server-side flash for fast acknowledgement. This obviously takes a gigantic load off the storage array, since all write I/O is committed first to server-side flash. It’s this new performance design that allows you to regain some of the storage capacity you have lost to I/O performance architectures that are just too far from compute!

If you are “short stroking” your drives, there is now no need to waste that space; use FVP to get even better performance without the huge costs associated with short stroking. Another example is when you have chosen RAID 10 (also known as RAID 1+0) to increase performance through striping and redundancy through mirroring. Why not get up to 50% of your capacity back by moving to RAID 6 or RAID 5 for redundancy and using FVP as the performance tier? As you can see, this opens up a lot of possibilities and allows you to save money on disk while gaining additional capacity for future growth.
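
To put some numbers on that trade-off, here is a minimal sketch of the usable-capacity math for RAID 10 versus RAID 5/6, assuming equal-sized drives and ignoring hot spares and formatting overhead:

```python
# Usable capacity for common RAID levels, assuming n equal-sized drives.
# Hot spares and filesystem/formatting overhead are ignored for simplicity.

def usable_tb(n_drives: int, drive_tb: float, raid_level: str) -> float:
    raw = n_drives * drive_tb
    if raid_level == "RAID10":
        return raw / 2                      # mirror pairs: 50% usable
    if raid_level == "RAID5":
        return (n_drives - 1) * drive_tb    # one drive's worth of parity
    if raid_level == "RAID6":
        return (n_drives - 2) * drive_tb    # two drives' worth of parity
    raise ValueError(f"unsupported RAID level: {raid_level}")

for level in ("RAID10", "RAID6", "RAID5"):
    print(f"{level}: {usable_tb(12, 4.0, level):.0f} TB usable from 12 x 4 TB drives")
```

Moving the same twelve 4 TB drives from RAID 10 to RAID 6 in this sketch frees roughly 16 TB, which is exactly the kind of headroom the question in Tampa was getting at.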

Try this RAID calculator and see how much capacity you can get back when using an alternate RAID option with FVP! 

Where are you measuring your storage latency?

I often hear from vendors and from virtual and storage admins about where they see storage latency in a particular virtualized environment. The interesting part is the wide disparity between what is communicated and what is actually realized.

If storage latency is an important part of how you measure performance in your environment, then where you measure latency really matters. If you think about it, the latency seen at the VM is the end result of all the storage latency along the path. The problem is that everyone has a different tool or place where they measure latency. If you look at latency at the storage array, then you are only seeing the latency at the controller and array level; this doesn’t include the latency experienced on the network or in the virtualization stack.

What you really need is visibility into the entire I/O path to see the effective latency of the VM. It’s the realized latency at the VM level that is the end result and what the user or admin actually experiences. It can be dangerous to focus your attention on only one part of the latency in the stack and then base decisions about application latency on it.
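
As a conceptual illustration only (not how FVP itself accounts for latency), the latency an application effectively sees is a blend of where each I/O is actually served, which is why measuring at a single layer can mislead you. The hit-ratio model and the numbers below are assumptions for illustration:

```python
# Conceptual illustration: the latency a VM effectively observes is a
# weighted blend of where its I/O is actually served. The hit-ratio model
# and the example numbers are assumptions for illustration only.

def effective_latency_ms(hit_ratio: float, flash_ms: float, array_ms: float) -> float:
    """Blend of server-side flash latency and array latency for a given hit ratio."""
    return hit_ratio * flash_ms + (1.0 - hit_ratio) * array_ms

flash_ms, array_ms = 0.25, 7.45   # figures similar to the graph discussed below

for hit_ratio in (0.0, 0.5, 0.9, 0.99):
    print(f"{hit_ratio:>4.0%} served from flash -> "
          f"~{effective_latency_ms(hit_ratio, flash_ms, array_ms):.2f} ms observed")
```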

To solve this problem, PernixData provides visibility into what the VM is observing, and since FVP is a read/write acceleration tier, it can also show a breakdown of latency for read and write acknowledgements.

As an example, using the new zoom function in the FVP 1.5 release, I can see the latency breakdown for a particular Write Back enabled SQL VM.

 

 

As you can see in this graph, the “Datastore” latency on the array spiked to 7.45 milliseconds, while the “Local Flash” on the host is at 0.25 ms (250 microseconds). The “VM Observed” latency is what the VM actually sees, and thus you have a realized latency of 0.30 ms (300 microseconds)! The reason there can be a small difference between Local Flash latency and VM Observed latency is due to system operations such as flash device population, as well as whether write redundancy is enabled.

To see this from a read/write perspective, you can also go to the "Custom Breakdown" menu and choose "Read" and "Write" to see the "VM Observed" latency broken down into reads and writes. 

 

As you can see, the latency for this application was on writes, not reads, and since this VM is in Write Back mode, we are seeing a realized 0.44 ms (440 microseconds) for committed write acknowledgements back to the application!

This is obviously not the only way to determine the actual latency of your application, but what is unique is that PernixData is not building yet another latency silo. In other words, there are plenty of storage products on the market that give a great view into their own perfect world of latency, but it’s isolated and not the full picture of what is observed where it matters in your virtualized datacenter.

 

QuadStor - An Update

It’s been a while since I did a post on QuadStor, a storage virtualization product that is now open source. Since my last post, several new features and changes have been made to this unique storage product.

QuadStor is now free and licensed under GPL v2. This opens up access to the product and allows more flexibility for those who want to use it in a home lab, for example: http://www.quadstor.com/open-source.html

QuadStor has also created a Google group for support questions that is monitored by QuadStor support. They offer a paid support model as well, if you are interested.

http://groups.google.com/group/quadstor-virt

A couple of notes I found interesting when installing QuadStor on a supported platform (FreeBSD 8.2/9.0 release, RHEL/CentOS 5.x/6.x, SLES 11 SP1/SP2, Debian squeeze 6.0.x):

  • The resource recommendation is to add 2 GB of memory for every 1 TB of storage, or 4 VDisks, or 2 physical disks configured (see the sketch after this list). Part of the reasoning is that QuadStor assumes ownership of 80% of the total memory of the server, so keep the base operating system requirements in mind.
  • Storage pools are a new feature released in 3.0.5, where different disks can be pooled together, for example an SSD pool for performance tiering!
  • QuadStor does not do RAID management. If RAID is needed, configure it on the underlying system.
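
Here is the sizing sketch mentioned above, based on that rule of thumb. This is my own planning helper, not a QuadStor tool, and since how the three dimensions (capacity, VDisks, physical disks) combine isn’t spelled out, taking the largest of the three is my assumption:

```python
# Rough memory-sizing helper based on the QuadStor guideline quoted above:
# ~2 GB of RAM per 1 TB of storage (or per 4 VDisks, or per 2 physical disks),
# plus whatever the base operating system itself needs. This is my own
# planning helper, not a QuadStor-provided tool.

def recommended_ram_gb(storage_tb: float, vdisks: int, physical_disks: int,
                       base_os_gb: float = 4.0) -> float:
    per_storage = 2.0 * storage_tb
    per_vdisks = 2.0 * (vdisks / 4)
    per_physical = 2.0 * (physical_disks / 2)
    # Size for whichever dimension demands the most, then add OS overhead
    # (combining them this way is my assumption, not a documented formula).
    return max(per_storage, per_vdisks, per_physical) + base_os_gb

print(f"~{recommended_ram_gb(storage_tb=8, vdisks=6, physical_disks=4):.0f} GB RAM "
      f"suggested for an 8 TB configuration")
```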

I should also note that I have tested QuadStor with PernixData in my virtual lab, where PernixData gave me the performance and QuadStor was my capacity and data services play!

Capacity & Performance = VSA + FVP

A couple of weeks ago, Frank Denneman did a great post on why virtual appliances used for data path acceleration are not desirable if you are trying to achieve low latency in your environment. Frank outlined why a hypervisor kernel module is the preferred way to accelerate I/O. I highly recommend you read his post before you go any further.

Even though virtual appliances are not the best performers, there are still many reasons why you might want to deploy a VSA (Virtual Storage Appliance). For one, a VSA typically costs less and is easier to manage, which is why you most often see VSAs in smaller or test/dev environments. The ability to aggregate local storage into a shared pool is another reason to use a VSA.

I recently did some testing with a well-known virtual storage appliance along with PernixData’s Flash Virtualization Platform (FVP). I was amazed to find that this combination was truly a great way to deliver both storage capacity and performance. The VSA did what it does best, aggregating local storage into an easily managed capacity pool, while FVP provided the performance required for the workloads.

Here is a simple diagram showing this use case… 

 

 

This use case provides several options to accelerate I/O. For example, if you choose a “Write Through” policy, then all writes from a given workload will be acknowledged from the VSA storage pool while FVP accelerates the read I/O. If you choose a “Write Back” policy, writes will be accelerated by the local flash devices in the cluster and then de-staged appropriately to the VSA storage pool. In addition, the workloads you choose to accelerate could be VMs located on the VSA, or even the VSA itself! As for what to choose for your environment, I will have a separate post outlining which scenarios work best for a given FVP design choice.
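
Here is a stripped-down sketch of the difference between the two policies. It is conceptual only; FVP’s real data path is far more involved, and every class and function name below is made up for illustration:

```python
# Conceptual sketch of write-through vs. write-back behaviour. This
# illustrates the two policies in general terms; it is not FVP's actual
# data path, and every name here is made up for illustration.

class Flash:
    """Stand-in for a local server-side flash device."""
    def write(self, block: bytes) -> None:
        print(f"flash: committed {len(block)} bytes")

class BackingStore:
    """Stand-in for the VSA storage pool behind the acceleration tier."""
    def __init__(self) -> None:
        self.destage_queue: list[bytes] = []
    def write(self, block: bytes) -> None:
        print(f"backing store: committed {len(block)} bytes")
    def enqueue(self, block: bytes) -> None:
        self.destage_queue.append(block)   # written out later, in the background

def handle_write(block: bytes, policy: str, flash: Flash, store: BackingStore) -> None:
    if policy == "write-through":
        store.write(block)     # acknowledgement waits on the backing store
        flash.write(block)     # flash is populated so later reads are accelerated
    elif policy == "write-back":
        flash.write(block)     # acknowledged as soon as local flash commits
        store.enqueue(block)   # de-staged to the VSA pool asynchronously

handle_write(b"some data", "write-back", Flash(), BackingStore())
```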

This use case provides low latency and increased IOPS not typically seen with just a virtual appliance. So, depending on your objective and environment, this could be the winning ticket for storage capacity and performance. Stay tuned for more ways to take advantage of FVP!

Basic Primer: IOPS

If you are a virtualization admin, then you have most likely had to get your feet wet learning the ever-expanding storage market. As the virtualization market has grown, so has the amount of storage in the datacenter, which has put a renewed focus on understanding how storage behaves in a virtualized cluster.

In the past, memory and CPU were what most people gravitated toward when performance problems arose, while the ticking time bomb in the growth of the virtualized datacenter has been storage performance.

The goal of this post is to give a snapshot understanding of why IOPS is one important metric to evaluate when looking at storage performance.

I’m not going to get into detailed performance characteristics, as that’s not the intent of this post.

Basic Primer:

  • IOPS means input/output operations per second (a way to measure storage performance on a disk, SAN, SSD, etc.).
  • As a general rule, the higher the IOPS, the better or faster the storage is performing.
  • The closer a disk is to CPU/memory, the faster the processing time; network latency can be a huge factor in performance.
  • IOPS is not the only performance metric to look at: throughput and latency are also very important and can affect performance. The best scenario is high IOPS, low latency, and high throughput; see the sketch after this list.
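
Here is the sketch referenced in the list above, showing an idealized relationship between the three metrics for a single outstanding I/O stream (the block size and latency figures are assumptions chosen only to illustrate the math):

```python
# Idealized relationship between IOPS, throughput, and latency for a single
# outstanding I/O stream. Block size and latency figures are assumptions
# chosen only to illustrate the math, not measurements of any real device.

def iops_from_latency(latency_ms: float) -> float:
    """With one outstanding I/O, IOPS is roughly the inverse of per-I/O latency."""
    return 1000.0 / latency_ms

def throughput_mbps(iops: float, block_size_kb: float) -> float:
    """Throughput is simply IOPS multiplied by the I/O size."""
    return iops * block_size_kb / 1024.0

for latency_ms in (5.0, 1.0, 0.1):   # HDD-ish, hybrid-ish, flash-ish
    iops = iops_from_latency(latency_ms)
    print(f"{latency_ms:>4.1f} ms latency -> ~{iops:,.0f} IOPS, "
          f"~{throughput_mbps(iops, block_size_kb=8):,.1f} MB/s at 8 KB blocks")
```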

Think of an input or output operation as a chunk of data that needs to be written to or read from disk. As an example, suppose an Exchange database needs to retrieve a list of mailbox objects (Get-MailboxDatabase). This transaction requires information to be accessed from the disk/VMDK as a set of inputs and outputs, which the CPU/memory of the host system will process. How fast this happens depends on many things, but it can be measured by the number of operations per second completed and acknowledged back to the application.

This simple example hopefully gave you a better understanding of I/O and why it can be an easily overlooked area when it comes to application performance.

If you would like to do a deep dive into storage performance metrics, there is no need for me to recreate the wheel, as others have done an awesome job telling this part of the story.

http://www.petri.co.il/avoid-stroage-io-bottlenecks.htm

http://vmtoday.com/2009/12/storage-basics-part-ii-iops/

http://www.symantec.com/connect/articles/getting-hang-iops

http://storageioblog.com


Storage Field Day #3

Starting tomorrow, Storage Field Day #3 kicks off in Denver. This year I'm really excited about the presentations! I am a big proponent of the new era of storage tech that proposes to resolve many of the scale and performance dilemmas. One such company, which I'm proud to be the first customer of, is PernixData.

On April 25th, 1:30-3:30, PernixData will demonstrate their answer to the complex problem of scale and performance.

You can catch the live stream here: 

 

PernixData – 5 Points of Differentiation

Since PernixData recently came out of stealth with their Flash Virtualization Platform, I thought it would be good to do a short breakdown of what makes PernixData so special and different from anything else in the industry.

1) NO VSA – The Flash Virtualization Platform (FVP) from PernixData does not need or rely on any virtual appliance. It’s truly a hypervisor-based product that doesn’t have to deal with the latency of an appliance.

2) NO OS/Guest Agents – There is no need to install any operating system or guest agent. PernixData is invisible to the workload; the operating system or application only sees increased performance and lower latency!

3) Not Just Reads – PernixData is not like traditional caching solutions, where the only performance gain is on read operations. FVP can deliver performance gains on write operations as well. (Think tiering instead of caching.)

4) No Proprietary Flash – PernixData does not need or require proprietary SSD devices or PCIe-based flash solutions. FVP can use any type of flash-based device that is available.

5) No Single Point of Failure – PernixData is the first to build a truly scale-out platform that can transparently leverage existing clusters and use local or remote server-side flash devices. This architecture is designed for read and write acceleration on local or remote hosts.

As you can see, these five “No’s” make PernixData different and revolutionary. Organizations can now say “Yes” to a platform that addresses their respective performance issues without sacrificing features or redundancy.

PernixData - Solving the I/O Bottleneck

As some of you already know, PernixData came out of stealth yesterday. I have been eagerly waiting for this moment to share how I think Poojan Kumar (CEO), Satyam Vaghani (CTO), and their great team plan to take the lead on a new market opportunity.

I have had the privilege and opportunity of testing the Flash Virtualization Platform from PernixData, and I can honestly tell you that it works, and works well. The technology is truly revolutionary, and I plan to post several times over the coming weeks and months about this new innovation.

The best way to describe FVP (the Flash Virtualization Platform) is to look at it from a data tier perspective rather than as just a caching solution. It’s easy to call it a caching solution only because there isn’t anything else like it. It’s the breadth of this new platform that masters I/O workloads and commoditizes the use of flash in compute.

 

The I/O bottleneck between storage and compute has hampered the industry for some time. This started to change when VMware released the VAAI APIs, but adoption was slow and expensive. Purchasing additional arrays was not the answer from a financial or technological perspective; there really needed to be a new technology to bring everything together. This is where PernixData comes into play, solving the scale and performance problems that have plagued many in the industry. This is done without vendor lock-in or architectural changes to the datacenter, saving organizations thousands of dollars.

To join the beta program, send an email request to beta@pernixdata.com

Congratulations, PernixData, for creating a solution for the SMB and enterprise markets that solves a known virtualization/cloud problem.