Where are you measuring your storage latency?

I often hear from vendors and from virtualization & storage admins about where they see storage latency in a given virtualized environment. The interesting part is the wide disparity between the latency that is communicated and the latency that is actually realized.

If storage latency is an important part of how you measure performance in your environment, then where you measure latency really matters. If you think about it, the latency the VM sees is the end result of all the latency accumulated along the I/O path. The problem is that everyone has a different tool or place where they measure it. If you look at latency at the storage array, you are only seeing latency at the controller and array level; that doesn't include the latency added on the network or in the virtualization stack.

What you really need is visibility into the entire I/O path to see the effective latency of the VM. It's the realized latency at the VM level that is the end result, and it's what the user or admin actually experiences. It can be dangerous to focus your attention on one part of the stack and then base decisions about application latency on that single data point.
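
If you want to sanity-check the VM-observed side yourself, vCenter already exposes virtual disk latency counters. Here is a minimal pyVmomi sketch that pulls the virtualDisk.totalReadLatency and virtualDisk.totalWriteLatency counters for one VM; the vCenter address, credentials, and the VM name "sql-vm-01" are placeholders you would swap for your own.

```python
# Minimal pyVmomi sketch: query the latency a VM actually observes at its
# virtual disks. vCenter host, credentials, and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
perf = content.perfManager

# Build a "group.name.rollup" -> counterId map so counters can be looked up by name.
counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
            for c in perf.perfCounter}
read_id = counters["virtualDisk.totalReadLatency.average"]
write_id = counters["virtualDisk.totalWriteLatency.average"]

# Find the VM by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql-vm-01")

# Ask for the most recent real-time sample (20-second interval).
spec = vim.PerformanceManager.QuerySpec(
    entity=vm,
    metricId=[vim.PerformanceManager.MetricId(counterId=read_id, instance="*"),
              vim.PerformanceManager.MetricId(counterId=write_id, instance="*")],
    intervalId=20,
    maxSample=1)

for result in perf.QueryPerf(querySpec=[spec]):
    for series in result.value:
        kind = "read" if series.id.counterId == read_id else "write"
        print(f"{series.id.instance} {kind} latency: {series.value[0]} ms")

Disconnect(si)
```

These counters report what the guest's virtual disks experience after the whole stack has added its overhead, which is exactly the vantage point this post is arguing for.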

To solve this problem, PernixData provides visibility into what the VM is observing, and since FVP is a read/write acceleration tier, it can also break latency down by read and write acknowledgements.

As an example, using the new zoom function in the FVP 1.5 release, I can see the latency breakdown for a particular Write Back enabled SQL VM.

[Figure: FVP latency graph showing "Datastore", "Local Flash", and "VM Observed" latency for the SQL VM]

As you can see in this graph, the "Datastore" on the array had a latency spike of 7.45 milliseconds, while the "Local Flash" on the host is at 0.25 ms (250 microseconds). The "VM Observed" latency is what the actual VM is seeing, and thus you have a realized latency of 0.30 ms (300 microseconds)! The small difference between Local Flash latency and VM Observed latency can be due to system operations such as flash device population, as well as whether write redundancy is enabled.
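
To make the arithmetic concrete, here is a toy blended-latency model; this is my own illustration, not FVP's internal accounting, and the hit rate is an assumed number chosen to reproduce the graph. The point is that when most I/O is served from flash, the VM-observed average stays pinned near flash latency even while the datastore spikes.

```python
# Toy blended-latency model (illustrative assumption, not FVP internals).
flash_ms = 0.25      # "Local Flash" latency from the graph
datastore_ms = 7.45  # "Datastore" spike from the graph
hit_rate = 0.993     # assumed fraction of I/O satisfied from flash

vm_observed_ms = hit_rate * flash_ms + (1 - hit_rate) * datastore_ms
print(f"Modeled VM Observed latency: {vm_observed_ms:.2f} ms")  # ~0.30 ms
```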

To see this from a read/write perspective, you can also go to the "Custom Breakdown" menu and choose "Read" and "Write" to see the "VM Observed" latency broken down into reads and writes. 

[Figure: "Custom Breakdown" view with "VM Observed" latency split into reads and writes]

As you can see, the latency for this application came from writes, not reads, and since this VM is in Write Back mode, we are seeing a realized write acknowledgment of 0.44 ms (440 microseconds) back to the application!
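
As a back-of-the-envelope illustration (the replication figure below is my assumption, not a measured number), a Write Back acknowledgment only has to wait on the local flash write plus any write-redundancy copy to a peer host, while a write-through acknowledgment would have to wait on the full datastore round trip:

```python
# Hypothetical acknowledgment-path timing. Numbers come from the graphs above,
# except peer_replica_ms, which is an assumed cost for the redundant peer copy.
flash_write_ms = 0.25   # local flash write
peer_replica_ms = 0.19  # assumed network/flash hop for write redundancy
datastore_ms = 7.45     # array write during the spike

write_back_ack = flash_write_ms + peer_replica_ms  # ~0.44 ms, matches the graph
array_ack = datastore_ms                           # what the app would otherwise wait on
print(f"Write Back ack: {write_back_ack:.2f} ms vs array ack: {array_ack:.2f} ms")
```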

This is obviously not the only way to determine the actual latency of your application, but what is unique is that PernixData is not building yet another latency silo. In other words, there are plenty of storage products on the market that give a great view into their own perfect world of latency, but that view is isolated; it's not the full picture of what is observed where it matters, at the VM level in your virtualized datacenter.