Synology NFS VAAI

A few days ago Synology released their latest DSM version, 5.1-5004. I have been impressed with the amount of innovation and enterprise features Synology brings to the storage market. With this new release, Synology adds NFS VAAI support alongside its existing iSCSI VAAI support. The two currently supported primitives for NFS are Full File Clone and Reserve Space. I hope Synology also considers adding additional NFS primitives in the near future!

In order to take advantage of the two new primitives, you need to install the NFS VAAI plugin. This is basically a VIB that can be installed on each host through Update Manager or esxcli commands.

1)    Copy the VIB to a temporary location such as /tmp

2)    Run esxcli software vib install -v /tmp/esx-nfsplugin.vib

3)    Reboot the host

Once installation is complete, you can verify the VIB installed successfully by running: esxcli software vib list | more

This lists all installed VIBs. As you can see, I have the PernixData VIB installed in addition to the NFS VIB. 
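Since the full VIB list can be long, piping it through grep narrows it down. The sketch below simulates that filter against sample output (the version, vendor, and date fields are assumed for illustration; on a real host you would pipe the live esxcli output instead):

```shell
# Hypothetical sample of `esxcli software vib list` output (fields assumed):
vib_list="Name             Version   Vendor      Acceptance Level   Install Date
esx-nfsplugin    1.0-1     Synology    VMwareAccepted     2014-11-10
pernixdata-fvp   2.0.0     PernixData  PartnerSupported   2014-10-01"

# On a real host you would run: esxcli software vib list | grep -i nfs
echo "$vib_list" | grep -i nfs
```

If the plugin installed correctly, the esx-nfsplugin line shows up; if grep returns nothing, the VIB isn't present.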


You can also check in vCenter whether "Hardware Acceleration" says "Supported". You can see that both my iSCSI and NFS datastores from my Synology are supported! Since PernixData now supports NFS, I can start realizing IO acceleration for VMs using NFS! 


Clustered VAAI?

In reading the recently updated VMware vSphere Storage APIs - Array Integration (VAAI) white paper, I noticed a statement that caught my eye.

"VMware does not support VAAI primitives on VMFS with multiple LUNs/extents if they all are on different arrays."

I understand the difficulty in doing this, but it makes me wonder if the coming VMware vVols will be the technology that gives the vAdmin the capability of crossing array boundaries on a single LUN that supports VAAI primitives. 

If anybody has thoughts or insight to this, please tweet or comment.




QuadStor Update Released

QuadStor released version 3.0.12 today, adding several important features. QuadStor is becoming very refined and capable of competing with many enterprise storage solutions.

Feature List:

1. Synchronous mirroring
2. Disk partitions and Linux LVM volumes can be configured as physical storage
3. FreeBSD 9.0 support
4. Many bug fixes in the HA and clustering code, plus support for node fencing

Documentation has been updated in

Synology RS10613XS+ and RX1213sas

At VMworld 2012 this year, I was able to get a glimpse of the next-generation SANs from Synology. The biggest excitement with these new SANs is the ability to have an SSD read cache. Since we now have VAAI available, these units have become enterprise-worthy. They are also expected to be available in a couple of months for under $10k. 

Check out some of the specs and options: 

  • More than 2,000 MB per second of throughput and more than 200,000 IOPS
  • Synology RX1213sas expansion unit: capacity expandable to more than 400TB
  • Supports dual 10GbE network ports (via a compatible PCIe add-in network adapter)
  • Compatible with VMware® / Citrix® / Microsoft® Hyper-V®
  • Passively cooled CPU and redundant system fans
  • Scalable ECC memory (up to 8GB)
  • Runs the Synology DiskStation Manager (DSM) operating system


Synology DSM 4.1 Released

Synology has released their new firmware DSM 4.1.

"Agile storage for virtual infrastructure: DSM 4.1 fully supports VMware® vSphere 5 vStorage APIs for Array Integration (VAAI) features. By integrating Full Copy, Zero Blocks, and Atomic Test and Set (ATS) features, Synology DiskStation and RackStation boost the performance of ESXi servers and optimize storage utilization to empower business with an efficient virtualized data center."

Download at:

Synology RS3412RPxs Review

Recently I noticed that the only sub-$10,000 SAN on VMware's VAAI HCL is the Synology RS3412RPxs. This SAN doesn't currently support all the VAAI APIs, but it does support Atomic Test and Set (ATS).

Their DSM 4.0 is really refined, and the setup was very easy. I bought the RPxs model because it has dual power supplies. I thought about purchasing a 10GbE NIC and loading this unit with SSD drives (which is supported), but then decided it might be better and cheaper to load the unit with 500GB Seagate Enterprise drives. This gives me 10 spindles with smaller drive sizes, hopefully delivering more IOPS at a lower cost. Fully loaded with drives in my configuration, this unit was only about $6,000!

To get "Hardware Acceleration: Supported" for VAAI on ESXi 5.0, remember to set up an iSCSI file-level LUN. This allows ESXi to use the Synology for VAAI.

I haven't done much in-depth testing, but I'm currently running our highly utilized production Exchange mailbox store on it, and it's running circles around our EMC VNXe that also hosts an Exchange server.

So far I recommend Synology; it's giving us features that the big boys only offer at $25k and above. Give me your feedback and questions.

QuadStor Review

As promised, here is my review of very promising software that has the potential to disrupt the SAN market. I say this because #QuadStor supports #VAAI and it's free. They are still in beta, and you do have to build your own appliance, but that is far cheaper than buying a $50k appliance just to get VAAI support. 

They are adding features every month: deduplication, compression, unified management, clustering, vdisk cloning, vdisk replication, and much more. 

They hope to be out of beta sometime in May or June, but I have been testing beta version 2.0.45 in VMware Workstation and it works great; the best part is that I get to test VAAI. I'm running it on #FreeBSD 8.2, but you can use Debian, Red Hat, SLES, and soon Ubuntu. 

I hope to do a part 2 to this post with a physical install of QuadStor, including more details on performance and my reaction.

Check them out at....

Here is a screenshot that shows VAAI (Hardware Acceleration) proof:

VAAI Proof

Where is the VAAI Support?

Over the past year I have been fascinated by the lack of economical #block-level storage that utilizes industry APIs like VMware's #VAAI. There are some examples where #NFS storage vendors have jumped into the pool of possibilities, but nothing to speak of when it comes to block-level storage. Promises have been made by all the big boys out there, but nothing to show as of yet. #VMworld 2011 was littered with storage vendors who all said they couldn't do VAAI until the end of 2011 or the beginning of 2012. It's now the middle of March 2012, and not even #EMC yet supports its #VNXe arrays with VMware's VAAI APIs.

@Sakacc noted in September 2011 that we would have VNXe VAAI support by Q4 2011. 

It's not that no vendors support VAAI on their storage; it's economical storage for the SMB market that is lacking. 

So after many hours of searching, I found a light at the end of the tunnel: a small startup that can give us storage with VAAI support for #free! Stay tuned!

VAAI Primer

Before I unleash the posts on my VAAI array project, I thought it best to make sure readers are somewhat familiar with VAAI.

VAAI (vStorage APIs for Array Integration) is a feature first introduced in ESX/ESXi 4.1 and later expanded in ESXi 5.0. It is a set of APIs developed to enhance performance in the vSphere infrastructure by offloading several tasks to compliant storage arrays. *More on the compatible storage arrays later*

VAAI Benefits:

  • Atomic Test & Set (ATS): atomically modifies sectors on a disk without having to use SCSI reservations, so other hosts' access to the LUN isn't locked out. This can increase performance many fold, depending on how many hosts access the same LUN. 

  • Clone Blocks/Full Copy/XCOPY: copies data directly on the supported array without resorting to the ESX software data mover moving data to/from the hosts and the array. With VAAI enabled, copying/cloning of data moves at the speed of the hardware array.

  • Block Zero (Write Same): zeros out a large number of blocks on the array during provisioning. This allows vSphere to speed up provisioning and move on to other tasks. 

  • Thin Provisioning Space Reclaim: uses the SCSI UNMAP command instead of SCSI writes and requires VMFS-5. This tells the array which blocks have been deleted so it can reclaim that space on a thin-provisioned LUN. 
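You can see which of these primitives a given device reports with `esxcli storage core device vaai status get`. The sketch below parses a sample of that output (the device identifier and status values are illustrative, not from a real host) to count the supported primitives:

```shell
# Illustrative sample of `esxcli storage core device vaai status get` output:
vaai_status="naa.6001405example0001
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported"

# On a real host: esxcli storage core device vaai status get
# Count how many primitives report as supported
# (the ': ' anchor keeps 'unsupported' lines from matching):
echo "$vaai_status" | grep -c ": supported"   # → 3
```

A device showing "unsupported" for a primitive simply falls back to the software path for that operation.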

Keep in mind there are several caveats when using VAAI; here are two that I think most people need to be aware of. 


  1. If the source and destination VMFS volumes have different block sizes, ESXi falls back to the default software data mover. For example, suppose you used an 8MB block size on ESXi 4.1, then upgraded to ESXi 5 and upgraded your file system to VMFS-5. You might expect the block size to change or to no longer matter, but the upgrade does not change the block size to the ESXi 5.0 default of 1MB. So if you add a VAAI-enabled array to the mix, hardware-assisted offload won't work until you recreate the datastores with the same block size. 
  2. If the source VMDK type is eagerzeroedthick and the destination VMDK type is thin, VAAI offload won't work. 
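One way to check a datastore's VMFS block size is `vmkfstools -Ph` against its volume path. The sketch below extracts the block size from a sample of that output (the label and capacity figures are assumed for illustration), so you can compare two datastores before expecting offloaded copies between them:

```shell
# Illustrative sample of `vmkfstools -Ph /vmfs/volumes/<datastore>` output:
info="VMFS-3.46 file system spanning 1 partitions.
File system label (if any): datastore1
Mode: public
Capacity 499.8 GB, 120.5 GB available, file block size 8 MB"

# On a real host: vmkfstools -Ph /vmfs/volumes/datastore1
# Pull out just the block size for easy comparison:
echo "$info" | sed -n 's/.*file block size \([0-9]* MB\).*/\1/p'   # → 8 MB
```

If the source and destination report different block sizes, expect the software data mover rather than hardware offload.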

This post was meant to be a short summary of VAAI. Feel free to comment on anything I missed or items you think will benefit readers. 



As you can see, this is my first post on CloudJock. I begin my journey by talking about a current project I'm working on, which kicks off a new series surrounding VMware's VAAI APIs. So stay tuned!