Atlanta VMUG Homerun

Just a quick post to share an awesome event coming up on June 1st. I will be speaking at the next Atlanta VMUG at SunTrust Park. We rented out an awesome location to share, eat, and watch the Atlanta Braves take on the Washington Nationals. I'm still working on my notes for the event, but I will be sharing an overview of Datrium and how Blockchain, IoT, and VMware vSphere intersect with Datrium's Distributed Log-Structured Filesystem.

We are working with our close friends at Diversified and TierPoint, and seating is extremely limited. We are expecting a very large turnout, so register early to get a seat and a ticket.

Global Data Efficiency

In today’s modern datacenters and hybrid cloud deployments, a new breed of data efficiency is required. As organizations scale their storage architecture, it becomes imperative to take advantage of economies of scale as more and more data enters the data lake. Not only is it important to monitor rising egress costs, but also the amount of active data, so that performance stays consistent even during ingest. In this next post of my series I will look at Datrium’s Global Deduplication technology and the power it brings to the infrastructure stack to handle data growth while achieving data efficiency at the same time.

Localized deduplication is now table stakes in storage architectures; however, if customers want to go to the next level of efficiency, Global Deduplication is required to compete in a multi-cloud world. Datrium decided early on to build an architecture with Global Deduplication as the foundation for data integrity and efficiency. It’s with this framework that customers can have assurance for their data, without compromising on performance and stability for their applications.

You may be asking, with all this Global Deduplication talk, what can I expect in regard to my own data efficiency? Every customer and data set is different; however, looking at our call-home data, we average 4.5x data reduction across our customer base. We anticipate this going even higher as our new Cloud DVX service becomes more widely used.

Within the Datrium DVX dashboard UI, a customer can look at the Global Deduplication ratios and compare them with the data reduction they are getting on a per-host basis on flash. As new data is received in RAM on each DVX-enabled host, we fingerprint each block inline for local deduplication on flash, in addition to compression and encryption. Then, as we write new blocks to the Datrium data node for persistence, DVX performs the global part of the deduplication by analyzing data from all hosts; whether the data originated from Linux, ESXi, Docker, etc., we automatically compare all data blocks. In the small lab example above, we are getting 3.9x efficiency with Global Deduplication and Compression. Keep in mind that as your DVX cluster grows larger and wider, on a single site or across multiple sites, the amount of referenceable data increases, which further improves deduplication efficiency.
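To make the mechanics concrete, here is a minimal, hypothetical sketch of content-based fingerprinting: each fixed-size block is hashed, and a block is only written if its fingerprint is not already in a global index. The block size, hash choice, and dictionary-based index are illustrative assumptions for the sketch, not Datrium's actual implementation.

```python
import hashlib

BLOCK_SIZE = 4096          # illustrative fixed block size (assumption)
global_index = {}          # fingerprint -> stored block, conceptually shared across hosts

def ingest(data: bytes):
    """Fingerprint each block inline; persist only blocks not seen before."""
    written = referenced = 0
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()   # crypto-hash of the block
        if fingerprint in global_index:
            referenced += 1                               # duplicate: add a reference, no new write
        else:
            global_index[fingerprint] = block             # unique: store the block once
            written += 1
    return written, referenced

# Example: two identical 8 KiB buffers -- the second ingest writes nothing new.
print(ingest(b"A" * 8192))   # (1, 1): one unique block stored, one duplicate referenced
print(ingest(b"A" * 8192))   # (0, 2): everything already known globally
```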

Let’s now look at some additional points on what makes Datrium’s Global Deduplication technology so powerful for organizations. 

•    Datrium uses Blockchain-like (crypto-hashing) technology to provide verifiable correctness, so the determination of exactly what needs to be transmitted is absolute. This level of data integrity ensures that all deduplicated data is in a correct state at rest and in transit. (This is a separate post all in itself for a later date.)

•    Built into the new Datrium Cloud DVX fully managed service is a completely multi-site and cloud-aware solution. Datrium fingerprints all data with a crypto-hash on ingest. Let’s say, for example, you have two sites – primary and DR – and also an archive on AWS. When data needs to be moved between sites, the DVX software first exchanges fingerprints to figure out what data needs to be copied to the remote sites. Then, only unique data is sent to the remote sites (see the sketch after this list). This automatically provides WAN optimization and a decrease in RTOs. The result is dramatic savings, especially on cloud egress costs.

•    Always-On Global Deduplication. Datrium provides the software intelligence to handle all data types and workloads, providing data efficiency locally and globally without having to decide whether dedupe should be on or off for a particular workload.

•    Datrium can seed remote sites automatically and can also use existing data sets for metadata exchange to optimize replication. As an example, DR environments no longer have to worry about pre-seeding and can instantly take advantage of Datrium’s Global Deduplication for replication savings.

•    Datrium Cloud DVX can replicate multiple sites to the cloud for backup/archival. In that use case, deduplication in the cloud is truly global: data from all sites is deduplicated against one another as it is stored in Amazon S3.

•    Datrium never needs to send fulls (VMs or files) over the wire on a periodic basis. It’s a forever-incremental process: because we always compare against what has already been sent to or received from other sites, we never send a full again; only the increment over what’s already there is transmitted.
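As mentioned in the fingerprint-exchange bullet above, here is a minimal sketch of that idea: the source site sends fingerprints first, the destination reports which ones it is missing, and only those blocks cross the WAN. The function names and the dict standing in for a remote site are illustrative assumptions, not the actual DVX protocol.

```python
import hashlib

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def replicate(source_blocks, destination_store):
    """Exchange fingerprints first; ship only the blocks the destination lacks."""
    fingerprints = {fingerprint(b): b for b in source_blocks}
    # Step 1: metadata-only exchange -- the destination reports what it is missing.
    missing = [fp for fp in fingerprints if fp not in destination_store]
    # Step 2: only unique data is sent over the WAN.
    for fp in missing:
        destination_store[fp] = fingerprints[fp]
    return len(missing), len(fingerprints) - len(missing)

# Example: the DR site already holds one of the two blocks, so only one is sent.
dr_site = {fingerprint(b"base image"): b"base image"}
sent, skipped = replicate([b"base image", b"daily change"], dr_site)
print(sent, skipped)   # 1 1
```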

Most solutions only look at deduplication from an at-target perspective, or only use localized fingerprints. It’s nice to see that Datrium took the extra time to put together an always-on solution that provides local, global, and over-the-wire deduplication.

Datrium Blanket Encryption

In part 1 of the Datrium Architecture series I discussed how a split architecture opens up a huge amount of flexibility in modern datacenter designs. In part 2 of this blog series, I will be talking about an industry-first feature that everyone concerned with security in their datacenter will want to take note of.

Datrium Blanket Encryption

Sure, there are products in the market that provide encryption for data at rest in a storage platform, but as of today there are no converged products that provide government-grade (FIPS 140-2) data encryption end-to-end. Organizations must look for a solution that is FIPS 140-2 validated, not FIPS 140-2 certified. Validation means NIST has evaluated the encryption scheme. "Certified", on the other hand, is technically meaningless and mostly marketing: it may be done in the spirit of NIST's requirements, but it hasn't been validated.

It's only with the Datrium DVX software platform that all I/O from an application/workload perspective is encrypted upon creation using the AES-XTS-256 crypto algorithm, in a solution validated for FIPS 140-2 compliance. Using the often underutilized AES-NI instruction set built into modern microprocessors, Datrium encrypts data in use and on access in RAM and flash, and in flight when the second "write" is synchronously sent to the data node for block durability. This means your data is encrypted while in use, in flight, and at rest, so there is no risk of compromise at any level in the I/O stack.
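For readers who want a feel for what AES-XTS-256 looks like in practice, here is a minimal sketch using the Python cryptography package (whose OpenSSL backend typically uses AES-NI when the CPU supports it). The key handling, per-block tweak derivation, and 4 KiB block size are illustrative assumptions, not Datrium's data-path implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# AES-XTS-256 uses a double-length (512-bit) key: two 256-bit AES keys.
key = os.urandom(64)

def encrypt_block(plaintext: bytes, block_number: int) -> bytes:
    """Encrypt one storage block with AES-XTS; the tweak is the block number."""
    tweak = block_number.to_bytes(16, "little")           # per-block tweak value (assumption)
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

def decrypt_block(ciphertext: bytes, block_number: int) -> bytes:
    tweak = block_number.to_bytes(16, "little")
    decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()

block = b"\x00" * 4096                                    # illustrative 4 KiB block
assert decrypt_block(encrypt_block(block, 42), 42) == block
```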

There is also no need for SEDs (self-encrypting drives); this implementation is software-based and is included at no added cost to the customer. The savings this brings to customers are huge, since SEDs are exorbitantly priced and, on top of that, you can't mix differing disk types in most systems today. Relying only on a drive-based data-at-rest encryption method then becomes an all-or-nothing implementation.

Blanket Encryption Use Cases: 

There are many use cases for Datrium's blanket encryption. The obvious ones are….

1) Drive or part replacement. 
2) Prevent network sniffing of I/O traffic. 
3) Rogue processes that tap into host memory. 
4) System theft. 
5) HIPAA & SLA compliance.

Today, Datrium uses an internal key management system for easy setup and management. With this, we support password rotation and startup in locked and unlocked modes; and, in full disclosure, you can also be assured that encryption keys are not stored in swap or in the core dump of the Datrium system.

Another cool feature is the shipping mode option, where the key is not stored persistently anywhere in the platform, so there is no risk of a data breach while the DVX platform is in transit. When the system is powered up in this locked mode, the administrator must provide the encryption password before the system will serve any data again.
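A minimal sketch of the general pattern behind a locked mode like this (not Datrium's actual key-management code): the data-encryption key is wrapped by a key derived from the administrator's password, so nothing persisted on the system is usable until the password is supplied at power-up. The KDF parameters and AES-GCM wrapping scheme here are assumptions for illustration only.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_kek(password: str, salt: bytes) -> bytes:
    """Derive a key-encryption key from the admin password (illustrative parameters)."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return kdf.derive(password.encode())

# At setup: wrap the randomly generated data key under the password-derived KEK.
salt, nonce = os.urandom(16), os.urandom(12)
data_key = os.urandom(64)                     # e.g. an AES-XTS data key, as in the sketch above
wrapped_key = AESGCM(derive_kek("admin passphrase", salt)).encrypt(nonce, data_key, None)

# At power-up in locked/shipping mode: only the correct password recovers the data key.
recovered = AESGCM(derive_kek("admin passphrase", salt)).decrypt(nonce, wrapped_key, None)
assert recovered == data_key
```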

Enabling encryption is extremely easy to do on any Datrium DVX system. Just issue the command "datastore encryption set --fips-mode validated" to enable FIPS 140-2 validated mode for your data. To verify, issue the show command: "datastore encryption show".
You can also verify in the DVX dashboard under durable capacity, where the green shield is. This will show that encryption is enabled with FIPS 140-2 compliance.

Now, some may ask: doesn't this mean that if I enable encryption on Datrium, data reduction like dedupe and compression goes away? Remember, Datrium implemented an always-on system when it comes to data reduction. In so doing, Datrium became the first converged platform to offer FIPS 140-2 validation without sacrificing data reduction from compression, dedupe, or erasure coding.

I'm blown away that Datrium has not only done the right thing by offering FIPS 140-2 validation out of the gate, but has done so without sacrificing performance or any of the data reduction technologies that customers love us for.

Additional reading: Datrium Blanket Encryption Whitepaper

My next post in this series will be about Datrium's global data efficiency.


Datrium Architecture - Differentiator Series

Since coming to Datrium over a month ago, I have been amazed at the level of talent and the product feature advancement that has been orchestrated in such a short time. This has kick-started me to write a series of blog posts (in no particular order) on why the Datrium architecture makes so much sense for the modern datacenter. I will be discussing key differentiators and features that make up some of my favorite parts of the Datrium DVX software platform.

Part 1: Split Architecture

One of the benefits of open convergence is that you are no longer tied to the traditional two-controller storage system, where scale is solely based on available storage and network resources. Datrium’s flexibility goes even beyond the traditional HCI stack (where compute and data operations share resources in the same appliance). By decoupling performance and data services from capacity, a whole new world of possibilities is introduced in the separation of compute and data storage.

It’s with this that Datrium’s split architecture pioneers a new breed of datacenter designs where flexibility is at the heart of what makes the DVX platform so remarkable. For example, you can...

•    Use commodity-based Flash resources in the host/server where it’s less expensive and more flexible for I/O processing. 
•    Use your own x86 hosts/servers with ESXi, RHV, or Docker as a platform of choice and flexibility for growth.
•    Upgrade performance easily with the latest and greatest flash technology in the hosts/servers. 
•    Lower east/west chatter and remove the vast majority of storage traffic from the network, so that continued scale can be realized and application performance isolation provides true data locality.
•    Scale Compute and Storage truly independently.
•    Take advantage of under-utilized CPU Cores for IO processing and data services (Compression, Dedupe, Erasure Coding, Encryption) on your hosts/servers.
•    Utilize stateless hosts while still achieving data integrity and fault and performance isolation between compute and data. No quorum is needed; the minimum number of hosts/servers is just one.
•    Get Secondary Storage in the same platform for a lower TCO.
•    Use multiple host-based storage controllers, since every host/server introduced into the DVX platform acts as one.

Storage Operations in a Split Architecture:

One example of these benefits in a split architecture is realized when disks die or disk errors (bit rot or latent sector errors) occur on a primary or secondary storage system. A rebuild operation is the process needed to bring any data system back to a healthy, fault-tolerant state. Every storage system needs a process to rebuild data when corruption occurs; however, rebuilds can have side effects. One of the most common consequences of any rebuild operation is the resource utilization needed to complete it successfully. During such an operation, your primary workloads can be slowed and hampered with higher latency.
Data rebuilds result in lower performance, and the time to finish a rebuild can be lengthy depending on resource availability, the architecture, and how much data must be processed to reach a healthy state again.
Another common storage operation is the rebalancing of data when additional capacity or nodes are added to a storage system. Rebalancing tasks can drain system resources, depending on the amount of data that needs to be rebalanced. This sometimes lengthy task is important to keep the pool of data available and to avoid hot spots.

Datrium’s architecture is based on a Distributed Log-Structured Filesystem. This filesystem provides a platform where workloads are not hindered by decreased performance during rebuilds or other system operations. The decoupling of performance and data services from capacity has given customers the freedom to finally realize the benefits of a true cloud-like experience right in their own datacenter. (More on this in a future post.) By moving I/O operations to the hosts and keeping data at rest in what we call a data node, we achieve the best I/O latency for applications while keeping data safe and resilient on durable/secondary storage. So, when problems or system operations occur on durable storage, our intelligent DVX software utilizes underused CPU cores on the hosts for these operations without stealing CPU resources from running workloads. You can think of your hosts/servers each as a separate storage controller, all working together to facilitate storage operations and system processes.

As you add more hosts/nodes to the DVX system, rebuild and/or rebalance operations get faster. During reconstruction of data, multiple hosts/servers can help with system operations. No host-to-host communication is needed; each host has its own task and operation, facilitating a faster, more efficient rebuild/rebalance on the Datrium data nodes.
One very cool example of the intelligence in the DVX software is the built-in QoS during a rebuild operation on the data node(s). It is based on the number of failures or the number of disks to rebuild. For example, if only one disk fails, fewer host resources are needed for the rebuild operation. This is a dynamic process that scales with the urgency of smaller or larger failures and errors.
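To illustrate the idea (a conceptual sketch under my own assumptions, not DVX internals): rebuild work is split across all hosts with no cross-host coordination, and the share of spare host CPU devoted to the rebuild scales with the number of failed disks, so a single-disk rebuild stays gentle while a larger failure gets more urgency. The partitioning scheme and throttle policy below are purely illustrative.

```python
def plan_rebuild(segments, hosts, failed_disks, max_disks=10):
    """Partition rebuild segments across hosts and pick a CPU throttle per urgency."""
    # More failures -> more urgency -> a larger share of spare host CPU (illustrative policy).
    cpu_share = min(1.0, failed_disks / max_disks)
    # Each host independently rebuilds its own slice; no host-to-host coordination is required.
    assignments = {host: segments[i::len(hosts)] for i, host in enumerate(hosts)}
    return cpu_share, assignments

cpu, work = plan_rebuild(segments=list(range(12)),
                         hosts=["host1", "host2", "host3"],
                         failed_disks=1)
print(cpu)    # 0.1 -- a single-disk failure gets a small share of spare CPU
print(work)   # {'host1': [0, 3, 6, 9], 'host2': [1, 4, 7, 10], 'host3': [2, 5, 8, 11]}
```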

Datrium took the best of traditional 3-tier and HCI architectures into a modern world where performance and system operations are no longer at odds. A customer can now utilize open convergence to achieve performance they never had on-premises and will never get in the cloud. Think of it as future-proofing your datacenter for years to come.

In part two I will discuss Datrium's unique and powerful encryption capabilities.

Joining Datrium

It’s with ebullience that I’m joining Datrium as their new Principal Systems Engineer. I’m a firm believer in God leading me to new endeavors, and this is no different. He has led me undoubtedly throughout my life, so I know there are great things in store in this new opportunity.

As some of you already know, it’s rare when an awe-inspiring company culture comes together with great technology! It’s with this combination of greatness that I look forward to exemplifying and demonstrating how Datrium’s DVX platform can solve real business problems in the enterprise. There are so many exciting things to share about Datrium, so stay tuned!!

For those that are unfamiliar with Datrium the company, here is a quick snapshot. 

Founded: 2012
Exited Stealth: 7/2015
Location: Sunnyvale, CA
Investors: NEA, Lightspeed (Angel Investors: Diane Greene, Mendel Rosenblum, Frank Slootman, Kai Li, Ed Bugnion)
Funding to Date: $110M

•    Brian Biles, ex Data Domain founder / VP Product Mgmt. 
•    Hugo Patterson, ex Data Domain Original Chief Architect, EMC Fellow
•    Sazzala Reddy, 2nd Data Domain CTO (employee #15) 
•    Ganesh Venkitachalam, ex VMware Principal Engineer
•    Boris Weissman, ex VMware Principal Engineer

Datrium offers a new take on modern convergence called Open Converged Infrastructure. The DVX platform supports VMware, Linux KVM, and bare-metal Docker, with host-based flash and appliance-based durable storage for cost-optimized secondary storage and archive-to-cloud capabilities.