How Can Database Users Benefit - PernixData FVP

I'm pleased to introduce you to Bala Narasimhan, VP of Products at PernixData. He has a wealth of knowledge around databases and has authored two patents on memory management in relational databases. It's my pleasure to have him featured in today's post. He is officially my first guest blogger! Enjoy!

Databases are a critical application for the enterprise and usually have demanding storage performance requirements. In this blog post I will describe how to understand the storage performance requirements of a database at the query level using database tools. I’ll then explain why PernixData FVP helps solve not only the database storage performance problem but also the database manageability problem that manifests itself when storage performance becomes a bottleneck. Throughout the discussion I will use SQL Server as an example database, although the principles apply across the board.

Query Execution Plans

When writing code in a language such as C++ one describes the algorithm one wants to execute. For example, implementing a sorting algorithm in C++ means describing the control flow involved in that particular implementation of sorting. This will be different in a bubble sort implementation versus a merge sort implementation and the onus is on the programmer to implement the control flow for each sort algorithm correctly.

In contrast, SQL is a declarative language. SQL statements simply describe what the end user wants to do. The control flow is something the database decides. For example, when joining two tables the database decides whether to execute a hash join, a merge join or a nested loop join. The user doesn’t decide this. The user simply executes a SQL statement that performs a join of two tables without any mention of the actual join algorithm to use. 

The component within the database that comes up with the plan for how to execute a SQL statement is usually called the query optimizer. The query optimizer searches the entire space of possible execution plans for a given SQL statement and tries to pick the optimal one. As you can imagine, picking the best plan out of all possible plans can be computationally intensive.

SQL’s declarative nature can be sub-optimal for query performance because the query optimizer might not always pick the best possible query plan. This is usually because it doesn’t have full information about a number of critical factors, such as the kind of infrastructure in place, the load on the system when the SQL statement is run, or the properties of the data. One example of where this can manifest is called Join Ordering. Suppose you run a SQL query that joins three tables T1, T2, and T3. What order will these tables be joined in? Will T1 and T2 be joined first, or T1 and T3? Maybe T2 and T3 should be joined first instead. Picking the wrong order can be hugely detrimental to query performance. This means that database users and DBAs usually end up tuning databases extensively. In turn this adds both an operational and a cost overhead.
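As a sketch of what join ordering looks like in practice, consider the hypothetical tables T1, T2, and T3 from above. In SQL Server the optimizer normally chooses the join order itself; the FORCE ORDER hint (used here purely for illustration, with placeholder table and column names) makes it honor the order in which the tables are written, which is one way to see how much the chosen order matters:

-- Hypothetical three-table join; T1, T2, T3 and their columns are placeholders.
-- Without a hint, the optimizer is free to join in whatever order it estimates is cheapest.
select T1.a, T3.c
from T1
join T2 on T1.a = T2.a
join T3 on T2.b = T3.b
option (force order);  -- forces the joins to run in the written order: (T1 join T2) join T3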

Query Optimization in Action

Let’s take a concrete example to better understand query optimization. Below is a SQL statement from a TPC-H-like benchmark.

select top 20
    c_custkey,
    c_name,
    sum(l_extendedprice * (1 - l_discount)) as revenue,
    c_acctbal,
    n_name,
    c_address,
    c_phone,
    c_comment
from customer, orders, lineitem, nation
where c_custkey = o_custkey
  and l_orderkey = o_orderkey
  and o_orderdate >= ':1'
  and o_orderdate < dateadd(mm, 3, cast(':1' as datetime))
  and l_returnflag = 'R'
  and c_nationkey = n_nationkey
group by c_custkey, c_name, c_acctbal, c_phone, n_name, c_address, c_comment
order by revenue;

The SQL statement finds the top 20 customers, in terms of their effect on lost revenue for a given quarter, who have returned parts they bought. 

Before you run this query against your database you can find out what query plan the optimizer is going to choose and how much it is going to cost you. Figure 1 depicts the query plan for this SQL statement from SQL Server 2014. [You can learn how to generate a query plan for any SQL statement on SQL Server at https://msdn.microsoft.com/en-us/library/ms191194.aspx]

Figure 1
You should read the query plan from right to left. The direction of the arrows depicts the flow of control as the query executes. Each node in the plan is an operation that the database will perform in order to execute the query. You’ll notice that this query starts off with two scans of the tables involved in the query. These scans are I/O intensive and are usually throughput bound. In data warehousing environments the block sizes can be quite large as well.
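If you want to generate a plan like this yourself without executing the statement, you can ask SQL Server for the estimated plan in XML form. SET SHOWPLAN_XML is a standard SQL Server option; the short query below is just a stand-in for the longer TPC-H-like statement above:

set showplan_xml on;
go
-- With SHOWPLAN_XML on, SQL Server returns the estimated execution plan
-- as XML instead of actually running the statement.
select count(*) from lineitem where l_returnflag = 'R';
go
set showplan_xml off;
go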

A SAN will have serious performance problems with these scans. If the data is not laid out properly on disk, you may end up with a large amount of random I/O. You will also get inconsistent performance depending on what else is going on in the SAN when these scans are happening. The SAN controller will also limit overall performance.

The query begins by performing scans on the lineitem table and the orders table. Note that the database tells you what percentage of time it thinks it will spend in each operation within the statement. In our example, the database thinks it will spend about 84% of the total execution time on the Clustered Index Scan on lineitem and 5% on the scan of orders. In other words, 89% of the execution time of this SQL statement is spent in I/O operations! It is no wonder, then, that users are wary of virtualizing databases such as these.

You can get even more granular information from the query optimizer. In SQL Server Management Studio, if you hover your mouse over a particular operation, a yellow pop-up box will appear showing some very interesting statistics. Below is an example of the data I got from SQL Server 2014 when I hovered over the Clustered Index Scan on the lineitem table that is highlighted in Figure 1.

Notice how Estimated I/O Cost dominates Estimated CPU Cost. This again is an indication of how I/O bound this SQL statement is. You can learn more about the fields in the figure above in the SQL Server documentation.
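If you prefer measured numbers over estimates, SQL Server's SET STATISTICS IO and SET STATISTICS TIME options report the actual reads and CPU time once a statement has run. A minimal sketch, again using a short stand-in query rather than the full statement above:

set statistics io on;
set statistics time on;

-- The Messages tab now reports logical and physical reads per table,
-- plus CPU and elapsed time, for every statement that runs.
select count(*) from lineitem where l_returnflag = 'R';

set statistics time off;
set statistics io off;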

An Operational Overhead

There is a lot one can learn about one’s infrastructure needs by understanding the query execution plans that a database generates. A typical next step after studying the query execution plans is to tune the query or the database for better performance. For example, one may build new indexes or completely rewrite a query. One may decide that certain tables are hit frequently and should be stored on faster storage or pinned in RAM. Or one may decide to simply overhaul the infrastructure entirely.
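For instance, an index aimed at the lineitem scan in the plan above might look something like the following. The column choices here are purely illustrative, not a recommendation for this workload:

-- Hypothetical covering index for the TPC-H-like query above.
create nonclustered index ix_lineitem_returnflag
on lineitem (l_returnflag, l_orderkey)
include (l_extendedprice, l_discount);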

All of these result in operational overheads for the enterprise. For starters, this model assumes someone is constantly evaluating queries, tuning the database, and making sure performance isn’t impacted. Second, this model assumes a static environment: it assumes that the database schema is fixed, that all the queries that will be run are known beforehand, and that someone is always on hand to study the queries and tune the database. That’s a lot of rigidity in this day and age, where flexibility and agility are key requirements for the business to stay ahead.

A Solution to Database Performance Needs Without the Operational Overhead

What if we could build a storage performance platform that satisfies the performance requirements of the database irrespective of whether the query plans are optimal, whether the schema design is appropriate, or whether the queries are ad hoc? Such a platform would take away much of the sometimes excessive tuning required to achieve acceptable query performance. The result is an environment where SQL is executed as needed by the business, and the storage performance platform provides the performance required to meet the business SLA irrespective of the query plans.

This is exactly what PernixData FVP is designed to do. PernixData FVP decouples storage performance from storage capacity by building a server-side performance tier using server-side flash or RAM. This means that all the active I/O coming from the database, both reads and writes, whether sequential or random, and irrespective of block size, is satisfied at the server layer by FVP, right next to the database. You are no longer limited by how data is laid out on the SAN, by the controller within the SAN, or by what else is running on the SAN when the SQL is executed.

This means that even if the query optimizer generates a suboptimal query plan resulting in excessive I/O, we are still okay, because all of that I/O will be served from server-side RAM or flash instead of network-attached storage. In a future blog post we will look at a query that generates large intermediate results and explain why a server-side performance platform such as FVP can make a huge difference.

 

FVP Management Database Design Decisions

When deciding which database model to use for FVP, it’s important to understand your goals in using FVP and the growth potential for the platform. Upon installation, the FVP management service creates and connects to a SQL Server database named “prnx”. This database is responsible for receiving, storing, and presenting performance data. All of the time series data behind the performance charts displayed in the FVP UI is stored in this database, along with management metadata related to configuration. Keep in mind, however, that neither the management server nor the FVP database needs to be operational for read/write acceleration to continue during downtime.

The PernixData management server is also responsible for managing fault domain configurations and the host peer selection process for write-back fault tolerance. This information is also kept current in the “prnx” database so that any host or cluster changes are reflected accurately in FVP policy changes. This is why it’s imperative that FVP maintain a connection with the vCenter server, so that inventory information can be collected and kept up to date.

It was decided early in the FVP design phase not to reinvent the wheel, and instead to take advantage of the robust operations already available in SQL Server. One of these decisions was to put SQL rollup jobs into practice for FVP. The SQL rollup job is responsible for keeping only the currently valuable data while providing an average for historical reference. Using the SQL rollup process lowers the latency and overhead FVP would incur if it implemented the averaging operations itself. Data stored in SQL is never moved or massaged outside the context of SQL, which provides security and performance benefits to FVP as an acceleration platform.

Since part of the SQL Server’s responsibility is to store FVP performance data, it’s important to store only as much data as is relevant and useful. Currently the FVP management server requests 20-second performance samples for all FVP-clustered VMs on each enabled host. This is run using multiple threads so that multiple CPU cores can be utilized for efficiency. Over a 24-hour period a large amount of data can accumulate, so FVP has a purging schedule that runs every hour and removes all 20-second samples older than 24 hours. This only happens after a SQL rollup has completed for each minute and hour time period, averaging the 20-second samples.

Every minute, the three 20-second samples are averaged. At the one-hour mark a SQL rollup job runs, and on completion FVP purges all 20-second samples older than 24 hours. To view the 20-second samples before they are rolled up, look at performance statistics for a time range of one hour or less in the FVP performance UI. After the one-hour interval the 20-second samples have been rolled up into averages, and they are permanently removed by the purging operation 24 hours later.
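The exact tables and jobs FVP uses aren't published, but conceptually the rollup and purge steps look something like the following sketch (table and column names here are hypothetical):

-- Hypothetical sketch of a rollup plus purge cycle; not FVP's actual schema.
-- 1) Average the 20-second samples into per-minute rows.
insert into perf_minute_avg (vm_id, sample_minute, avg_value)
select vm_id,
       dateadd(minute, datediff(minute, 0, sample_time), 0) as sample_minute,
       avg(metric_value)
from   perf_20s_samples
group by vm_id, dateadd(minute, datediff(minute, 0, sample_time), 0);

-- 2) Purge raw 20-second samples older than 24 hours.
delete from perf_20s_samples
where  sample_time < dateadd(hour, -24, getdate());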

To determine the proper SQL capacity for this amount of data, one needs to know how many VMs will be accelerated with FVP and what the potential is for continued expansion. Currently over 80% of the “prnx” database is used to store performance-related metrics, and this 80% also makes up the majority of data churn within the platform. This means sizing for that 80% will provide ample room for FVP’s operations.

The PernixData Management Server inserts one row (record) into the database table every 20 seconds for each VM. Each VM can be approximated to store about 1.6 KB of data every 20 seconds, a figure that also accounts for the index size for each VM that is referenced.
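As a rough back-of-envelope check on the raw samples alone (these are the 20-second rows that get purged after 24 hours, so they set the steady-state footprint rather than long-term growth), the arithmetic works out to roughly 6-7 MB per VM per day:

-- Raw 20-second samples only: 3 rows per minute * 60 * 24 = 4,320 rows per VM per day.
-- At ~1.6 KB per row that is roughly 6.75 MB per VM per day; these rows are purged
-- after 24 hours, so long-term growth comes from the much smaller rolled-up averages.
select 3 * 60 * 24                                       as rows_per_vm_per_day,
       cast(3 * 60 * 24 * 1.6 / 1024.0 as decimal(6, 2)) as mb_per_vm_per_day;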


If you are considering SQL Express with its 10 GB database size limit, knowing the effective data added each day becomes an important piece of information; this design decision could hamper long-term retention or the acceleration of a large number of VMs. Whether SQL Express is chosen or not, it’s a best practice to either use the “Simple” recovery model or schedule regular SQL backups so that log truncation can limit the continued growth of the SQL log.
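Both settings are straightforward to check or change with standard T-SQL; for example (run against the “prnx” database, adjusting names to your environment):

-- Switch the FVP database to the simple recovery model so the log does not grow unchecked.
alter database [prnx] set recovery simple;

-- Check how much space the database is currently using.
use [prnx];
exec sp_spaceused;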

Knowing the approximate amount of data added to the database each day for a given number of VMs tells you when you would reach the 10 GB capacity of SQL Express. For example, with 100 VMs accelerated by FVP it will take about 400 days, but with 1,000 VMs the limit can be reached in as little as 40 days!
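Working backwards from those figures (10 GB in roughly 400 days for 100 VMs) implies a long-term growth rate of roughly 0.25 MB per VM per day after rollup and purging. That rate is derived from the numbers in this post rather than a published constant, but it makes the scaling easy to see:

-- Implied post-rollup growth of ~0.25 MB per VM per day (derived from the 400-day figure).
-- days to fill 10 GB = 10 * 1024 MB / (VM count * 0.25 MB per day)
select cast(10 * 1024.0 / (100  * 0.25) as int) as days_for_100_vms,   -- ~410 days
       cast(10 * 1024.0 / (1000 * 0.25) as int) as days_for_1000_vms;  -- ~41 days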

To understand how the UI displays averages based on the samples and the purging process, below is a chart that illustrates the number of samples taken and the average shown for each displayed time range. Keep in mind that whether you choose a custom time range or use the predefined time ranges in the FVP UI, the result is based on the same samples and averages indicated in the chart below.

As you can see, it’s important not only to understand the metrics you are referencing but also to design appropriately for database sizing and retention, taking into account PernixData FVP’s growth within your virtual environment.