The Wayback Machine - https://web.archive.org/web/20161112183126/http://java.sys-con.com:80/node/3949817


Optimizing VMware Environments for Peak SQL Server Performance

What if it were possible to have both high availability and high performance without the high cost and complexity?


VMware configurations designed to provide high availability often make it difficult to achieve satisfactory performance required by mission-critical SQL Server applications. But what if it were possible to have both high availability and high performance without the high cost and complexity normally required?

This article explores two requirements for achieving both for SQL Server applications while reducing capital and operational expenditures. The first is to implement a storage architecture within VMware environments designed for both high availability (HA) and high performance (HP); the second is to tune that HA/HP architecture for peak performance.

Building the Foundation with an HA/HP Architecture
SQL Server administrators have many options for implementing HA in a VMware environment. VMware offers vSphere HA, Microsoft offers Windows Server Failover Clustering as a general-purpose HA solution, and SQL Server has its own HA capabilities with AlwaysOn Failover Clusters and AlwaysOn Availability Groups. Then there are the many third-party vendors that offer solutions purpose-built for HA and disaster recovery.

The problem is that many of these HA solutions lack full application availability protection, reduce operational flexibility or have an adverse impact on performance. Performance overhead is caused by the layers of abstraction in virtualized servers complicating the way virtual machines (VMs) interface with physical devices, including in a Storage Area Network (SAN) where the storage is also virtualized. Both VMware HA and AlwaysOn Availability Groups fall short in protecting the entire application stack and all application data during failover. And while Windows Server Failover Clustering is the ideal solution to fully address these issues, VMware imposes certain restrictions that reduce IT flexibility, rule out the highest-performing configurations and limit the mobility of VMs configured in the cluster. Let's look at the issues.

To enable compatibility with certain SAN and other shared-storage features, such as I/O fencing and SCSI reservations, vSphere utilizes a technology called Raw Device Mapping (RDM) to create a direct link through the hypervisor between the VM and the external storage system. The requirement for using RDM with shared storage exists for layering any HA clustering technology on a VMware environment, including a SQL Server Failover Cluster using Windows Server Failover Clustering (WSFC).

RDM makes the storage appear to the guest operating system as if it were a virtual disk file in a VMware Virtual Machine File System (VMFS) volume. For this reason, the mapping is able to maintain 100 percent compatibility with all SAN commands, making virtualized storage access seamless to both the operating system and applications.

RDM can be made to work effectively, but achieving the desired result is not always easy, and may not even be possible. For example, RDM does not support disk partitions, so it is necessary to use "raw" or whole LUNs (logical unit numbers), and mapping is not available for direct-attached block storage and certain RAID devices. And because RDM interferes with VMware features that employ virtual machine disk (VMDK) files, SQL Server administrators may be unable to fully utilize desirable features like snapshots, VMware Consolidated Backup (VCB), templates and vMotion.

But the real problem for transaction-intensive applications like SQL Server is the inability to utilize performance-enhancing Flash Read Cache when RDM is configured. Both HA and HP for SQL Server applications in a VMware environment are best achieved using a SANless configuration that eliminates the need for shared SAN storage. In SANless configurations both the compute and storage resources are fully redundant (with no single points of failure and automatic failover), and they provide the additional flexibility to achieve disaster protection by geographically dispersing redundant resources.

SANless HA/HP architectures make it possible to create a shared-nothing, hardware-agnostic, single-site or multi-site cluster. Some solutions also make it possible to implement LAN/WAN-optimized, real-time block-level replication in either a synchronous or asynchronous manner. In effect, these solutions are capable of creating a RAID 1 mirror across the network, automatically changing the direction of the data replication (source and target) as needed after failover and failback.
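The replication behavior described above can be sketched in a few lines of Python. This is a toy model only: a real product implements this as a kernel-level block driver, and the node names, mode flags and in-memory block stores here are all illustrative assumptions.

```python
# Toy model of network RAID-1 replication with failover direction
# switching. "sync" mode acknowledges a write only after the remote
# copy is applied; "async" queues it for later replication.

class MirroredVolume:
    def __init__(self, primary, secondary, mode="sync"):
        self.nodes = {primary: {}, secondary: {}}   # node -> block store
        self.source, self.target = primary, secondary
        self.mode = mode
        self._pending = []                          # async replication queue

    def write(self, block_id, data):
        self.nodes[self.source][block_id] = data    # local write
        if self.mode == "sync":
            self.nodes[self.target][block_id] = data  # remote copy before ack
        else:
            self._pending.append((block_id, data))    # replicate later

    def drain(self):
        # Async mode: flush queued blocks to the current target.
        for block_id, data in self._pending:
            self.nodes[self.target][block_id] = data
        self._pending.clear()

    def failover(self):
        # Swap the direction of replication (source <-> target).
        self.source, self.target = self.target, self.source

vol = MirroredVolume("node-a", "node-b")
vol.write(0, b"sqldata")        # replicated to node-b synchronously
vol.failover()                  # node-b is now the source
vol.write(1, b"more")           # replicates back to node-a
```

After failover, writes land on the former target first and replicate back the other way, which is the automatic source/target swap the paragraph describes.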

Just as importantly, a SANless cluster is often easier to implement and operate with both physical and virtual servers. For example, for solutions that are integrated with WSFC, administrators are able to configure high-availability clusters using a familiar feature in a way that avoids the use of shared storage as a potential single point of failure. Once configured, most solutions then automatically synchronize the local storage in two or more servers (in one or more data centers), making them appear to WSFC as a local or shared storage device.

A well-designed SANless HA/HP solution can actually be less expensive than traditional HA configurations owing to savings in two areas. The first involves avoiding the high cost associated with creating a fully redundant SAN across the LAN and WAN. Simply put: HA configurations using local storage with hard disk drives (HDDs) and/or solid state drives (SSDs) are able to deliver superior performance at a lower cost. The second area involves licensing. Because these solutions are designed to deliver carrier-class HA for AlwaysOn Failover Clusters in SQL Server Standard Edition, there is no need for AlwaysOn Availability Groups in the more expensive Enterprise Edition.

The performance advantage of a SANless HA/HP solution is borne out by benchmark testing, which reveals a 60-70 percent performance penalty associated with using SQL Server AlwaysOn Availability Groups to replicate data in a SAN environment. These test results also show how the use of local storage in an HA configuration is able to perform nearly as well as an unprotected application. To provide an accurate comparison, each alternative utilized identically-performing HDDs. The use of SSDs can deliver an even more significant performance advantage over the SAN-based AlwaysOn Availability Group configuration.

Benchmark tests comparing SQL Server's AlwaysOn Availability Groups with SANless clusters show the throughput advantage possible with replication techniques designed for HA/HP.

The SANless cluster tested is able to deliver this impressive performance with complete application and data transparency because its advanced architecture implements a low-level, high-efficiency driver that sits immediately below NTFS. As writes occur on the primary server, the driver writes one copy of the block to the local VMDK and another copy simultaneously across the network to the VMDK on the remote secondary server.
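One implication of this synchronous dual-write path is that a write can be acknowledged no sooner than the slower of the two copies. A back-of-the-envelope model, using illustrative figures rather than measured ones:

```python
# Rough latency model for a synchronous dual-write: the acknowledgement
# waits for whichever copy finishes last -- the local write, or the
# network round trip plus the remote write. Figures below are
# illustrative assumptions, not measurements of any product.

def sync_write_latency_ms(local_ms, remote_ms, network_rtt_ms):
    """Latency to acknowledge one synchronously mirrored write."""
    return max(local_ms, network_rtt_ms + remote_ms)

# Example: 0.2 ms local SSD write, 0.2 ms remote write, 0.1 ms LAN RTT
latency = sync_write_latency_ms(0.2, 0.2, 0.1)   # dominated by the remote path
```

The model makes plain why a low-latency interconnect between nodes matters as much as fast local storage in this design.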

Beyond performance, SANless clusters have many other advantages. For example, those that use block-level replication technology that is fully integrated with WSFC are able to protect the entire SQL Server application instance, including all databases, logons and agent jobs, all in an integrated fashion. Contrast this approach with AlwaysOn Availability Groups, which protect only the SQL Server databases themselves, not other disk-resident data that may be application-specific.

Tuning the Configuration for Peak Performance
Just as virtualization's layers of abstraction make accessing storage more complex, so too do they obscure how the physical resources are performing. This can make optimizing resources for peak performance a never-ending exercise in trial-and-error.

The trial-and-error process is nearly impossible to avoid with traditional application performance management tools that utilize thresholds of discrete events to isolate performance issues. But individual thresholds are unable to account for the interrelated nature of resources in virtualized environments, where a change to one often has a significant impact on another. So even when these tools alert IT to a performance issue, they are incapable of providing meaningful insight into the issue or guidance toward its resolution.

Advanced machine learning analytics (MLA) software overcomes these and other limitations by automatically and continuously learning the many complex behaviors and interactions among all interrelated resources. Self-learning and automatic adaptation are what make it possible for MLA-based solutions to provide a more accurate means of identifying the root cause(s) of performance issues and providing actionable recommendations for resolving them.

Most machine learning analytics systems work by aggregating, normalizing, and then correlating and analyzing hundreds of thousands of data points from numerous resources across network, storage, compute and application layers. While gathering and analyzing this wealth of data, the MLA system learns what constitutes normal behavior patterns, thereby establishing a baseline for detecting anomalies and finding root causes. Some MLA systems also enable human supervision to accelerate the learning process and improve results.
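The baseline-then-detect idea at the core of this approach can be illustrated with a minimal sketch. Real MLA products correlate hundreds of thousands of data points across many metrics; this shows the principle on a single metric, with made-up sample values and an assumed z-score threshold.

```python
# Minimal sketch: learn a per-metric baseline (mean and standard
# deviation) from historical samples, then flag new samples that
# deviate beyond a z-score threshold. One metric only, for clarity.
from statistics import mean, stdev

def learn_baseline(samples):
    """Establish 'normal' behavior from historical observations."""
    return mean(samples), stdev(samples)

def anomalies(samples, baseline, threshold=3.0):
    """Return samples more than `threshold` standard deviations out."""
    mu, sigma = baseline
    return [x for x in samples if abs(x - mu) > threshold * sigma]

# Disk latency (ms) under normal load, then a contention spike
history = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3]
baseline = learn_baseline(history)
flagged = anomalies([5.1, 4.9, 22.0], baseline)   # only the spike is flagged
```

The human supervision mentioned above maps naturally onto tuning parameters like the threshold, or confirming whether a flagged sample is a true anomaly.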

In addition to identifying root causes, some MLA systems are able to simulate and predict the impact of making changes in resources and configurations. This is key to anticipating and avoiding performance or reliability issues rather than reacting to problems after they occur. In contrast, traditional monitoring tools are reactive by design, built primarily to deliver alerts on current events within the infrastructure. These tools are manually intensive, involving time-consuming and error-prone approaches: IT administrators must run multiple reports and then manually compare the results to find and fix under- and over-provisioning of vCPU and vMemory resources.

MLA systems can identify a wide range of performance issues, involving compute or storage contention, or incorrectly configured VMs, as well as problems arising from migrated VMs, newly provisioned VMs, "noisy neighbors," misconfigured applications or hardware degradation. Most MLA systems also help improve efficient use of resources by identifying idle VMs or wasted storage.
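The idle-VM and right-sizing check mentioned above can be sketched as a simple classification over utilization samples. The thresholds and VM names here are assumptions for the sketch, not defaults of any product; real systems weigh far more signals than peak CPU.

```python
# Illustrative right-sizing check: classify each VM from its observed
# vCPU utilization (percent) over a window. Thresholds are assumed
# values chosen for the example.

def classify_vm(cpu_samples, idle_pct=5, overprov_pct=30):
    peak = max(cpu_samples)
    if peak < idle_pct:
        return "idle"              # candidate for decommissioning
    if peak < overprov_pct:
        return "over-provisioned"  # candidate for fewer vCPUs
    return "ok"

fleet = {
    "sql-prod-01": [62, 75, 81, 70],
    "sql-dev-03":  [2, 1, 3, 2],
    "report-02":   [12, 18, 22, 15],
}
report = {vm: classify_vm(samples) for vm, samples in fleet.items()}
```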

SQL administrators often employ host-based caching (HBC), all-flash arrays and/or hybrid storage to improve performance. In SAN environments, HBC normally delivers the greatest improvements in throughput performance by maximizing I/O operations per second (IOPS) for some, but not all applications. And therein lies the challenge.

The improvement in performance is best when the cache is able to contain sufficient "hot" data to have a meaningful increase in IOPS. But testing every application that might fit such criteria with different HBC configurations in an attempt to quantify the improvement is an arduous endeavor in organizations running hundreds or thousands of applications.

Because machine learning is able to evaluate the many variables involved, MLA systems make it possible to identify those applications that would benefit the most from host-based caching. Most systems are able to recommend a cost-effective HBC configuration, and some are even able to estimate the likely increase in IOPS, enabling SQL administrators to prioritize the implementation effort.
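The kind of IOPS estimate described above can be reduced to a back-of-the-envelope model: effective read latency blends cache hits and misses, and achievable IOPS per outstanding I/O is the reciprocal of that latency. The latencies and hit ratio below are illustrative assumptions, not vendor figures.

```python
# Back-of-the-envelope estimate of host-based caching benefit.
# Assumed latencies: 100 us for a cache hit, 5,000 us for an HDD read.

def estimated_read_iops(hit_ratio, cache_us=100, disk_us=5000):
    """IOPS per outstanding I/O at a given cache hit ratio."""
    effective_us = hit_ratio * cache_us + (1 - hit_ratio) * disk_us
    return 1_000_000 / effective_us

cold = estimated_read_iops(0.0)   # no cache: every read hits disk
hot = estimated_read_iops(0.9)    # 90% of the working set fits in cache
speedup = hot / cold              # roughly 8.5x under these assumptions
```

The steep nonlinearity is the point: a workload whose "hot" data fits in cache sees a large IOPS gain, while one with a low hit ratio barely improves, which is why identifying the right candidate applications matters.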

Conclusion
Peak performance is impossible to achieve on a shaky foundation, so it is critically important to make certain the infrastructure's architecture is designed for both high availability and high performance. But as with most things, the SQL Server performance devil is in the details of the many physical resource configurations throughout the HA/HP infrastructure. By taking the guesswork out of performance tuning, machine learning analytics makes it easier than ever to achieve peak performance.

Is your VMware infrastructure delivering satisfactory performance for all of your SQL Server applications? You're in good company if the answer is no. The recommendations made here are easy to implement in a development or pilot environment, so there is little to lose and much to gain by giving them a try. And because most vendors today offer free trials of their performance-tuning tools, there is also zero financial risk in trying.

More Stories By Tony Tomarchio

Tony Tomarchio is the Director of Field Engineering for SIOS Technology. He is responsible for defining and delivering technical pre-sales services, support and best practices to SIOS customers, prospects and partners. He has more than a decade of experience providing systems management and high availability solutions to enterprise customers. Prior to joining SIOS, he served as the Global Sales Engineering lead for the Oracle systems management practice. Tony joined Oracle through the acquisitions of Sun Microsystems and Aduva, Inc., where he served as the lead Sales Engineer / Technical Account Manager and played a critical role in product adoption and evolution.

