By Pete Whitney | October 26, 2012

Recently FireScope Inc. introduced the general availability of its Stratis product. Stratis brings all of the FireScope Unify capabilities to the cloud, with the added advantage of a new architecture that delivers near-infinite scalability. Moreover, the new Stratis architecture provides scalability at all application layers, including its back-end operations, which were newly designed to leverage the benefits of MongoDB. In this article we discuss several of the architecture choices made as part of this effort, in the hope that others might benefit from the research and analysis performed to bring this product to market.
As background, a functioning FireScope deployment has the ability to gather metrics from all forms of existing IT assets, normalize the gathered metrics, provide historical analysis of those metrics, and, most importantly, provide service views for worldwide operations that are unparalleled in the IT industry. In the early phases of designing the Stratis product, FireScope undertook significant research into the scalable persistence architectures that were production ready at the time of this effort. FireScope ultimately chose MongoDB for its ability to scale and its flexibility in supporting an easy transition from a relational persistence model to a NoSQL model. While researching MongoDB, FireScope took the time to understand the application impact of the following architecture facets:
- Data mapping technologies
- Minimal field retrieval vs full document retrieval
- Data aggregation
- Early space allocation
In this article we detail each of the above-mentioned research efforts and discuss the impact that our subsequent choices had on the FireScope Stratis product.
Application performance was a key driver in all research activities. Even though we were deploying these new application elements to the cloud, ignoring the importance of performance would mean more resources would be needed to get the job done. It's also worth noting that not all applications have the same considerations, so what may be an appropriate technology or architecture choice for FireScope Stratis might not be the appropriate choice for your application. With that said, let's address these research efforts in more detail.
Data Mapping Technologies
The FireScope Stratis application accesses persistent storage via Java and PHP. As a result, we needed to make persistence access choices that would be compatible between Java and PHP. While Java and PHP were both requirements, the main performance-driven consideration was access via Java. In considering how to get information into and out of the database with Java, FireScope researched access using the following two approaches:
- Java Mongo driver with an in-house developed DAO layer
- Spring Data
We built narrowly focused prototype access solutions using both of these options. We saved and retrieved the same large graph of objects and compared the relative performance for each approach. One of the key findings in this analysis was the performance impact of "single binding" versus "double binding" of retrieved data.
When data is returned via the MongoDB Java driver, each document is returned in the form of a HashMap, where the fields of the persisted document form the keys of the HashMap and the corresponding values associated with each field are stored as HashMap values. FireScope designed its domain model to use getters and setters that simply access the appropriate field in the HashMap and ensure that each corresponding field has the correct Java type. In this model there is no additional overhead to bind each field to a corresponding Java field; we simply reference the data in the HashMap. We refer to this model as "single binding" because the only binding performed is that of the Mongo Java driver.
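As a rough illustration of this "single binding" approach, the following sketch wraps the DBObject (HashMap) returned by the Mongo Java driver and exposes typed getters and setters over it. The class and field names are hypothetical and not FireScope's actual code:

import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;

// Hypothetical "single binding" domain object: getters and setters read and write
// the DBObject returned by the Mongo Java driver directly, so no reflective
// re-binding into separate Java member fields is performed.
public class MetricDocument {

    private final DBObject doc;

    public MetricDocument(DBObject doc) {
        this.doc = (doc != null) ? doc : new BasicDBObject();
    }

    public String getRefId() {
        Object v = doc.get("ref_id");
        return (v != null) ? v.toString() : null;   // enforce the expected Java type
    }

    public void setRefId(String refId) {
        doc.put("ref_id", refId);
    }

    public long getTime() {
        Object v = doc.get("time");
        return (v instanceof Number) ? ((Number) v).longValue() : 0L;
    }

    public void setTime(long time) {
        doc.put("time", time);
    }

    // The wrapped DBObject is handed straight back to the driver on save.
    public DBObject toDBObject() {
        return doc;
    }
}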
By contrast, when Spring Data is used to render a document from MongoDB all fields in the HashMap returned by the Mongo Java driver are subsequently bound to a member field in the appropriate Java object. This binding is performed using reflection during the object retrieval process. We refer to this model as "double binding" because the initial HashMap rendering is then reflectively bound to the appropriate Java object fields and the initial HashMap is subsequently discarded.
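For contrast, here is a minimal Spring Data MongoDB sketch of the same document; the entity and field names are again assumptions. Each entry in the driver's HashMap is reflectively copied into a Java member field during retrieval, which is the "double binding" step described above:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Hypothetical Spring Data mapping: the driver's HashMap is converted, via
// reflection, into the member fields of this class and then discarded.
@Document(collection = "metrics")
public class Metric {

    @Id
    private String id;

    @Field("ref_id")
    private String refId;

    private long time;
    private String value;

    // Getters and setters omitted for brevity.

    // Retrieval passes through the driver binding and then through Spring Data's
    // reflective converter before a Metric instance is returned.
    public static Metric findByRefId(MongoTemplate template, String refId) {
        return template.findOne(Query.query(Criteria.where("ref_id").is(refId)), Metric.class);
    }
}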
In our comparative analysis we found that the "double binding" process used by Spring Data carried with it a performance overhead of greater than 2X but less than 4X. These comparative results were derived from multiple runs using each technology retrieving and saving the same large data graph on the same hardware. Furthermore, we alternated between technology choices in order to prevent differences in class loading, network, CPU, disk, and garbage collection from obscuring the analysis results.
Please do not take from the above that I have some issue with Spring Data. I absolutely love Spring, and nearly everything they do is 100% top notch! It just so happens that in this instance our performance-centric considerations directed us away from the use of Spring Data for FireScope's Stratis back-end operations. We do however use Spring in nearly every other area of the FireScope Stratis product. As a final thought, we also briefly considered the use of Morphia, but due to time constraints we never completed a comparative analysis using Morphia.
Minimal Field Retrieval
One of the key performance-impacting areas of the FireScope Stratis product is the data normalization engine. Every metric retrieved by FireScope passes through this engine, and as a result the ability to do more with less is critically important to FireScope. In an effort to verify our architecture choices, FireScope performed another analysis comparing the relative performance of retrieving all fields of a queried document against an alternative scenario where only one-fourth of the document's fields were retrieved. The intent here is that many use cases do not need all of the data for a given object. Of course we knew that reducing the bandwidth between the database servers and the application servers would be a good thing, but being new to Mongo we weren't sure if the overhead of filtering some fields from the document would outweigh the benefits of the reduced bandwidth between the servers.
In this analysis we set up long-running retrieve / save operations. Once again, we alternated between retrieve / save operations where the full document was passed and retrieve / save operations where the one-fourth populated document was passed. Alternation was used to prevent the impact of class loading, network, CPU, disk, and garbage collection from obscuring the analysis results. When the one-fourth populated document was used, we specified a set of fields for Mongo to retrieve. For the full document no field specification was provided, and as a result the full document was retrieved.
The analysis results indicated an overwhelming 9X performance benefit to using limited field retrieval. But be aware that limited field retrieval also has its downside: if other developers on your team are not keenly aware that the object they just queried for might not have all of its fields populated, application defects can easily result from this approach. To avert such defects, FireScope leverages an extensive unit testing, functional testing, and peer review / test process.
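As a rough sketch of limited field retrieval with the Mongo Java driver, the following query projects only the fields a given use case needs; the collection handle and field names are assumptions for illustration, not FireScope's schema:

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;

public class MinimalFieldQuery {

    // Retrieve only ref_id, time, and value rather than the full document.
    public static DBCursor findRecentValues(DBCollection metrics, String refId) {
        DBObject query = new BasicDBObject("ref_id", refId);

        // Projection document: a value of 1 includes a field; unlisted fields
        // (other than _id) are not sent back from the database server.
        DBObject fields = new BasicDBObject("ref_id", 1)
                .append("time", 1)
                .append("value", 1);

        return metrics.find(query, fields);
    }
}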
Data Aggregation
A portion of this section is based on ideas from a Foursquare Labs blog post. We acknowledge and thank Foursquare Labs Inc. for its contributions.
The suggestion offered in the blog is to aggregate a series of historical entries into a single document, rather than creating a separate document for each historical record. The motivation for aggregation is to improve the locality of associated information and, as a result, improve its future access time. While FireScope system performance is not driven by user access, the system does rely extensively on aggregated historical metrics collected throughout a day, and we leveraged aggregation to achieve improved locality.
What was not discussed in the Foursquare Labs blog was a second and equally significant benefit of aggregation: a huge reduction in the size of the index for FireScope's historical records. For those not familiar with Mongo, it is important to understand that Mongo attempts to keep all indexes in memory for fast access. As a result, any reduction in the size of an index allows Mongo to keep more data in memory, which improves overall system performance.
For a better understanding, consider the following two storage scenarios, in which the reference id, time stamp, and value of several collected metrics are stored using alternative approaches:
- Option 1: Collected metrics are simply added to a collection, which is indexed on the ref_id + time fields.
  { ref_id : ABC123, time : 1336780800, value : XXX }
  { ref_id : ABC123, time : 1336780800, value : ZZZ }
- Option 2: All collected metrics for one day are added to an array in a single document, which is indexed on the ref_id and midnight fields.
  { ref_id : ABC123, midnight : 1336780800, values : [ { time : 1336780805, value : XXX }, ... ] }
Note that for option 1 the ref_id and time of every collected sample form an entry in the index. If the system collects this metric once every 5 minutes, then it would collect 288 ref_id, time, value entries in one day. If each entry is added to an index, then the corresponding index size will be significantly larger for option 1 than for option 2, because option 2 does not index the actual collection time but only midnight of the current day. As a result, the index size is reduced nearly 300 to 1 by the aggregation of data, with no loss of information.
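A minimal sketch of option 2 with the Mongo Java driver appears below; the collection and field names follow the example above, but the code itself is illustrative rather than FireScope's implementation:

import java.util.ArrayList;
import java.util.List;

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;

public class DailyHistoryExample {

    // One document holds a whole day of samples, so the index covers one
    // entry per metric per day instead of one entry per collected sample.
    public static void insertDailyDocument(DBCollection history, String refId, long midnight) {
        history.createIndex(new BasicDBObject("ref_id", 1).append("midnight", 1));

        List<DBObject> values = new ArrayList<DBObject>();
        values.add(new BasicDBObject("time", midnight + 5).append("value", "XXX"));

        DBObject day = new BasicDBObject("ref_id", refId)
                .append("midnight", midnight)
                .append("values", values);

        history.insert(day);
    }
}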
Early Space Allocation
If documents are created from metrics collected throughout the day, then both space allocation and index updates are required throughout the day as part of normal business operations. As discussed above, nesting historical entries within a single document improves the locality of the accessed information. But if normal operations append to an existing document, then in most instances the document must be moved and all associated indexes must be updated in order to accomplish the append.
With FireScope Stratis, optimal update operations are achieved by allocating a full day's worth of history records for each expected metric. Each history record contains default values for the expected collection interval. The space for one day's worth of data is created in a scheduled operation that is run once per day. Then, as metrics are collected throughout the day, the appropriate bucket (array entry) is simply updated. Since the update does not change the size of the document, no document movements are needed throughout the day, nor are index updates needed. The end result is a system that achieves optimal performance. While I am unable to share actual performance metrics for this approach, I can share that the relative performance difference is significant. It is also worth noting that you would need to take great care in measuring the performance impact of this architecture choice, because MongoDB has the inherent ability to queue update operations, thus masking the real performance benefit of this enhancement.
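The sketch below illustrates the idea under the same assumed schema: a scheduled job preallocates 288 five-minute buckets with same-sized default values, and each collected metric then overwrites its bucket in place. Again, the names and intervals are assumptions for illustration:

import java.util.ArrayList;
import java.util.List;

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;

public class PreallocationExample {

    private static final int BUCKETS_PER_DAY = 288;     // one bucket per 5-minute interval
    private static final int INTERVAL_SECONDS = 300;

    // Scheduled once per day: write the full document up front, with default
    // values of the same BSON type and size as real samples, so later updates
    // never grow the document.
    public static void preallocateDay(DBCollection history, String refId, long midnight) {
        List<DBObject> values = new ArrayList<DBObject>(BUCKETS_PER_DAY);
        for (int i = 0; i < BUCKETS_PER_DAY; i++) {
            values.add(new BasicDBObject("time", midnight + (long) i * INTERVAL_SECONDS)
                    .append("value", 0.0));
        }
        history.insert(new BasicDBObject("ref_id", refId)
                .append("midnight", midnight)
                .append("values", values));
    }

    // As each metric arrives, overwrite the matching bucket in place; because the
    // document size is unchanged, no document moves or index updates are required.
    public static void recordSample(DBCollection history, String refId, long midnight,
                                    long sampleTime, double value) {
        int bucket = (int) ((sampleTime - midnight) / INTERVAL_SECONDS);
        DBObject query = new BasicDBObject("ref_id", refId).append("midnight", midnight);
        DBObject update = new BasicDBObject("$set",
                new BasicDBObject("values." + bucket + ".value", value));
        history.update(query, update);
    }
}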
Conclusion
If you are undertaking a transition to MongoDB, or new development on MongoDB, then choosing a data mapping technology wisely can have a significant impact on your application's performance. Consider also Minimal Field Retrieval, Data Aggregation, and Early Space Allocation as vehicles to optimize your application's performance. You may also realize additional benefits, such as the reduced network bandwidth that comes with minimal field retrieval and the reduction in index size that can result from data aggregation. We sincerely hope that you have benefited from the time invested in reading this article and wish you the best in all of your Mongo development endeavors.
More Stories By Pete Whitney
Pete Whitney serves as Vice President of Cloud Development at FireScope Inc. His primary role at FireScope is overseeing the architecture and development of FireScope Stratis and ensuring that its product line is the envy of the IT world. The architectural cornerstones of the Stratis product are unlimited scalability, built-in redundancy, and no single point of failure.
In the advertising industry Pete designed and delivered DG Fastchannel’s internet-based advertising distribution architecture. Pete also excelled in other areas including design enhancements in robotic machine vision systems for FSI International Inc. These enhancements included mathematical changes for improved accuracy, improved speed, and automated calibration. He also designed a narrow spectrum light source, and a narrow spectrum band pass camera filter for controlled machine vision imaging.
Pete graduated Cum Laude from the University of Texas at Dallas, and holds a BS in Computer Science. Pete can be contacted via Email at [email protected].