By Srinivasan Sundara Rajan | September 13, 2012

Big Data Predictions
In the recent release of the '2012 Hype Cycle of Emerging Technologies,' the research firm Gartner evaluated several technologies to come up with a list of those that will dominate the future. "Big Data"-related technologies form a significant portion of the list; in particular, the following technologies revolve around the concept and usage of Big Data.
- Social Analytics: Social analytics allows marketers to identify sentiment and spot trends in order to serve customers better.
- Activity Streams: Activity streams are the future of enterprise collaboration, uniting people, data, and applications in real time in a central, accessible, virtual interface. Think of a company social network where every employee, system, and business process exchanges up-to-the-minute information about their activities and outcomes.
- Natural Language Question Answering: NLP is a technique for analyzing naturally occurring text as part of linguistic processing to achieve human-like language understanding.
- Video Analytics: Forming a conceptual and contextual understanding of the content of a video.
- Context Enriched Services: Context-enriched services use information about a person or an object to proactively anticipate the user's need and serve up the content.
These areas are just representative, but in general many of the emerging technologies revolve around the ability to process large amounts of data from hitherto unconventional sources and extract meaning from them.
A general search for Big Data on Google, or any other technical forum, shows that Big Data is almost synonymous with Hadoop, the main reason being that the storage and computational operations in Hadoop are parallel by design. Hadoop is highly scalable. Data can be accessed and operated upon without interdependencies until the results need to be reduced - and even the reduction itself can be performed in parallel. The result is that large amounts of data can be processed concurrently by many servers, greatly reducing the time to obtain results.
MapReduce is the basic component of Hadoop. It is a parallel programming framework that processes unordered data. All data are composed of a key and an associated value, and processing occurs mainly in two phases: the Map phase and the Reduce phase.
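The two phases can be sketched in plain Python (a conceptual model of the MapReduce flow, not the actual Hadoop API) using the canonical word-count job, where the mapper emits (word, 1) pairs and the reducer sums the counts for each key:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records, mapper):
    """Apply the mapper to each record independently -
    this is the step Hadoop runs in parallel across nodes."""
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def reduce_phase(pairs, reducer):
    """Group intermediate (key, value) pairs by key, then reduce
    each group - Hadoop's shuffle/sort step followed by Reduce."""
    pairs.sort(key=itemgetter(0))
    return {key: reducer(key, [v for _, v in group])
            for key, group in groupby(pairs, key=itemgetter(0))}

def tokenize(line):
    # Mapper: emit (word, 1) for every word in the input line.
    return [(word, 1) for word in line.split()]

def total(word, counts):
    # Reducer: sum all counts emitted for one word.
    return sum(counts)

lines = ["big data needs big tools", "hadoop processes big data"]
counts = reduce_phase(map_phase(lines, tokenize), total)
```

Because each record is mapped independently and each key is reduced independently, both phases can be distributed across many machines, which is exactly the property the article attributes to Hadoop.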
However, careful analysis of the above technologies like social analytics, video analytics, and NLP reveals that while all of them definitely need the massively parallel processing power of Hadoop, they also need a level of intelligence that extracts meaningful information, so that Hadoop's Map and Reduce functions can deliver the required insight.
UIMA and Big Data Analytics
Hadoop in its raw form will not provide all the insight required for Big Data processing in technologies like those mentioned earlier. This is evident from the fact that most tutorials and examples on Hadoop Big Data processing are about counting words or parsing text streams for specific words. These examples are good from an academic perspective but may not address the real-life needs of Big Data processing and the associated insight enterprises require.
UIMA stands for Unstructured Information Management Architecture. It is a component software architecture for the development, discovery, composition and deployment of multi-modal analytics for the analysis of unstructured information and its integration with search and knowledge management technologies.
The UIMA architecture supports the development, discovery, composition and deployment of multi-modal analytics, including text, audio and video.
Once the initial processors understand video, audio, and other media documents such as email, and create textual meaning out of them, such as strings of tokens or other patterns, these values can be parsed by the various annotators of the UIMA pipeline.
The parser subcomponent is responsible for converting the crawled document from its native format into a form the analysis pipeline can process.
UIMA is an architecture in which basic building blocks called Analysis Engines (AEs) are composed to analyze a document and infer and record descriptive attributes about the document as a whole, and/or about regions therein. This descriptive information, produced by AEs is referred to generally as analysis results. Analysis results typically represent meta-data about the document content. One way to think about AEs is as software agents that automatically discover and record meta-data about original content.
Analysis Engines are constructed from building blocks called Annotators. An annotator is a component that contains analysis logic. Annotators analyze an artifact (for example, a text document) and create additional data (metadata) about that artifact. It is a goal of UIMA that annotators need not be concerned with anything other than their analysis logic - for example the details of their deployment or their interaction with other annotators.
An Analysis Engine (AE) may contain a single annotator (this is referred to as a Primitive AE), or it may be a composition of others and therefore contain multiple annotators (this is referred to as an Aggregate AE).
Some examples of annotators include:
- Language Identification annotator
- Linguistic Analysis annotator
- Dictionary Lookup annotator
- Named Entity Recognition annotator
- Pattern Matcher annotator
- Classification Module annotator
- Custom annotators
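Two of the listed annotator types, dictionary lookup and pattern matching, are simple enough to sketch directly. The functions below are illustrative stand-ins, not UIMA components; they show the kind of typed, span-based metadata such annotators record:

```python
import re

def dictionary_lookup(text, dictionary, entity_type):
    """Mark every occurrence of a dictionary term with its type
    and character span - the essence of a dictionary annotator."""
    spans = []
    for term in dictionary:
        for m in re.finditer(re.escape(term), text):
            spans.append((entity_type, m.start(), m.end(), m.group()))
    return spans

def pattern_matcher(text, pattern, entity_type):
    """Mark every regex match, e.g. ISO dates or ticker symbols."""
    return [(entity_type, m.start(), m.end(), m.group())
            for m in re.finditer(pattern, text)]

text = "Gartner published the Hype Cycle on 2012-08-16."
companies = dictionary_lookup(text, ["Gartner"], "Company")
dates = pattern_matcher(text, r"\d{4}-\d{2}-\d{2}", "Date")
```

Each tuple is an annotation: a type plus the span of text it describes. Downstream annotators (classification, named entity resolution) can consume these spans rather than re-reading the raw document.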
While a detailed description of annotators for processing unstructured data is beyond the scope of this article, you can appreciate the power of an annotator with one specific example below.
The OpenCalais Annotator component wraps the OpenCalais web service and makes the OpenCalais analysis results available in UIMA. OpenCalais can detect a large variety of entities, facts, and events, for example persons, companies, acquisitions, and mergers.
Summary
As is evident from the above, frameworks like UIMA extend Big Data processing toward much more meaningful insights and map them to real-world scenarios. While the massively parallel processing ability of Hadoop will be a key factor in Big Data initiatives, it is not enough on its own, and frameworks like UIMA will play a much larger part.
Copyright © 2012 SYS-CON Media, Inc. — All Rights Reserved.
Syndicated stories and blog feeds, all rights reserved by the author.
More Stories By Srinivasan Sundara Rajan
Srinivasan Sundara Rajan works at Gavs Technologies as a Chief Architect. His primary focus is enabling agile enterprises by facilitating the adoption of the Everything-as-a-Service model, with particular concentration on BPaaS (Business Process as a Service). Srinivasan is currently writing a series of articles on industry SaaS/BPaaS use cases which enterprises can adopt. All the views expressed are Srinivasan's independent analysis of industry and solutions and need not necessarily be those of his current or past organizations. Srinivasan would like to thank everyone who augmented his architectural skills with analytical ideas.