
By Bill McColl | October 15, 2010
Big data is creating a massive disruption for the IT industry. Faced with exponentially growing data volumes in every area of business and the web, companies around the world are looking beyond their current databases and data warehouses for new ways to handle this data deluge.
Taking a lead from Google, a number of organizations have been exploring the potential of MapReduce, and its open source clone Hadoop, for big data processing. The MapReduce/Hadoop approach is based on the idea that what's needed is not database processing with SQL queries, but rather dataflow computing built from simple parallel programming primitives such as map and reduce.
As Google and others have shown, this kind of basic dataflow programming model can be implemented as a coarse-grain set of parallel tasks running across hundreds or thousands of machines, carrying out large-scale batch processing on massive data sets.
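To make the programming model concrete, here is a minimal single-machine sketch of the map/reduce dataflow pattern, using word count as the standard example. The function names and the in-memory shuffle step are illustrative only; in a real MapReduce or Hadoop job the framework distributes each phase across the cluster.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit one (word, 1) pair for every word in the input.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(key, values):
    # Reduce: combine all values emitted for a single key.
    return key, sum(values)

def run_job(documents):
    # Shuffle: group the mapped pairs by key. On a real cluster the
    # framework performs this grouping across the network, and each
    # reduce group can then run on a different machine in parallel.
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_phase(doc):
            groups[key].append(value)
    return dict(reduce_phase(k, vs) for k, vs in groups.items())

print(run_job(["big data is big", "data flows in parallel"]))
# {'big': 2, 'data': 2, 'is': 1, 'flows': 1, 'in': 1, 'parallel': 1}
```

The parallelism comes from the fact that every map call and every reduce group is independent, which is exactly what makes the model easy to scale out for batch workloads.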
Google itself has been using MapReduce for batch processing for over six years, and others, such as Facebook, eBay and Yahoo, have been using Hadoop for the same kind of batch processing for several years now. So today, parallel dataflow is firmly established as an alternative to databases and data warehouses for offline batch processing of big data. But now the game is changing again...
In recent months, Google has realized that the web is now entering a new era, the realtime era, and that batch processing systems such as MapReduce and Hadoop cannot deliver performance anywhere near the speed required for new realtime services such as Google Instant. Google noted that
- "MapReduce isn't suited to calculations that need to occur in near real-time"
and that
- "You can't do anything with it that takes a relatively short amount of time, so we got rid of it"
Other industry leaders, such as Jeff Jonas, Chief Scientist for Analytics at IBM, have made similar remarks in recent weeks. In his recent video "Big Thoughts on Big Data", Jonas notes that organizations grappling with a relentless avalanche of realtime data, armed with only batch processing tools to handle it, will get dumber over time rather than smarter.
- "The idea of waiting for a batch job to run doesn't cut it. Instead, how can an organization make sense of what it knows, as a transaction is happening, so that it can do something about it right then"
- "I'm not a big fan of batch processes... I've never seen a batch system grow up an become a realtime streaming system, but you can take a realtime streaming system and make it eat batches all day long"
- "I like Hadoop but it's meant for batch activities. That's not the kind of back-end you would use for realtime sense-making systems"
So coarse-grain dataflow architectures such as Hadoop are good for batch, but bad for realtime.
To power realtime big data apps we need a completely new type of fine-grain dataflow architecture: one that can, for example, continuously analyze a stream of events at a rate of one million events per second per server, and deliver results with a maximum latency of five seconds between data in and analytics out.
At Cloudscale we set out to crack this major technical problem, and to build the world's first "realtime data warehouse". The linearly scalable Cloudscale parallel dataflow architecture not only delivers game-changing realtime performance on commodity hardware, but also, as Jeff Jonas notes above, can "eat batches all day long" like a traditional MapReduce or Hadoop architecture. There isn't really an established name yet for such a system. I guess we could call it a "Redoop" architecture (Realtime Dataflow on Ordinary Processors, or Realtime Hadoop).
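To make "fine-grain dataflow" concrete, here is a minimal sketch of a continuous, per-second windowed aggregator. This is a generic illustration under assumed names (StreamAggregator, on_event), not Cloudscale's actual implementation, and a production system would also partition the stream across cores and machines. The key property it shows: each window's results are emitted the moment the window closes, so analytics lag the data by roughly one window rather than by a full batch cycle.

```python
from collections import defaultdict

class StreamAggregator:
    """Counts events per key in tumbling windows, emitting each window
    the moment it closes. Assumes events arrive in timestamp order."""

    def __init__(self, window_seconds=1):
        self.window = window_seconds
        self.current_start = None          # start time of the open window
        self.counts = defaultdict(int)

    def on_event(self, timestamp, key):
        bucket = int(timestamp // self.window) * self.window
        if self.current_start is None:
            self.current_start = bucket
        elif bucket > self.current_start:
            # A new window has begun: emit the finished one immediately,
            # so latency is bounded by the window size, not a batch cycle.
            self.flush()
            self.current_start = bucket
        self.counts[key] += 1

    def flush(self):
        print(f"window starting t={self.current_start}s: {dict(self.counts)}")
        self.counts.clear()

# Synthetic event stream: (timestamp in seconds, event type).
events = [(0.1, "view"), (0.4, "click"), (0.9, "view"),
          (1.2, "click"), (2.5, "view")]
agg = StreamAggregator()
for ts, key in events:
    agg.on_event(ts, key)
agg.flush()                                # emit the final open window
# window starting t=0s: {'view': 2, 'click': 1}
# window starting t=1s: {'click': 1}
# window starting t=2s: {'view': 1}
```

Note that the same loop happily consumes a historical batch of timestamped events, which is the sense in which a streaming system can "eat batches all day long" while the reverse does not hold.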
Copyright © 2010 SYS-CON Media, Inc. — All Rights Reserved.
More Stories By Bill McColl
Bill McColl left Oxford University to found Cloudscale. At Oxford he was Professor of Computer Science, Head of the Parallel Computing Research Center, and Chairman of the Computer Science Faculty. Along with Les Valiant of Harvard, he developed the BSP approach to parallel programming. He has led research, product, and business teams in a number of areas: massively parallel algorithms and architectures, parallel programming languages and tools, datacenter virtualization, realtime stream processing, big data analytics, and cloud computing. He lives in Palo Alto, CA.