Project Hoover: Auto-scaling streaming map-reduce applications (Conference)

Ramesh, R.; Hu, L.; Schwan, K. (2012). Project Hoover: Auto-scaling streaming map-reduce applications. 7-12. DOI: 10.1145/2378356.2378359

cited authors

  • Ramesh, R; Hu, L; Schwan, K

abstract

  • Real-time data processing frameworks like S4 and Flume have become scalable and reliable solutions for acquiring, moving, and processing the voluminous data continuously produced by large numbers of online sources. Yet these frameworks lack the elasticity to horizontally scale up or down based on current rates of input events and desired event-processing latencies. The Project Hoover middleware provides distributed methods for measuring, aggregating, and analyzing the performance of distributed Flume components, thereby enabling online configuration changes to meet varying processing demands. Experimental evaluations with a sample Flume data-processing application show Hoover's approach to be capable of dynamically and continuously monitoring Flume performance, demonstrating that such data can be used to right-size the number of Flume collectors according to different log production rates. Copyright 2012 ACM.
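The right-sizing idea in the abstract can be illustrated with a small sketch. This is not Hoover's actual implementation or API; the function name, parameters, and scaling policy below are all hypothetical, chosen only to show how a measured input rate and an observed latency might be combined to pick a collector count.

```python
import math

def desired_collectors(event_rate, per_collector_capacity,
                       observed_latency, target_latency,
                       current_count, min_count=1, max_count=64):
    """Hypothetical right-sizing rule for a pool of Flume collectors.

    event_rate             -- aggregate input events/sec across sources
    per_collector_capacity -- events/sec one collector sustains (assumed known)
    observed_latency       -- measured end-to-end processing latency (sec)
    target_latency         -- desired processing latency (sec)
    current_count          -- collectors currently running
    """
    # Capacity-based floor: enough collectors to absorb the input rate.
    needed = math.ceil(event_rate / per_collector_capacity)
    # Latency feedback: if the target is still missed, scale up one step
    # beyond the current configuration.
    if observed_latency > target_latency:
        needed = max(needed, current_count + 1)
    # Clamp to the allowed pool size.
    return max(min_count, min(max_count, needed))
```

A monitoring loop in this sketch would feed the aggregated measurements into `desired_collectors` each interval and reconfigure the pool whenever the returned count differs from `current_count`; scaling down happens naturally as `event_rate` falls and the capacity floor drops.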

publication date

  • October 26, 2012

Digital Object Identifier (DOI)

  • 10.1145/2378356.2378359

start page

  • 7

end page

  • 12