Real-time Analytics with Open Source Technologies

zer0p0int

Contributed 2015-02-25

REAL-TIME ANALYTICS WITH OPEN SOURCE TECHNOLOGIES
KAFKA · HADOOP · STORM · DRUID
Fangjin Yang · Gian Merlino, Software Engineers @ Metamarkets, 2013

OVERVIEW
‣ Problem: dealing with event data
‣ Motivation: evolution of a "real-time" stack
‣ Architecture: the "RAD" stack
‣ Next steps: try it out for yourself

THE PROBLEM

WE ARE METAMARKETS... AND WE ANALYZE DATA
[Figure: incoming event streams]

THE PROBLEM
‣ Arbitrary and interactive exploration
‣ Multi-tenancy: thousands of concurrent users
‣ Recency matters! Alert on major changes
‣ Scalability & availability

A SOLUTION
‣ Load all your data into Hadoop. Query it. Done!
‣ Good job guys, let's go home

FINDING A SOLUTION
Event Streams → Hadoop → Insight

PROBLEMS OF THE NAIVE SOLUTION
‣ MapReduce can handle almost every distributed computing problem
‣ MapReduce over your raw data is flexible but slow
‣ Hadoop is not optimized for query latency
‣ To optimize queries, we need a query layer

A FASTER QUERY LAYER

FINDING A SOLUTION
Event Streams → Hadoop → Query Engine → Insight

MAKE QUERIES FASTER
‣ What types of queries to optimize for?
• Revenue over time, broken down by demographic
• Top publishers by clicks over the last month
• Number of unique visitors, broken down by any dimension
• Not dumping the entire dataset
• Not examining individual events

FINDING A SOLUTION
Event Streams → Hadoop → RDBMS → Insight
Event Streams → Hadoop → NoSQL K/V stores → Insight
Event Streams → Hadoop → Commercial databases → Insight

DRUID AS A QUERY ENGINE

DRUID
‣ Druid project started in 2011
‣ Open sourced in Oct. 2012
‣ Growing community: ~40 contributors from many different organizations
‣ Designed for low-latency ingestion and aggregation
• Optimized for the types of queries we were trying to make

RAW DATA
timestamp             publisher          advertiser  gender  country  click  price
2011-01-01T01:01:35Z  bieberfever.com    google.com  Male    USA      0      0.65
2011-01-01T01:03:63Z  bieberfever.com    google.com  Male    USA      0      0.62
2011-01-01T01:04:51Z  bieberfever.com    google.com  Male    USA      1      0.45
...
2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Female  UK       0      0.87
2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Female  UK       0      0.99
2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Female  UK       1      1.53

ROLLUP DATA
timestamp             publisher          advertiser  gender  country  impressions  clicks  revenue
2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Male    USA      1800         25      15.70
2011-01-01T01:00:00Z  bieberfever.com    google.com  Male    USA      2912         42      29.18
2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Male    UK       1953         17      17.31
2011-01-01T02:00:00Z  bieberfever.com    google.com  Male    UK       3194         170     34.01
‣ Truncate timestamps
‣ GroupBy over string columns (dimensions)
‣ Aggregate numeric columns (metrics)

PARTITION DATA
‣ Shard data by time
‣ Immutable chunks of data called "segments"
Segment 2011-01-01T01/2011-01-01T02:
2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Male  USA  1800  25   15.70
2011-01-01T01:00:00Z  bieberfever.com    google.com  Male  USA  2912  42   29.18
Segment 2011-01-01T02/2011-01-01T03:
2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Male  UK   1953  17   17.31
2011-01-01T02:00:00Z  bieberfever.com    google.com  Male  UK   3194  170  34.01

IMMUTABLE SEGMENTS
‣ Fundamental storage unit in Druid
‣ Read consistency
‣ One thread scans one segment
‣ Multiple threads can access the same underlying data
‣ Segment sizes are chosen so a scan completes in milliseconds
‣ Simplifies distribution & replication

COLUMN ORIENTATION
timestamp             publisher          advertiser  gender  country  impressions  clicks  revenue
2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Male    USA      1800         25      15.70
2011-01-01T01:00:00Z  bieberfever.com    google.com  Male    USA      2912         42      29.18
‣ Scan/load only what you need
‣ Compression!
‣ Indexes!

ARCHITECTURE (EARLY DAYS)
Data → Hadoop → Historical Nodes ← Broker Nodes ← Queries

MORE PROBLEMS
‣ We've solved the query problem
• Druid gave us arbitrary data exploration & fast queries
‣ What about data freshness?
• Batch loading is slow!
• We need "real-time"
• Alerts, operational monitoring, etc.

A FASTER DATA PIPELINE

THE STORY SO FAR
Event Streams → Hadoop → Druid → Insight

THE STORY SO FAR
‣ Clients uploaded data to S3
‣ We used Hadoop + Pig to clean it up, transform, and join it
‣ We loaded the result into Druid
‣ Typical turnaround time: 2–8 hours

INGESTION DELAYS
‣ Let's build a streaming version of this data pipeline
‣ Three obstacles:
• Acquiring raw data
• Processing the data
• Loading processed data into Druid

FAST DELIVERY WITH KAFKA
‣ High-throughput event delivery service
‣ Straightforward, reliable design
‣ Buffers incoming data to give consumers time to process it
‣ We can place an HTTP API in front of this
Producers → Kafka Brokers → Consumers

FAST PROCESSING WITH STORM
‣ Storm is a stream processor: one event at a time
‣ We can already process our data using Hadoop MapReduce
‣ Let's translate that to streams
‣ "Load" operations stream data from Kafka
‣ "Map" operations are already stream-friendly
‣ "Reduce" operations can be windowed, with partitioned state

FAST LOADING WITH DRUID
‣ We have an indexing system
‣ We have a serving system that runs queries on data
‣ We can serve queries while building indexes!
‣ Real-time indexing workers do this
Kafka Brokers → Storm Workers → Druid Realtime Workers
• Realtime workers answer queries immediately via the Druid Query Broker
• Realtime workers periodically hand segments off to the Druid Historical Cluster

THE STORY SO FAR
Event Streams → Kafka → Storm → Druid → Insight
Event Streams → Hadoop → Druid → Insight

WHAT WE GAINED
‣ Druid queries reflect new events within seconds
‣ Systems are fully decoupled
‣ Brief processing delays during maintenance
• Because we need to restart Storm topologies
• But query performance and availability are not affected

WHAT WE GAVE UP
‣ Stream processing isn't perfect
‣ Difficult to handle corrections of existing data
‣ Windows may be too small for fully accurate operations
‣ Hadoop was actually good at these things

THE RETURN OF HADOOP
‣ An open-source "lambda architecture"
‣ Batch re-processing runs for all data older than a few hours
‣ Batch segments replace real-time segments in Druid
‣ Query broker merges results from both systems
• "Fixed-up," immutable, historical data: from Hadoop
• Real-time data: from Storm & real-time Druid

THE STACK
Event Streams → Kafka → Storm → Druid → Insight (real-time; only on-time data)
Event Streams → Kafka → Hadoop → Druid → Insight (some hours later; all data)

DO TRY THIS AT HOME

CORNERSTONES
‣ Druid - druid.io - @druidio
‣ Storm - storm.incubator.apache.org - @stormprocessor
‣ Hadoop - hadoop.apache.org
‣ Kafka - kafka.apache.org - @apachekafka

GLUE
Event Streams → Kafka → {Storm, Hadoop} → Druid → Insight
‣ storm-kafka (Kafka → Storm)
‣ Camus (Kafka → Hadoop)
‣ Tranquility (Storm → Druid)
‣ Druid's Hadoop indexer (Hadoop → Druid)

GET RADICAL
‣ Queries answered quickly, on fresh data
‣ Kafka provides fast, reliable event transport
‣ Storm and Hadoop clean and prepare data for Druid
‣ Druid handles queries and manages the serving layer
‣ "Real-time Analytics Data Stack"
‣ …a.k.a. the RAD Stack
‣ https://metamarkets.com/2014/building-a-data-pipeline/

THANK YOU
@DRUIDIO · @METAMARKETS
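As a companion to the deck, the ROLLUP DATA step (truncate timestamps, group by the string dimension columns, aggregate the numeric metric columns) can be sketched in plain Python. This is only an illustration of the idea, not Druid's actual ingestion code; `raw_events`, `truncate_to_hour`, and `rollup` are names invented for the sketch, and the sample rows are taken from the deck's RAW DATA table.

```python
from collections import defaultdict

# Raw events: (timestamp, publisher, advertiser, gender, country, click, price)
raw_events = [
    ("2011-01-01T01:01:35Z", "bieberfever.com", "google.com", "Male", "USA", 0, 0.65),
    ("2011-01-01T01:04:51Z", "bieberfever.com", "google.com", "Male", "USA", 1, 0.45),
    ("2011-01-01T02:00:00Z", "ultratrimfast.com", "google.com", "Female", "UK", 0, 0.99),
    ("2011-01-01T02:00:00Z", "ultratrimfast.com", "google.com", "Female", "UK", 1, 1.53),
]

def truncate_to_hour(ts):
    # "2011-01-01T01:01:35Z" -> "2011-01-01T01:00:00Z"
    return ts[:13] + ":00:00Z"

def rollup(events):
    # Key: truncated timestamp plus the dimension values.
    # Value: aggregated metrics [impressions, clicks, revenue].
    table = defaultdict(lambda: [0, 0, 0.0])
    for ts, publisher, advertiser, gender, country, click, price in events:
        key = (truncate_to_hour(ts), publisher, advertiser, gender, country)
        metrics = table[key]
        metrics[0] += 1        # impressions: one per raw event
        metrics[1] += click    # clicks
        metrics[2] += price    # revenue
    return dict(table)

for key, (impressions, clicks, revenue) in sorted(rollup(raw_events).items()):
    print(key, impressions, clicks, revenue)
```

Rollup shrinks storage from one row per raw event to one row per (hour, dimension combination), which is part of why a scan over a Druid segment can complete in milliseconds.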
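The deck's claim that "'Reduce' operations can be windowed, with partitioned state" can also be sketched in plain Python, under assumptions: events are bucketed into fixed time windows, and a running aggregate is kept per (window, key). A real Storm topology would shard this state across bolts and flush a window once it closes; this sketch just accumulates everything in one process, and `windowed_reduce` is an invented name, not a Storm API.

```python
from collections import defaultdict

def windowed_reduce(events, window_seconds=3600):
    """Group (epoch_seconds, key) events into fixed time windows,
    keeping a per-(window, key) running count, and return one
    (window_start, key, count) row per window."""
    state = defaultdict(int)  # (window_start, key) -> running count
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # truncate to window
        state[(window_start, key)] += 1
    return sorted((w, k, n) for (w, k), n in state.items())

events = [(3600, "bieberfever.com"), (3700, "bieberfever.com"),
          (7300, "ultratrimfast.com")]
print(windowed_reduce(events))
# -> [(3600, 'bieberfever.com', 2), (7200, 'ultratrimfast.com', 1)]
```

The partitioning falls out of the key: because each (window, key) pair owns independent state, different keys can be reduced on different workers, which is what makes the streaming "reduce" parallelizable.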
