liuyunws · Published 8 years ago

Kafka open source: Chaperone, Uber's Kafka cluster monitoring tool

## Chaperone

As a Kafka audit system, Chaperone monitors the completeness and latency of data streams. The audit metrics are persisted in a database so that Kafka users can quantify any loss in their topics.

Chaperone cuts the timeline into 10-minute buckets and assigns each message to the corresponding bucket according to its event time. The bucket's stats, such as the total message count, are updated accordingly. Periodically, the stats are sent out to a dedicated Kafka topic, say 'chaperone-audit'. ChaperoneCollector consumes those stats from this topic and persists them into a database.
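As a rough illustration of this bucketing scheme, the sketch below keeps per-bucket message counts keyed by the 10-minute window a message's event time falls into. The class and method names are hypothetical, not Chaperone's actual API, and a real auditor would also track latency stats and periodically flush and reset the buckets.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of the 10-minute bucketing idea described above;
// this is not the actual ChaperoneClient API.
public class AuditBuckets {
    private static final long BUCKET_MS = 10 * 60 * 1000L; // 10-minute buckets

    // bucket start time (epoch millis) -> message count seen in that bucket
    private final Map<Long, LongAdder> counts = new ConcurrentHashMap<>();

    /** Record one message by its event time (epoch millis). */
    public void track(long eventTimeMs) {
        long bucketStart = (eventTimeMs / BUCKET_MS) * BUCKET_MS;
        counts.computeIfAbsent(bucketStart, k -> new LongAdder()).increment();
    }

    /** Snapshot of the current stats; a periodic reporter would send these
     *  to a topic such as 'chaperone-audit' and reset the buckets. */
    public Map<Long, Long> snapshot() {
        Map<Long, Long> out = new HashMap<>();
        counts.forEach((bucket, adder) -> out.put(bucket, adder.sum()));
        return out;
    }
}
```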
Chaperone is made of several components:

1. **ChaperoneClient** is a library that can be embedded in a Kafka producer or consumer to audit messages as they flow through. The audit stats are sent to a dedicated Kafka topic, say 'chaperone-audit'.
2. **ChaperoneCollector** consumes the audit stats from 'chaperone-audit' and persists them into a database.
3. **ChaperoneService** audits messages kept in Kafka. Since it is built upon uReplicator, it consists of two subsystems: ChaperoneServiceController, which auto-detects topics in Kafka and assigns their topic-partitions to workers for auditing, and ChaperoneServiceWorker, which audits messages from its assigned topic-partitions. Together, ChaperoneService and ChaperoneCollector ensure each message is audited exactly once.

## Chaperone Quick Start

## Get the Code

Check out the Chaperone project:

```
git clone git@github.com:uber/chaperone.git
cd chaperone
```

This project contains everything you'll need to run Chaperone.

## Build Chaperone

Before you can run Chaperone, you need to build a package for it:

```
mvn clean package
```

Or run the command below to skip tests:

```
mvn clean package -DskipTests
```

## Set Up Local Test Environment

To test Chaperone locally, you need two systems: Kafka and ZooKeeper. The script `grid` helps you set them up.

- The command below will download, install, and start ZooKeeper and Kafka (named cluster1):

```
bin/grid bootstrap
```

## Start ChaperoneService

- Start the ChaperoneService Controller:

```
./ChaperoneDistribution/target/ChaperoneDistribution-pkg/bin/start-chaperone-controller.sh
```

- Start the ChaperoneService Worker:

```
./ChaperoneDistribution/target/ChaperoneDistribution-pkg/bin/start-chaperone-worker.sh
```

## Generate Load

- Create a dummyTopic in Kafka and produce some dummy data:

```
./bin/produce-data-to-kafka-topic-dummyTopic.sh
```

- Check that the data was successfully produced to Kafka with the console consumer:

```
./deploy/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181/cluster1 --topic dummyTopic
```

You should see data like this:

```
Kafka topic dummy topic data 1
Kafka topic dummy topic data 2
Kafka topic dummy topic data 3
Kafka topic dummy topic data 4
…
```

## Check Audit Stats

In this example, the topic dummyTopic is auto-detected and assigned to a worker to audit. Periodically, the audit stats are sent to a topic called 'chaperone-audit':

```
./deploy/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181/cluster1 --topic chaperone-audit
```

You can also manually add a topic to audit with the command below:

```
curl -X POST -d '{"topic":"dummyTopic", "numPartitions":"1"}' http://localhost:9000/topics
```

## Start ChaperoneCollector

To start ChaperoneCollector, MySQL is required and Redis is optional. MySQL is used to persist the audit stats, and Redis is used for deduplication; deduplication can be turned off. The configuration file for ChaperoneCollector is ./config/chaperonecollector.properties, which may need to be updated with your MySQL and Redis connection settings.

- Start ChaperoneCollector:

```
./ChaperoneDistribution/target/ChaperoneDistribution-pkg/bin/start-chaperone-collector.sh
```
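Since the deduplication step is only mentioned at a high level above, here is a hedged illustration of the idea using the Jedis client: an audit report id is checked against Redis with an atomic SET NX EX before the report is written to MySQL. This is not ChaperoneCollector's actual implementation; the class name, key prefix, and TTL are made up for the example.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

// Hedged sketch of Redis-based deduplication; not ChaperoneCollector's actual code.
public class DedupSketch {
    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Returns true if this audit report id has not been seen before,
     *  i.e. the report should be persisted into MySQL. */
    public boolean firstTimeSeen(String reportId) {
        // SET key value NX EX ttl: set only if the key is absent, and expire
        // it after 24h so the dedup key space stays bounded.
        String reply = jedis.set("chaperone:dedup:" + reportId, "1",
                SetParams.setParams().nx().ex(24 * 60 * 60));
        return "OK".equals(reply);
    }
}
```

Expiring the dedup keys keeps stale entries from accumulating while still covering the window in which the same audit message could plausibly be reprocessed, for example after a collector restart.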