Building and Deploying a Kafka Cluster for Flume Log Collection
Table of Contents
- Kafka Overview
- Message Queues
- Kafka Use Cases
- Two Message Queue Models
- Key Concepts in Kafka
- Consumer Groups
- Idempotence
- Kafka Cluster Setup
- Kafka Cluster Deployment
- Kafka Start/Stop Script
- Kafka Command-Line Operations
- 1. List Kafka Topics
- 2. Create Kafka Topics
- 3. Delete Kafka Topics
- 4. Consume Messages
- 5. Describe Kafka Topics
- 6. Kafka Stress Testing
Kafka Overview
Message Queues
- A message queue is a component used to store messages.
- Programs can put messages into the queue and fetch messages from it.
- In most cases a message queue is not permanent storage; it serves as temporary storage with a retention period (for example, messages are kept in the MQ for 10 days).
- Message queue middleware are the components that implement message queues, e.g. Kafka, ActiveMQ, RabbitMQ, RocketMQ, ZeroMQ.
Kafka Use Cases
- Asynchronous processing
  - Time-consuming operations can be handed off to other systems: messages that need processing are stored in the queue, and other systems consume them from it.
  - Common examples: sending SMS verification codes, sending e-mails.
- System decoupling
  - Originally one microservice called another directly over an interface (HTTP); this couples them tightly, and any change to the interface can make the system unavailable.
  - With a message queue the systems are decoupled: the first microservice puts a message into the queue, and the other microservice takes it out and processes it.
- Traffic peak shaving
  - Because a message queue is low-latency, highly reliable, and high-throughput, it can absorb large bursts of concurrent traffic.
- Log processing
  - A message queue can serve as temporary storage for logs, or as a communication channel.
Two Message Queue Models
- Producer/consumer model
  - The producer is responsible for producing messages into the MQ.
  - The consumer is responsible for fetching messages from the MQ.
  - Producer and consumer are decoupled; the producer may be one program and the consumer another.
- Message queue modes (see the sketch after this list)
  - Point-to-point: each message is consumed by exactly one consumer.
  - Publish/subscribe: a message can be consumed by multiple consumers.
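In Kafka both modes are expressed through consumer groups, so they can be tried from the command line. The sketch below is illustrative only: it assumes the cluster built later in this article (a broker at hadoop102:9092) and a hypothetical topic named demo with at least two partitions; the console-tool flags may differ slightly between Kafka versions.

```bash
# Point-to-point: two consumers share one group.id, so each message
# goes to only one of them (run each consumer in its own terminal).
bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 \
  --topic demo --consumer-property group.id=g1      # terminal 1
bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 \
  --topic demo --consumer-property group.id=g1      # terminal 2

# Publish/subscribe: a consumer in a different group also receives every
# message, independently of group g1.
bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 \
  --topic demo --consumer-property group.id=g2      # terminal 3

# Produce a few test messages and watch which terminals print them.
bin/kafka-console-producer.sh --broker-list hadoop102:9092 --topic demo
```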
Key Concepts in Kafka
- broker
  - A Kafka server process; both producers and consumers connect to brokers.
  - A cluster consists of multiple brokers, which together provide load balancing and fault tolerance for the Kafka cluster.
- producer: produces messages.
- consumer: consumes messages.
- topic: a Kafka cluster can contain multiple topics, and a topic can contain multiple partitions.
  - A topic is a logical structure; producing and consuming messages both require specifying a topic.
- partition: partitions are what make a Kafka cluster distributed; the messages of one topic can be spread across that topic's partitions (see the example after this list).
- replica: replicas provide fault tolerance for the Kafka cluster, i.e. for its partitions; a topic should have more than one replica.
- consumer group: the consumers in a consumer group jointly consume the partitions of a topic. Every consumer group has a unique name; consumers configured with the same group.id belong to the same group.
- offset: the offset, tracked per consumer and per partition, determines from where data is pulled.
- group.id: identifies a consumer group. A group can contain multiple consumers; consumers with the same group.id belong to the same group and jointly consume the data of a Kafka topic.
- Kafka is a pull-based message queue: each consumer keeps an offset that indicates from which message to start pulling data.
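To make topic, partition, and replica concrete, the hedged sketch below creates a hypothetical topic (the name concept_demo is made up) with 3 partitions and 2 replicas and then describes it; it assumes the ZooKeeper address used later in this article.

```bash
# Create a topic with 3 partitions; each partition is stored as 2 replicas
# spread across the brokers of the cluster.
bin/kafka-topics.sh --zookeeper hadoop102:2181 --create \
  --topic concept_demo --partitions 3 --replication-factor 2

# Describe it: each output line shows a partition, the broker acting as its
# leader, the brokers holding its replicas, and the in-sync replicas (Isr).
bin/kafka-topics.sh --zookeeper hadoop102:2181 --describe --topic concept_demo
```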
Consumer Groups
- A consumer group can contain multiple consumers, which jointly consume the data of a topic.
- If a topic has only one partition, that partition can be consumed by only one consumer within a given group.
- A topic with N partitions can be consumed by up to N consumers within the same group (the sketch after this list shows how to inspect the assignment).
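To see how the partitions of a topic are divided among the consumers of a group, and how far each one has read, Kafka ships a kafka-consumer-groups.sh tool. A sketch, assuming the cluster built later in this article; the group name g1 is hypothetical and the output columns vary slightly between versions:

```bash
# List the consumer groups known to the cluster.
bin/kafka-consumer-groups.sh --bootstrap-server hadoop102:9092 --list

# For one group, show each partition with its current offset, the log-end
# offset, the lag, and which consumer in the group owns that partition.
bin/kafka-consumer-groups.sh --bootstrap-server hadoop102:9092 \
  --describe --group g1
```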
Idempotence
- Producer message duplication
  - When a Kafka producer sends a message to a partition, Kafka stores the message in the partition and returns an ack to the producer indicating whether the operation succeeded and the message was saved. If the ack response is lost, the producer retries and resends the message it believes was not delivered, so Kafka stores an identical message a second time.
- Idempotence can be enabled in Kafka (a hedged configuration sketch follows this list)
  - With idempotence enabled, the producer attaches a pid (a unique producer id) and a sequence number (a monotonically increasing sequence per message) to each message it produces.
  - The pid and sequence number are sent together with the message.
  - When Kafka receives the message, it stores the message along with its pid and sequence number.
  - If the ack fails and the producer retries, Kafka uses the pid and sequence number to decide whether another copy of the message needs to be stored.
  - Decision rule: if the sequence number sent by the producer is less than or equal to the sequence number already recorded for that partition, the message is treated as a duplicate and is not stored again.
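Idempotence is a producer-side setting. Below is a minimal sketch of switching it on from the console producer, assuming Kafka 0.11 or newer and that the tool supports --producer-property (otherwise the same keys can go into a properties file passed via --producer.config). Enabling it also requires acks=all and retries > 0; on 0.11 the number of in-flight requests per connection must be 1 (newer versions allow up to 5).

```bash
bin/kafka-console-producer.sh --broker-list hadoop102:9092 --topic topic_start \
  --producer-property enable.idempotence=true \
  --producer-property acks=all \
  --producer-property retries=3 \
  --producer-property max.in.flight.requests.per.connection=1
```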
Kafka Cluster Setup
Prerequisite: the ZooKeeper cluster must already be up and running.
Kafka Cluster Deployment
Upload and extract the installation package
[lili@hadoop102 software]$ tar -zxvf kafka_2.11-0.11.0.0.tgz -C /opt/module/
Rename the extracted directory
[lili@hadoop102 software]$ mv kafka_2.11-0.11.0.0/ kafka
Create a logs folder under /opt/module/kafka
It will be used to store Kafka's runtime logs.
[lili@hadoop102 kafka]$ mkdir logs
Edit the Kafka configuration file
[lili@hadoop102 kafka]$ vim config/server.properties

############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true

############################# Socket Server Settings #############################
# The address the socket server listens on.
#   FORMAT: listeners = listener_name://host_name:port
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers.
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Maps listener names to security protocols.
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads the server uses for receiving requests from the network and sending responses
num.network.threads=3
# The number of threads the server uses for processing requests, which may include disk I/O
num.io.threads=8
# A socket is the endpoint of a network connection, identified by an IP address and a port;
# the two settings below size the buffers the server's sockets use when sending and receiving.
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
# (the path where Kafka stores its data)
log.dirs=/opt/module/kafka/logs
# The default number of log partitions per topic on this broker
num.partitions=1
# The number of threads per data directory used for log recovery at startup and flushing at shutdown
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# Replication factor for the internal topics "__consumer_offsets" and "__transaction_state".
# For anything other than development testing, a value greater than 1 (such as 3) is recommended.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################
# The minimum age of a log file to be eligible for deletion (segments older than this are deleted)
log.retention.hours=168
# A size-based retention policy for logs
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment is created.
log.segment.bytes=1073741824
# The interval at which log segments are checked against the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################
# Zookeeper connection string: comma separated host:port pairs
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################
# Delay (ms) before the GroupCoordinator performs the initial consumer rebalance.
# 0 gives a better out-of-the-box experience for development and testing;
# in production the default of 3 seconds helps avoid unnecessary rebalances at startup.
group.initial.rebalance.delay.ms=0
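Only a handful of keys differ from the stock file (broker.id, delete.topic.enable, log.dirs, zookeeper.connect). As a convenience, a quick sanity check of those edits might look like this sketch:

```bash
# Print the edited keys from the working copy of the config.
grep -E '^(broker\.id|delete\.topic\.enable|log\.dirs|zookeeper\.connect)=' \
  config/server.properties
```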
Distribute the kafka directory to the three servers
[lili@hadoop102 module]$ xsync kafka/
After distributing, edit /opt/module/kafka/config/server.properties on the other two servers and change broker.id to 1 and 2 respectively (a sed sketch follows).
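Instead of editing the files by hand, the broker ids on the other two machines can also be bumped over ssh. A sketch, assuming passwordless ssh is already set up (as it is for xsync) and that hadoop102 keeps broker.id=0:

```bash
# Give hadoop103 and hadoop104 unique broker ids after the sync.
ssh hadoop103 "sed -i 's/^broker.id=0/broker.id=1/' /opt/module/kafka/config/server.properties"
ssh hadoop104 "sed -i 's/^broker.id=0/broker.id=2/' /opt/module/kafka/config/server.properties"
```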
Configure environment variables
Configure the environment variables on all three servers:
[lili@hadoop102 module]$ vim /etc/profile.d/env.sh
#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka
export PATH=$PATH:$KAFKA_HOME/bin
[lili@hadoop102 module]$ source /etc/profile.d/env.sh
Start the cluster
[lili@hadoop102 kafka]$ bin/kafka-server-start.sh config/server.properties &
[lili@hadoop103 kafka]$ bin/kafka-server-start.sh config/server.properties &
[lili@hadoop104 kafka]$ bin/kafka-server-start.sh config/server.properties &
Then open another shell window connected to hadoop102 and check the running processes:
[lili@hadoop102 ~]$ xcall.sh jps
----------hadoop102----------
21025 NameNode
21460 NodeManager
21142 DataNode
21543 JobHistoryServer
25946 Kafka
20826 QuorumPeerMain
23707 Application
30063 Jps
----------hadoop103----------
16547 Kafka
13589 QuorumPeerMain
13800 ResourceManager
15736 Application
17993 Jps
13918 NodeManager
13662 DataNode
----------hadoop104----------
19041 Jps
14242 DataNode
14358 SecondaryNameNode
16840 Kafka
14447 NodeManager
14175 QuorumPeerMain
[lili@hadoop102 ~]$
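Besides jps, another way to confirm that all three brokers are up is to look at the broker ids registered in ZooKeeper, using the ZooKeeper CLI bundled with Kafka. A sketch; the exact invocation and output format may vary with the ZooKeeper version:

```bash
# Should print the ids configured above, e.g. [0, 1, 2].
bin/zookeeper-shell.sh hadoop102:2181 ls /brokers/ids
```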
Stop the cluster
[lili@hadoop102 kafka]$ bin/kafka-server-stop.sh stop
[lili@hadoop103 kafka]$ bin/kafka-server-stop.sh stop
[lili@hadoop104 kafka]$ bin/kafka-server-stop.sh stop
Check the running processes again after stopping the cluster:
[lili@hadoop102 ~]$ xcall.sh jps
----------hadoop102----------
21025 NameNode
21460 NodeManager
21142 DataNode
21543 JobHistoryServer
30135 Jps
20826 QuorumPeerMain
23707 Application
----------hadoop103----------
18050 Jps
13589 QuorumPeerMain
13800 ResourceManager
15736 Application
13918 NodeManager
13662 DataNode
----------hadoop104----------
14242 DataNode
14358 SecondaryNameNode
19102 Jps
14447 NodeManager
14175 QuorumPeerMain
[lili@hadoop102 ~]$
Kafka Start/Stop Script
Write the script
[lili@hadoop102 bin]$ vim kf.sh
#!/bin/bash

case $1 in
"start"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo " --------starting $i Kafka-------"
        # -daemon runs the broker as a daemon process: it is detached from the
        # terminal and does not interact with a client; unlike an ordinary
        # process, it keeps running after the terminal that started it is closed.
        ssh $i "/opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties"
    done
};;
"stop"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo " --------stopping $i Kafka-------"
        ssh $i "/opt/module/kafka/bin/kafka-server-stop.sh stop"
    done
};;
esac
Make the script executable
[lili@hadoop102 bin]$ chmod 777 kf.sh
Start the cluster with the script
[lili@hadoop102 module]$ kf.sh start
Stop the cluster with the script
[lili@hadoop102 module]$ kf.sh stop
Kafka Command-Line Operations
1. List Kafka Topics
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 --list
topic_event
topic_start
If these two topics do not show up in the list, the most likely reason is that they have not been created yet; create them as described in the next step.
2. Create Kafka Topics
In the /opt/module/kafka/ directory, create the startup-log topic and the event-log topic.
Create the startup-log topic
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181,hadoop103:2181,hadoop104:2181 --create --replication-factor 1 --partitions 1 --topic topic_start
Create the event-log topic
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181,hadoop103:2181,hadoop104:2181 --create --replication-factor 1 --partitions 1 --topic topic_event
3. Delete Kafka Topics
Delete the startup-log topic
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --delete --zookeeper hadoop102:2181,hadoop103:2181,hadoop104:2181 --topic topic_start
Delete the event-log topic
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --delete --zookeeper hadoop102:2181,hadoop103:2181,hadoop104:2181 --topic topic_event
(Deletion takes effect because delete.topic.enable=true was set in server.properties.)
4. Consume Messages
Consume the startup-log topic
[lili@hadoop102 kafka]$ bin/kafka-console-consumer.sh \
--bootstrap-server hadoop102:9092 --from-beginning --topic topic_start
--from-beginning: reads all of the topic's existing data from the beginning. Include this option or not depending on the business scenario.
Consume the event-log topic
[lili@hadoop102 kafka]$ bin/kafka-console-consumer.sh \
--bootstrap-server hadoop102:9092 --from-beginning --topic topic_event
5. Describe Kafka Topics
Describe the startup-log topic
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 \
--describe --topic topic_start
Describe the event-log topic
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 \
--describe --topic topic_event
6. Kafka Stress Testing
Use the official scripts that ship with Kafka to stress-test the cluster:
kafka-producer-perf-test.sh
kafka-consumer-perf-test.sh
Kafka Producer Stress Test
[lili@hadoop102 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 100 --num-records 100000 --throughput -1 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092
Notes:
record-size is the size of each record, in bytes.
num-records is the total number of records to send.
throughput is the number of records per second; setting it to -1 means no throttling, which measures the producer's maximum throughput.
Output:
100000 records sent, 27510.316369 records/sec (2.62 MB/sec), 1303.49 ms avg latency, 1597.00 ms max latency, 1434 ms 50th, 1569 ms 95th, 1589 ms 99th, 1595 ms 99.9th.
Interpretation: in this run 100,000 records were written at a throughput of 2.62 MB/sec, with an average write latency of 1303.49 ms and a maximum latency of 1597.00 ms. (My machine is slow!)
Kafka Consumer Stress Test
[lili@hadoop102 kafka]$ bin/kafka-consumer-perf-test.sh --zookeeper hadoop102:2181 --topic test --fetch-size 10000 --messages 10000000 --threads 1
Notes:
--zookeeper specifies the ZooKeeper connection string.
--topic specifies the topic name.
--fetch-size specifies the amount of data fetched per request.
--messages is the total number of messages to consume.
Note: for the consumer test, if none of the four resources (I/O, CPU, memory, network) can be improved, consider increasing the number of partitions to raise performance.
Output:
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
2021-07-31 07:23:43:903, 2021-07-31 07:23:49:424, 19.0735, 3.4547, 200000, 36225.3215
Interpretation: the columns are the test start time, the test end time, the total data consumed (19.0735 MB), the throughput (3.4547 MB/sec), the total number of messages consumed (200,000), and the average number of messages consumed per second (36225.3215).