Standalone Kafka SASL_SCRAM Encryption and Authentication on Windows

The SASL family of Kafka's security mechanisms includes SASL/PLAIN, SASL/GSSAPI, and SASL/SCRAM. This post records how to set up and configure a standalone SASL_SCRAM environment on Windows.
1. Background

SCRAM (Salted Challenge Response Authentication Mechanism) is one of the SASL mechanisms Kafka supports. It addresses the security weaknesses of traditional username/password mechanisms such as PLAIN and DIGEST-MD5. Kafka's default SCRAM implementation stores SCRAM credentials in ZooKeeper, and it supports SCRAM-SHA-256 and SCRAM-SHA-512, where SHA-256 and SHA-512 are the underlying hash algorithms. All references to ZooKeeper below mean the ZooKeeper bundled with Kafka.
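To make the "credentials, not passwords" point concrete, here is a sketch of the SCRAM-SHA-256 key derivation defined in RFC 5802/7677, which is what Kafka computes before storing anything in ZooKeeper. The salt and iteration count below are illustrative values, not ones a real broker would generate.

```python
import hashlib
import hmac

def scram_sha256_keys(password: str, salt: bytes, iterations: int):
    # SaltedPassword := PBKDF2-HMAC-SHA-256(password, salt, iterations)
    salted = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    # ClientKey := HMAC(SaltedPassword, "Client Key")
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    # StoredKey := H(ClientKey) -- the server keeps this, never the password
    stored_key = hashlib.sha256(client_key).digest()
    # ServerKey := HMAC(SaltedPassword, "Server Key")
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return stored_key, server_key

# Example with made-up salt/iterations for the admin/admin user used below
stored_key, server_key = scram_sha256_keys("admin", b"example-salt", 4096)
print(stored_key.hex())
```

Because only the salted, hashed keys are stored, a leaked ZooKeeper node does not directly reveal the password.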
2. Prerequisites

1: JAVA_HOME
The JAVA_HOME environment variable must be set, the Java version must be 1.8 or later, and the JDK path must contain no Chinese characters or spaces.
2: Kafka deployment
Download and extract Kafka into a directory whose path contains no Chinese characters or spaces, and get a plain (non-SASL) Kafka environment working first; see any guide to setting up Kafka on Windows.
3. JAAS Configuration

3.1 ZooKeeper configuration
3.1.1 Change zookeeper.properties in the config directory to:
```properties
dataDir=./data/zookeeper
clientPort=2181
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
zookeeper.sasl.client=true
```
3.1.2 Create kafka_zoo_jaas.conf in the config directory with the following content:
```conf
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="admin"
    user_admin="admin";
};
```
3.1.3 Set KAFKA_OPTS in bin/windows/zookeeper-server-start.bat; the modified file reads:
```bat
@echo off
IF [%1] EQU [] (
    echo USAGE: %0 zookeeper.properties
    EXIT /B 1
)

SetLocal
IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] (
    set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../config/log4j.properties
)
IF ["%KAFKA_HEAP_OPTS%"] EQU [""] (
    set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
)
set KAFKA_OPTS=-Djava.security.auth.login.config=E:/demo/kafka/SASL_SCRAM/broker/2181/kafka_2.12-1.1.1/config/kafka_zoo_jaas.conf
"%~dp0kafka-run-class.bat" org.apache.zookeeper.server.quorum.QuorumPeerMain %*
EndLocal
```
3.1.4 Start ZooKeeper
Run the command below; if the startup log shows ZooKeeper binding to port 2181 without errors, it has started successfully.
```shell
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
```
3.2 Create SCRAM credentials
3.2.1 Create the credentials
ZooKeeper must already be running for this step. kafka_zoo_jaas.conf defined the user admin with password admin; the following command creates SCRAM credentials for that user:
```shell
.\bin\windows\kafka-configs.bat --zookeeper 192.168.40.150:2181 --alter --add-config "SCRAM-SHA-256=[password=admin],SCRAM-SHA-512=[password=admin]" --entity-type users --entity-name admin
```
3.2.2 View the credentials
```shell
.\bin\windows\kafka-configs.bat --zookeeper 192.168.40.150:2181 --describe --entity-type users --entity-name admin
```
3.3 Configure the Kafka server
3.3.1 Create kafka_server_jaas.conf in the config directory with the following content:
```conf
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="admin";
};
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin"
    user_admin="admin";
};
```
The Client section is used by the broker to authenticate to ZooKeeper; the KafkaServer section is used for broker-to-broker communication and for authenticating clients.
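The KafkaServer JAAS section above has a direct counterpart on the client side. As a sketch, here are the equivalent connection settings expressed as keyword arguments for the kafka-python client (one possible client library; confluent-kafka would use similar settings). Building the dict does not contact the broker, so it can be prepared before the cluster is up.

```python
# Client-side SASL/SCRAM settings matching the broker configured in this post.
# The address, user, and password come from the tutorial's examples.
sasl_config = {
    "bootstrap_servers": "192.168.40.150:9091",
    "security_protocol": "SASL_PLAINTEXT",  # SASL auth over a plaintext socket
    "sasl_mechanism": "SCRAM-SHA-256",      # must be in sasl.enabled.mechanisms
    "sasl_plain_username": "admin",         # the user created with kafka-configs
    "sasl_plain_password": "admin",
}
# Later: KafkaProducer(**sasl_config) once the broker is running.
print(sasl_config["sasl_mechanism"])
```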
3.3.2 Modify server.properties; the modified file reads:
```properties
############################# Server Basics #############################
broker.id=2
port=9091
host.name=192.168.40.150
advertised.port=9091
advertised.host.name=192.168.40.150

############################# Socket Server Settings #############################
listeners=SASL_PLAINTEXT://192.168.40.150:9091
advertised.listeners=SASL_PLAINTEXT://192.168.40.150:9091

# SASL settings
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256,SCRAM-SHA-512

num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

############################# Log Basics #############################
log.dirs=./tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################
#log.flush.interval.messages=10000
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################
log.retention.hours=168
#log.retention.bytes=1073741824
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

############################# Zookeeper #############################
zookeeper.connect=192.168.40.150:2181
zookeeper.set.acl=true
zookeeper.connection.timeout.ms=60000

############################# Group Coordinator Settings #############################
group.initial.rebalance.delay.ms=0
```
Here the transport is SASL_PLAINTEXT (SASL authentication over an unencrypted connection), SCRAM-SHA-256 is chosen as the inter-broker authentication mechanism, and both SCRAM-SHA-256 and SCRAM-SHA-512 are enabled for clients.
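One easy mistake in the SASL settings above is naming an inter-broker mechanism that is not in sasl.enabled.mechanisms, which leaves brokers unable to authenticate to each other. The small check below (illustrative, not part of Kafka) parses the relevant fragment of server.properties and verifies the consistency:

```python
# Parse a fragment of the server.properties shown above and confirm that
# the inter-broker SASL mechanism is among the enabled mechanisms.
server_props_text = """
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256,SCRAM-SHA-512
"""

props = {}
for line in server_props_text.strip().splitlines():
    if line and not line.startswith("#"):
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()

enabled = props["sasl.enabled.mechanisms"].split(",")
inter_broker = props["sasl.mechanism.inter.broker.protocol"]
assert inter_broker in enabled, "brokers could not authenticate to each other"
print("inter-broker mechanism", inter_broker, "is enabled")
```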
3.3.3 Modify bin/windows/kafka-server-start.bat
Pass the kafka_server_jaas.conf created above as a JVM parameter; the modified file reads:
```bat
@echo off
IF [%1] EQU [] (
    echo USAGE: %0 server.properties
    EXIT /B 1
)

SetLocal
set KAFKA_OPTS=-Djava.security.auth.login.config=E:/demo/kafka/SASL_SCRAM/broker/2181/kafka_2.12-1.1.1/config/kafka_server_jaas.conf
IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] (
    set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../config/log4j.properties
)
IF ["%KAFKA_HEAP_OPTS%"] EQU [""] (
    rem detect OS architecture
    wmic os get osarchitecture | find /i "32-bit" >nul 2>&1
    IF NOT ERRORLEVEL 1 (
        rem 32-bit OS
        set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
    ) ELSE (
        rem 64-bit OS
        set KAFKA_HEAP_OPTS=-Xmx1G -Xms1G
    )
)
"%~dp0kafka-run-class.bat" kafka.Kafka %*
EndLocal
```
3.3.4 Start the Kafka server
Start it with the command below; if the startup log reports the broker is up without errors, it has started successfully.
```shell
.\bin\windows\kafka-server-start.bat .\config\server.properties
```
3.4 Create a topic
3.4.1 Modify bin/windows/kafka-topics.bat to add the JVM parameter; the modified file reads:
```bat
@echo off
SetLocal
set KAFKA_OPTS=-Djava.security.auth.login.config=E:/demo/kafka/SASL_SCRAM/broker/2181/kafka_2.12-1.1.1/config/kafka_server_jaas.conf
"%~dp0kafka-run-class.bat" kafka.admin.TopicCommand %*
EndLocal
```
3.4.2 Create the topic
Create the topic NOTICE with the following command:
```shell
.\bin\windows\kafka-topics.bat --create --zookeeper 192.168.40.150:2181 --replication-factor 1 --partitions 1 --topic NOTICE
```
3.5 Create a producer
3.5.1 Modify producer.properties in the config directory
Keep it consistent with the changes made to server.properties and kafka_server_jaas.conf above; the modified file reads:
```properties
############################# Producer Basics #############################
bootstrap.servers=192.168.40.150:9091
compression.type=none

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
```
3.5.2 Start the producer and send messages
```shell
.\bin\windows\kafka-console-producer.bat --broker-list 192.168.40.150:9091 --topic NOTICE --producer.config .\config\producer.properties
```
3.6 Create a consumer
3.6.1 Modify consumer.properties in the config directory
The changes mirror those in producer.properties; the modified file reads:
```properties
bootstrap.servers=192.168.40.150:9091
group.id=test-consumer-group

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
```
3.6.2 Start the consumer and consume data
```shell
.\bin\windows\kafka-console-consumer.bat --bootstrap-server 192.168.40.150:9091 --topic NOTICE --consumer.config .\config\consumer.properties --from-beginning
```
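The console producer/consumer round trip above can also be driven programmatically. Below is a hedged sketch using the kafka-python client (one option among several); the imports are kept inside the functions so the file loads even where kafka-python is not installed, and actually running them requires the broker from section 3.3 to be up.

```python
# Connection settings matching the broker and credentials configured above.
SASL = dict(
    bootstrap_servers="192.168.40.150:9091",
    security_protocol="SASL_PLAINTEXT",
    sasl_mechanism="SCRAM-SHA-256",
    sasl_plain_username="admin",
    sasl_plain_password="admin",
)

def send_one(topic: str, payload: bytes) -> None:
    """Send a single message and block until the broker acknowledges it."""
    from kafka import KafkaProducer  # requires: pip install kafka-python
    producer = KafkaProducer(**SASL)
    producer.send(topic, payload).get(timeout=10)
    producer.close()

def read_from_start(topic: str, timeout_ms: int = 5000):
    """Read all available messages from the beginning of the topic."""
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        topic,
        auto_offset_reset="earliest",    # mirrors --from-beginning
        consumer_timeout_ms=timeout_ms,  # stop iterating when idle
        group_id="test-consumer-group",
        **SASL,
    )
    return [message.value for message in consumer]

# With a running broker:
# send_one("NOTICE", b"hello")
# print(read_from_start("NOTICE"))
```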