Rate limiting approaches
| Approach | Advantages | Disadvantages |
| --- | --- | --- |
| client id | Simple and convenient | A client id can only be used by one producer instance at a time, so only a single concurrent producer |
| user | Multiple producers can run at the same time; can be combined with client id; a username and password can be set, which adds some security, although where the credentials are stored is easily exposed | Requires enabling security authentication on Kafka, which increases deployment complexity |
Rate limiting based on client id
Usage
./bin/kafka-configs.sh --bootstrap-server xxxx:9092 --alter --add-config 'producer_byte_rate=10240,consumer_byte_rate=10240,request_percentage=200' --entity-type clients --entity-name test_lz4_10m_client
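The quota only applies to clients that send the matching client.id. As a quick check, the --describe form of kafka-configs.sh shows the stored quota, and the console producer can be pointed at the same client id (the topic name test_topic below is just a placeholder):

```bash
# Confirm the quota stored for this client id
./bin/kafka-configs.sh --bootstrap-server xxxx:9092 --describe \
  --entity-type clients --entity-name test_lz4_10m_client

# Any producer that sets client.id=test_lz4_10m_client is throttled by the quota
./bin/kafka-console-producer.sh --bootstrap-server xxxx:9092 --topic test_topic \
  --producer-property client.id=test_lz4_10m_client
```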
Rate limiting based on user
Kafka authentication mechanisms
- SASL/GSSAPI (Kerberos) - starting at version 0.9.0.0
- SASL/PLAIN - starting at version 0.10.0.0 (changes require a broker restart to take effect)
- SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512 - starting at version 0.10.2.0 (users can be added dynamically)
- SASL/OAUTHBEARER - starting at version 2.0
Among these, SASL/GSSAPI (Kerberos) is probably the best fit for production environments, but I am not running Kerberos yet, so this post only explores SASL/PLAIN and SASL/SCRAM-SHA-256.
Rate limiting with SASL_PLAINTEXT/PLAIN user authentication
Broker configuration
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
listeners=SASL_PLAINTEXT://xxxxxx:9092
advertised.listeners=SASL_PLAINTEXT://xxxxxx:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
super.users=User:admin
Create kafka_server_jaas.conf
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_test="123456";
};
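In this file, the username/password pair is what the broker itself uses for inter-broker connections, while each user_<name>="<password>" entry defines an account that clients can log in with (here admin and test). A hypothetical extra account alice would be added as another user_ entry, for example:

```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_test="123456"
    user_alice="alice-secret";
};
```

Note that with SASL/PLAIN this list is static, so adding or changing users requires a broker restart, as mentioned in the mechanisms list above.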
Point the startup script at kafka_server_jaas.conf
vim kafka-server-start.sh
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS -Djava.security.auth.login.config=/data/kafka_2.13-2.7.1/config/kafka_server_jaas.conf kafka.Kafka "$@"
Add producer and consumer configuration files
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="test"
    password="123456";
};
Then modify kafka-console-producer.sh and kafka-console-consumer.sh to load the corresponding JAAS files:

exec $(dirname $0)/kafka-run-class.sh -Djava.security.auth.login.config=/data/kafka_2.13-2.7.1/config/producer_jaas.conf kafka.tools.ConsoleProducer "$@"
exec $(dirname $0)/kafka-run-class.sh -Djava.security.auth.login.config=/data/kafka_2.13-2.7.1/config/consumer_jaas.conf kafka.tools.ConsoleConsumer "$@"
Usage
./bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=xxxxxx:2181/kafka27 --add --allow-principal User:test --operation Read --topic test_se
./bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=xxxxxx:2181/kafka27 --add --allow-principal User:test --operation Read --group test-group
./bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=xxxxxx:2181/kafka27 --add --allow-principal User:test --operation Write --topic test_se
./bin/kafka-console-producer.sh --bootstrap-server xxxxxx:9092 --topic test_se --producer-property security.protocol=SASL_PLAINTEXT --producer-property sasl.mechanism=PLAIN
./bin/kafka-console-consumer.sh --bootstrap-server xxxxxx:9092 --from-beginning --topic test_se --consumer.config ./config/console_consumer.conf
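The point of adding authentication here is to hang the quota off the user principal rather than the client id. A sketch mirroring the quota command used later in the SCRAM section (10485760 bytes/s = 10 MB/s):

```bash
# Per-broker producer quota for the authenticated user "test"
./bin/kafka-configs.sh --zookeeper xxxxxx:2181/kafka27 --alter \
  --add-config 'producer_byte_rate=10485760' \
  --entity-type users --entity-name test
```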
Rate limiting with SASL_PLAINTEXT/SCRAM user authentication
Broker configuration
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listeners=SASL_PLAINTEXT://xxxxxx:9092
advertised.listeners=SASL_PLAINTEXT://xxxxxx:9092
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
Create kafka-broker-scram.jaas
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin";
};
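One caveat worth noting: because inter-broker traffic now authenticates with SCRAM as well, the admin user's SCRAM credentials should be created in ZooKeeper before the brokers are started with this configuration, using the same kafka-configs.sh form as the user-creation step further down:

```bash
# Register SCRAM-SHA-256 credentials for the inter-broker "admin" user in ZooKeeper
./bin/kafka-configs.sh --zookeeper xxxxxx:2181/kafka27 --alter \
  --add-config 'SCRAM-SHA-256=[password=admin]' \
  --entity-type users --entity-name admin
```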
Specify the location of kafka-broker-scram.jaas
vim kafka-server-start.sh
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS -Djava.security.auth.login.config=/data/kafka_2.13-2.7.1/config/auth/kafka-broker-scram.jaas kafka.Kafka "$@"
Add producer and consumer configuration files
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="test" password="123456";
Authorization
./bin/kafka-configs.sh --zookeeper xxxxxx:2181/kafka27 --alter --add-config 'SCRAM-SHA-256=[password=123456]' --entity-type users --entity-name test
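To verify that the credentials were written, the user entity can be described (a verification step, not part of the original walkthrough):

```bash
# Shows the SCRAM mechanisms and iteration counts stored for user "test"
./bin/kafka-configs.sh --zookeeper xxxxxx:2181/kafka27 --describe --entity-type users --entity-name test
```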
./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=xxxxxx:2181 --add --allow-principal User:"test" --consumer --topic 'test_topic' --group '*'
./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=xxxxxx:2181 --add --allow-principal User:"test" --producer --topic 'test_topic'
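The granted ACLs can be listed before testing the clients:

```bash
# Show the ACLs currently attached to the topic
./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=xxxxxx:2181 --list --topic 'test_topic'
```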
./bin/kafka-console-producer.sh --broker-list xxxxxx:9092 --topic test_scram --producer.config config/auth/producer-scram.conf
./bin/kafka-console-consumer.sh --bootstrap-server xxxxxx:9092 --topic test_scram --consumer.config config/auth/consumer-scram.conf
Rate limiting
./bin/kafka-configs.sh --zookeeper xxxxxx:2181/kafka27 --alter --add-config 'producer_byte_rate=10485760' --entity-type users --entity-name test
./bin/kafka-configs.sh --zookeeper xxxxxx:2181/kafka27 --alter --add-config 'producer_byte_rate=10485760' --entity-type clients --entity-name clientA
The quota here appears to apply per broker at 10 MB/s; because the test topic has 3 partitions spread across three brokers, the overall cap should be roughly 30 MB/s(?).
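The comparison table at the top notes that the user and client id approaches can be combined. A sketch of such a combined quota (reusing the entity names above), which throttles clientA only when it authenticates as user test:

```bash
# Quota scoped to the (user=test, client.id=clientA) pair
./bin/kafka-configs.sh --zookeeper xxxxxx:2181/kafka27 --alter \
  --add-config 'producer_byte_rate=10485760' \
  --entity-type users --entity-name test \
  --entity-type clients --entity-name clientA
```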
Attached is a comparison chart of Kafka compression types (no CPU comparison).